saviourmachine said:
Aah. The basic concepts about matrix multiplication etc, I know.
Good, that's all you really need to know. If you know that, then everything else is really nothing more than logic (you can write out and look at the details of that multiplication). The really important aspect of matrix multiplication (insofar as physics is concerned: i.e., the reason we will want to use them) is that it is possible to construct "anti-commuting" matrices. Anti-commutation (a*b = -b*a) allows us to establish a very valuable "logical" relationship: i.e., we can define an expression (a symbol for something), consistent with the concept of multiplication, where (ab - ba) is not zero. The true value of being able to do that is that it allows us to write some very complex relationships in a manner which appears to be simple. I don't know; could that be called the essence of "reductionism"?
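As a concrete illustration of anti-commutation, two of the Pauli matrices (a standard example from quantum mechanics, brought in here by me, not taken from the discussion above) satisfy exactly the relation a*b = -b*a. A short numerical check, sketched in Python with NumPy:

```python
import numpy as np

# Two of the Pauli matrices -- a standard pair of anti-commuting matrices.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

ab = sigma_x @ sigma_y
ba = sigma_y @ sigma_x

print(np.allclose(ab, -ba))           # a*b = -b*a: anti-commutation holds
print(not np.allclose(ab - ba, 0))    # and (ab - ba) is NOT zero
```

Both checks print True: the product anti-commutes, so the difference (ab - ba) is a nonzero matrix, which is precisely the relationship ordinary numbers cannot provide.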

It seems to me that, if "reductionism" is expressing complex phenomena in simple terms, that is exactly what using "anti-commutation" is all about. I will make that clear a little further down the road.
Actually, when Dirac showed that "matrix mechanics" and "wave mechanics" were equivalent (leading to the notion that Dirac's "bra"-"ket" notation {this, < |, is a "bra" and this, | >, is a "ket"} expressed something fundamental about reality), the real essence of the thing was the ability to encompass a mathematical expression of (ab - ba) not being zero. In Dirac's notation that would be (|a><b| - <b|a>) not being zero. In his case, the symbols look quite different and confusion with ordinary "numbers" is impossible. But, as my interest is in the difference between what "really exists" and what "we presume exists", it is the relationship and the essence of reductionism which are important, not the notation.
That brings up mathematics. Mathematics is a language just like any other language (except for the care with which mathematicians have stripped it of inconsistencies). It has its grammar, syntax and vocabulary. The importance of a mathematical expression is the wide extent of rather exact communication it allows. One should always remember that specialists in any field have a bad habit of developing "jargon" understood only by insiders (I think it's an ego-protective measure), and specialty use of mathematics is as full of "jargon" as is any language. The "proper" notation is "jargon", and learning the proper "jargon" of a field is a waste of time if one fails to comprehend the essence of the concepts which gave rise to that "jargon". Sounding like you understand things is not equivalent to understanding them.
saviourmachine said:
I did my bachelor's in electrical engineering (e.m. waves etc.). I'll say it if something is too difficult for me. I don't know a thing about Heisenberg's matrix mechanics. I forgot a lot about Schrödinger's equation. It was thrown at me in a course about semiconductor physics.
The details of Heisenberg's matrix mechanics are not important at all unless one is interested in how the ideas of quantum mechanics arose historically. True relationships are seldom recognized by accident. One is usually led to them through examination of complex representations of things already known to be true. Once one begins to comprehend the structure of some complex representation, that structure itself often turns out to be a consequence of some simple ideas (reductionism again).
Newtonian mechanics and calculus led to a long history of problem solving techniques and it was the attempt to standardize those techniques which eventually led to quantum mechanics. Just following that sequence and how ideas lead to other ideas is a fascinating study in itself. Every serious student should be taken through that development in detail just to understand how simplicity arises from complexity.
saviourmachine said:
Probability theory
Doctordick said:
Just as an aside, there is an individual out there who has some major difficulties with probability theory and is getting a reception roughly equivalent to the one I manage to generate with authorities. I have a strong suspicion his complaints are very rational.
Interesting. And that's not Stephen Jay Gould in "Full house" I guess...

Who is it? What is his/her message?
His name is ThinhVanTran. I ran across him when I was surfing the web; I believe it was a post he made on one of the scientific forums hosted by Yahoo but I could be wrong as it was quite a while ago. He has a website at the link above. Since my work is a direct consequence of careful examinations of the process of obtaining valid expectations, I wanted to know exactly what his complaint was. After all, my work is essentially making an accurate estimate of probabilities (expectations) based on information without any knowledge of what the information represents (since "what it means" has to be derived from it and nothing else). So I took the trouble to get in touch with him.
He sent me a copy of his book and I corresponded with him for a while. I read his book very carefully and came to the conclusion that he may have something. I told him that his approach was wrong and that, if he wants to get his ideas published, he should lean on the experimental data and not worry about why it's wrong. Just show the details of his calculations and his assumptions and how the results differ from reality. Finish with the correction factor as a simple phenomenological correction. If others are having the same problem, the existence of the problem will become evident and others will use the correction factor. (He might tell them in an appendix how he came up with the correction but don't make a claim that it's the only explanation.) But he has already decided he knows where the problem is and wants everyone to recognize that he is right and they are wrong (actually, that sort of sounds like what people think I am doing :rolf:).
After reading his thesis, I was satisfied that it has no bearing on my work. Essentially what he says is that there is a constraint on the calculations which the professionals are not taking into account (having to do with the finite nature of reality). That constraint is that the probability calculations must agree with the historical results. Since the universe is finite, the historical results cannot contain some of those very very improbable possibilities. This fact skews the "correct" results away from the standard probability calculations. That is, the very probable events must be slightly more probable than probability theory says they are. He has created a correction factor based on that analysis and his calculated results agree with experience. The problem is that his correction looks too much like a phenomenological correction factor for some element being left out of his calculations and that is precisely the explanation the authorities jump to.
That doesn't bother me in the least, as my whole attack is to find the consequences of requiring a "theory" to be consistent with the known information on a probabilistic basis. That is almost exactly the problem he is talking about. At any rate, I have been unable to sway him and his stuff will probably never be published. They won't publish him; instead they just tell him he is not doing his calculations correctly and that the distributions will never be exactly what he calculates anyway, as that is the nature of probability. That is, "he's not an authority and can't possibly be correct".
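The uncontroversial kernel of his claim, that a finite history cannot contain every possibility, is easy to illustrate. Here is a toy sketch (my own construction, not his calculation, and it says nothing about whether his correction factor is warranted):

```python
import random

random.seed(1)

n_outcomes = 10_000   # equally likely possibilities, each with probability 1e-4
n_history = 1_000     # a finite "history", smaller than the space of possibilities

# Draw the history and record which possibilities actually occurred in it.
seen = {random.randrange(n_outcomes) for _ in range(n_history)}
missing = n_outcomes - len(seen)

# Most possibilities never occur: the historical record necessarily assigns
# them a frequency of zero, so renormalizing over observed outcomes alone
# makes the observed ones look more probable than the theoretical values.
print(missing > 0)   # True
```

Since only 1,000 draws are made from 10,000 possibilities, at least 9,000 possibilities are guaranteed to be absent from the history, regardless of the seed.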
Dot product
saviourmachine said:
I am clueless about what you're doing over here. You defined a universal function G, linking a list of input numbers with a list of results.
No, I didn't define any function at all. What I am doing is defining a way of representing a function (a notation or a symbol for a function). The central issue being that any and all conceivable functions can be so represented. The notation puts no constraint whatsoever on the function under discussion; the function itself is undefined, it is an unknown. "A is a function of B" means nothing more or less than the fact that, if B is known, A is known. Since anything can be represented by a set of numbers (a set of labels), both A and B can be seen as a set of numbers no matter what they are. In order to represent something significant, they must be properly defined; but, what is important here is that by approaching the issue in an abstract manner, we can put off the definition until later. The function is nothing except the answer to the question: if I have a specific B, what A do I have? The function (that specific answer) is "an unknown"; something we would like to know. It is thus a valid abstract representation of any question and its answer. Now don't confuse the words "valid" and "useful"; I said it was valid but I didn't say anything about its usefulness other than the accuracy of the abstract concept itself.
saviourmachine said:
You defined its adjoint. Okay. And now you're defining a dot product of these functions. Does that have any meaning?
The process I am describing has only one purpose. The purpose is to define a universal representation of a procedure which will convert any arbitrary function into a function where A (the result) is a positive definite number. I do that because I want to express "expectations" (what I expect to be true). The point is that my expectations constitute something which can be represented by a probability: a positive definite number between zero and one. Except for magnitude (which is just a measure of size) the dot product I have defined always qualifies.
What I have shown is that any specific answer to any question which can be answered via a probability weighted yes/no answer can be represented by the dot product of \vec{\Psi} with its complex conjugate. That is, if a method of obtaining the answer exists, that method is a member of the set of all possible functions. Obviously, if it isn't a member of the set of all possible functions, the method doesn't exist. If a method of answering the question doesn't exist, the question cannot be answered via any attack. What we are talking about is the problem of selecting the correct answer from the collection of all possible answers.
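A minimal numerical check of the positive-definite claim; the vector here is an arbitrary stand-in of my own for the output of the unknown function (Python/NumPy assumed):

```python
import numpy as np

# Arbitrary complex vector standing in for the unknown function's output.
G = np.array([1.5 - 2.0j, 0.25j, -3.0 + 1.0j])

# Dot product with the adjoint: the sum of each component times its complex
# conjugate.  The result is always a real, non-negative number.
value = np.vdot(G, G)                  # = sum(conj(G) * G)
print(abs(value.imag) < 1e-12)         # purely real
print(value.real >= 0.0)               # non-negative (zero only if G is zero)
```

Both checks print True for any complex vector whatsoever, which is the whole point: no constraint on G is needed for the dot product to qualify as a candidate probability (up to magnitude).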
saviourmachine said:
And subsequently taking a volume integral.
1 \geq \frac{ \int \int \cdots \int \vec{G}^\dagger \cdot \vec{G} \, d^n x }{ \int \int \cdots \int \vec{G}^\dagger \cdot \vec{G} \, d^n x } \geq 0
Note that the structure of the numerator and the denominator are exactly the same. The only thing which makes them different is the comment: "so long as the denominator is summed (or integrated) over all possibilities". What range the numerator is to be summed (or integrated) over is left open. Since the dot product is positive definite, the sum (or integral) is a monotonically increasing real number no matter how the sum (or integral) is done. The expression can be interpreted as a probability of various "B's" (that collection of labels which define a specific answer). If that sum (or integral) is over all possibilities, the result is exactly one (the standard constraint on "probability").
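A discrete sketch of that ratio, with random complex vectors of my own standing in for \vec{G}: any sub-range in the numerator lands between zero and one, and the full range gives exactly one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete stand-in for G: one complex vector per possibility (50 of them).
G = rng.standard_normal((50, 3)) + 1j * rng.standard_normal((50, 3))

# G-dagger dot G at each possibility: a positive definite number.
density = np.einsum('ij,ij->i', np.conj(G), G).real
denominator = density.sum()            # summed over ALL possibilities

ratio_partial = density[:10].sum() / denominator   # numerator over a sub-range
ratio_full = density.sum() / denominator           # numerator over everything

print(0.0 <= ratio_partial <= 1.0)     # True: always between zero and one
print(np.isclose(ratio_full, 1.0))     # True: all possibilities give exactly one
```

Because each term of the "density" is positive definite, partial sums grow monotonically toward the denominator, which is what licenses the probability interpretation.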
saviourmachine said:
Does that mean anything? Or are they conventional mathematical tricks that always apply?
Not really. What I am doing is laying out a specific procedure for creating a functional relationship which can always be interpreted as a probability. What is important here is that no constraints of any kind have been placed on the underlying functional relationship (that unknown \vec{\Psi}). Again, if a method for obtaining those expectations exists, then a \vec{\Psi} which will yield them must exist. Actually, what I have just given you is not really a proof of that assertion; however, it is not difficult to construct a proof that the assertion is true. If you want the proof, let me know and I will lay it out for you in detail.
saviourmachine said:
Recapitulation. Taking into consideration the table C we talked about: G maps the B's in that table to another table with the same number of entries, but with only two columns (the real and imaginary parts). The dot product between G and \vec{G}^\dagger leads us to another table with one column. This column is integrated n times, each time over one of its (n) elements.
I have a suspicion that you are a little confused. I am talking about two very different things here. The set C and its members B constitute what we want to explain. What I want to avoid doing is defining the elements of B, as I want those definitions to be the best possible in light of that explanation which I do not yet have. That is why I am working in the abstract. I want to use numerical labels for those elements because I have a lot of those labels and they don't necessarily carry any inherent meaning. Notice that any meaning attached to the elements of B must be communicated via C anyway, so there exists no reason to preemptively assign any meanings. Assigning a meaning is tantamount to claiming you know what you are talking about. Until you think you understand the problem and have some kind of expectations, definition is pretty much a waste of time.
On the other hand, mathematics is a fairly well defined language. I can lay out specific procedures for manipulating numbers with a very strong assurance that the reader will obtain exactly the same results from that manipulation as I do. If knowledge of C (which is, by definition, a finite collection of B's) provides us with the information necessary to specify our expectations for any specific B, then that knowledge will allow us to obtain those expectations from the labels which specify that B. That is, a function exists which will yield that result. That function must be a member of "all possible functions", so it must be representable by that unknown expression we are referring to as \vec{\Psi}.
saviourmachine said:
Psi function
Doctordick said:
It follows that, if one defines the function \vec{\Psi} via
\vec{\Psi}(\vec{x},t) \equiv \frac{ \vec{G}(\vec{x},t) }{ \sqrt{ \int \vec{G}^\dagger \cdot \vec{G} \, dv } }
we can "define" the probability of the B_j to be given by
P(\vec{x},t) = \vec{\Psi}^\dagger(\vec{x},t)\cdot\vec{\Psi}(\vec{x},t)\,dv
where dv \equiv d^n x.
Ah, there we have our old familiar P again. I don't know how you did achieve that.
It is nothing more or less than exactly what I said above. \vec{\Psi} is a magnitude-adjusted version of \vec{G}, our unknown function. The dot product changes that into a simple positive definite number, and the division by the sum (or integral) over \vec{G}^\dagger \cdot \vec{G} guarantees that, when we sum (or integrate) our probability over all possibilities (that is, sum or integrate the numerator), we get exactly one. You have to take a square root because the factor comes into the calculation of probability twice: once from \vec{\Psi}(\vec{x},t) and a second time from \vec{\Psi}^\dagger(\vec{x},t).
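A discrete sketch of that normalization; a finite table of complex vectors (my own stand-in) plays the role of \vec{G}:

```python
import numpy as np

rng = np.random.default_rng(42)

# Finite stand-in for the unknown G: vector values at 100 points.
G = rng.standard_normal((100, 2)) + 1j * rng.standard_normal((100, 2))

# Psi = G / sqrt( sum over all points of G-dagger dot G ).
# The square root appears because Psi enters the probability twice.
norm = np.sqrt(np.einsum('ij,ij->', np.conj(G), G).real)
Psi = G / norm

# P at each point is Psi-dagger dot Psi; over all points it sums to one.
P = np.einsum('ij,ij->i', np.conj(Psi), Psi).real
print(np.all(P >= 0))              # every P is positive definite
print(np.isclose(P.sum(), 1.0))    # and the total is exactly one
```

Dividing by the square root of the total, rather than the total itself, is what makes the twice-appearing factor come out right: Psi-dagger dot Psi then carries the full factor of 1/norm² needed for the probabilities to sum to one.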
saviourmachine said:
What kind of value is the denominator?
The denominator is an unknown number. It cannot be known until we establish exactly what that unknown G is. Remember, the output of G is defined to be a list of numbers, and the dot product is defined to be the sum of the members of that list multiplied by their complex conjugates (which guarantees the result will be a sum of positive numbers: i.e., a real, non-negative number). Since G can be any function, problems could possibly arise from the fact that the resultant number could be zero or infinity, but these are easily argued away as not really causing any difficulties at all. Again, if you need to have that demonstrated, I will do so in detail.
saviourmachine said:
Rewriting the psi function
And this is quite difficult for me too. Is this matrix mechanics?
No, it is just simple calculus. I am merely asserting that the solutions I quote are completely equivalent to the relationships developed earlier in terms of the probability. I then prove that statement by substituting the dot product for the probability and working out the differential via the chain rule. In order to do that, I have to know what the differential of the complex conjugate is. That is why I wrote them down specifically. The definition of the complex conjugate is nothing more than the original expression where all appearances of the imaginary number i are replaced with -i. Since the solutions I am asserting are complex entities, I need to know what the complex conjugates of the expressions are. The issue here is that requiring the differential of the probability to be zero is equivalent to requiring the differential of \vec{\Psi} to be proportional to i times the original function. When the chain rule is expanded out, the added terms cancel out.
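The cancellation can be written out in one line. As a sketch of the step described above (the real constant \alpha is my own notation for the proportionality), suppose
\frac{\partial \vec{\Psi}}{\partial t} = i\alpha\,\vec{\Psi}.
Replacing i with -i gives the conjugate relation,
\frac{\partial \vec{\Psi}^\dagger}{\partial t} = -i\alpha\,\vec{\Psi}^\dagger,
and the product (chain) rule applied to P = \vec{\Psi}^\dagger \cdot \vec{\Psi} yields
\frac{\partial P}{\partial t} = \frac{\partial \vec{\Psi}^\dagger}{\partial t}\cdot\vec{\Psi} + \vec{\Psi}^\dagger\cdot\frac{\partial \vec{\Psi}}{\partial t} = -i\alpha\,\vec{\Psi}^\dagger\cdot\vec{\Psi} + i\alpha\,\vec{\Psi}^\dagger\cdot\vec{\Psi} = 0.
The two terms cancel exactly, which is why a differential proportional to i times the original function leaves the probability unchanged.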
saviourmachine said:
It's difficult for me to follow this, but I hope that I lack only a few basic physical or mathematical concepts.
I suspect that the biggest problem is that you are unfamiliar with the expressions I am writing down and you think there is supposed to be more than the obvious: i.e., you don't understand where I am going so the steps don't seem to be meaningful. If you still have questions about anything I have put down, please let me know. If all this makes sense to you, I will establish the final two steps and then pull all the diverse threads together.
I hope I have not run you off – Dick