martinbn said:
@Auto-Didact I know you that you've already written a long post, but can you be a bit more specific. You are using a lot of phrases that I personally find hard to guess what they mean.
Good questions; I will try to answer each of them in a manner understandable to as wide an audience as possible. This means I will use the most elementary notation, which everyone who has taken calculus should be able to recognize.
martinbn said:
For example what is a canonical form based on symplectic geometric formulation
This is
the key question, so I will spend the most time on it: stated simply in words first, it means that one can do geometry and calculus in an extended phase space, which topologically is a very special kind of manifold. I will illustrate this using analytical mechanics as a case study, with phase space trajectory ##\Gamma(t)##, Hamiltonian ##H(q,p,t)## and action principle ##\delta S[\Gamma] = \delta \int_{t_0}^{t_1} L(q,\dot q, t)\, dt = 0##:
Let $$H=p\dot q- L \Leftrightarrow L=p\dot q- H$$It then immediately follows that $$\begin{align}
\delta \int_{t_0}^{t_1} L dt & = \delta \int_{t_0}^{t_1} (p \frac {dq}{dt}-H) dt \nonumber \\
& = \delta \int_{t_0}^{t_1} (p \frac {dq}{dt}-H\frac {dt}{dt}) dt \nonumber \\
& = \delta \int_{t_0}^{t_1} (p \frac {dq}{dt}+0\frac {dp}{dt}-H\frac {dt}{dt}) dt \nonumber \\
\end{align}$$where obviously ##L = p \frac {dq}{dt}+0\frac {dp}{dt}-H\frac {dt}{dt}##.
Now group the terms on the RHS of this expression for ##L## into two vectors, namely: $$\vec X = (p, 0, -H) \text { and } \vec Y = \left(\frac {dq}{dt},\frac {dp}{dt},\frac {dt}{dt}\right)$$It then immediately follows that $$L = \vec X \cdot \vec Y$$Now we can continue our earlier train of thought: $$\begin{align}
\delta \int_{t_0}^{t_1} L dt & = \delta \int_{t_0}^{t_1} (p \frac {dq}{dt}+0\frac {dp}{dt}-H\frac {dt}{dt}) dt \nonumber \\
& = \delta \int_{t_0}^{t_1} \vec X \cdot \vec Y dt \nonumber \\
& = \delta \int_{\Gamma} \vec X \cdot d \vec {\Gamma} = 0 \nonumber \\
\end{align}$$ where the last equation is a line integral along the phase space trajectory ##\Gamma##.
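The rewriting of the action as a line integral can be checked numerically; here is a minimal sketch using the harmonic oscillator with ##m = \omega = 1## (an arbitrary choice of test system, not anything assumed above):

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule for a sampled curve (works for any parametrization)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

# Harmonic oscillator with m = omega = 1: q = cos t, p = -sin t.
t = np.linspace(0.0, 2.0 * np.pi, 100001)
q, p = np.cos(t), -np.sin(t)
qdot = -np.sin(t)                      # dq/dt along the trajectory (= p here)
H = 0.5 * (p**2 + q**2)                # Hamiltonian
L = p * qdot - H                       # Lagrangian via L = p dq/dt - H

# Ordinary time integral of the Lagrangian...
action = trapezoid(L, t)

# ...versus the line integral of X = (p, 0, -H) along Gamma in (q, p, t):
# int_Gamma X . dGamma = int (p dq + 0 dp - H dt)
line_integral = trapezoid(p, q) - trapezoid(H, t)

assert abs(action - line_integral) < 1e-6
```

Both numbers agree to numerical precision, as they must, since the two integrands are equal term by term.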
Now one may say I just did a bit of algebra and rewrote things and yes, that actually is trivially true. However the more important question naturally arises:
are the vector fields ##\vec X## and ##\vec Y## simply mathematics or are they physics? The answer: they are physics, more specifically they are properties of analytical mechanics in phase space, with ##\vec Y## being the displacement vector field in phase space.
More specifically, what is ##\vec X##? Remember that these are vectors in a 3 dimensional space ##(q,p,t)##. So let's just do some vector calculus on it; specifically, take the curl of ##-\vec X##: $$ \nabla \times (-\vec X) =
\begin{vmatrix}
\hat {\mathbf q} & \hat {\mathbf p} & \hat {\mathbf t} \\
\frac {\partial}{\partial q} & \frac {\partial}{\partial p} & \frac {\partial}{\partial t} \\
-X_q & -X_p & -X_t
\end{vmatrix} = \left(\frac {\partial H}{\partial p},- \frac {\partial H}{\partial q}, 1\right)$$ Now if you haven't seen the miracle occur yet, squint your eyes and look at the last part ##\nabla \times (-\vec X) = (\frac {\partial H}{\partial p},- \frac {\partial H}{\partial q}, 1)##. More explicitly, recall ##\vec Y = (\frac {dq}{dt},\frac {dp}{dt},\frac {dt}{dt})##. Along a physical trajectory it then immediately follows that $$\left(\frac {dq}{dt},\frac {dp}{dt},\frac {dt}{dt}\right) = \left(\frac {\partial H}{\partial p},- \frac {\partial H}{\partial q}, 1\right)$$ or more compactly that ##\vec Y = - \nabla \times \vec X##.
These are Hamilton's equations! In other words, ##- \vec X## is the vector potential of ##\vec Y##. There is even a gauge choice here making ##X_q = p## and ##X_p = 0##, but I will not go into that.
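The curl computation is easy to verify symbolically; a minimal sketch in sympy with a generic Hamiltonian ##H(q,p,t)##:

```python
import sympy as sp

q, p, t = sp.symbols('q p t')
H = sp.Function('H')(q, p, t)    # generic Hamiltonian

# Components of -X = (-p, 0, H) in the extended phase space (q, p, t)
Fq, Fp, Ft = -p, sp.Integer(0), H

# Curl computed component-wise in the coordinates (q, p, t)
curl = (sp.diff(Ft, p) - sp.diff(Fp, t),   # q-component
        sp.diff(Fq, t) - sp.diff(Ft, q),   # p-component
        sp.diff(Fp, q) - sp.diff(Fq, p))   # t-component

# Expect Hamilton's equations: (dH/dp, -dH/dq, 1)
assert curl == (sp.diff(H, p), -sp.diff(H, q), sp.Integer(1))
```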
Now Hamilton's principle - i.e. that ##\delta \int_{t_0}^{t_1} L\,dt = 0## - follows naturally from Stokes' theorem: $$ \begin{align} \delta \int_{\Gamma} \vec X \cdot d \vec {\Gamma} & = \int_{\Gamma} \vec X \cdot d \vec {\Gamma} - \int_{\Gamma'} \vec X \cdot d \vec {\Gamma'} \nonumber \\
& = \int_{\Sigma} \nabla \times \vec X \cdot d \vec {\Sigma} \nonumber \\
\end{align}$$ where ##\Sigma## is a surface bounded by the closed loop formed by the trajectory ##\Gamma## and a varied trajectory ##\Gamma'## with the same endpoints. From vector calculus we know that ##\int_{\Sigma} \nabla \times \vec X \cdot d \vec {\Sigma} = 0##, because ##\nabla \times \vec X = -\vec Y## is the phase space flow field, which is tangent to the surface ##\Sigma## swept out between the trajectories; the flux through ##\Sigma## therefore vanishes, proving that the integral vanishes.
In other words, Hamilton's principle is just an implicit consequence of ##\vec Y = - \nabla \times \vec X##, i.e. of Hamilton's equations in phase space. If ##\vec Y## has a vector potential ##- \vec X##, then ##\vec Y## is automatically a solenoidal vector field, i.e. $$\vec Y = - \nabla \times \vec X \Rightarrow \nabla \cdot \vec Y = -\nabla \cdot (\nabla \times \vec X) = 0$$demonstrating that variational principles w.r.t. the action -
indeed even the very existence of the calculus of variations as a mathematical theory - are purely a side effect of Liouville's theorem applying to Hamiltonian evolution in phase space; that, in a nutshell, is symplectic geometry in its simplest formulation.
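The solenoidal property is likewise a one-liner to check symbolically: ##\nabla \cdot \vec Y## vanishes by the equality of mixed partial derivatives. A sketch, again with a generic ##H##:

```python
import sympy as sp

q, p, t = sp.symbols('q p t')
H = sp.Function('H')(q, p, t)    # generic Hamiltonian

# Y = (dq/dt, dp/dt, dt/dt) with Hamilton's equations substituted
Y = (sp.diff(H, p), -sp.diff(H, q), sp.Integer(1))

# Divergence in the extended phase space (q, p, t)
div_Y = sp.diff(Y[0], q) + sp.diff(Y[1], p) + sp.diff(Y[2], t)

# H_pq - H_qp = 0: phase space volume is preserved (Liouville's theorem)
assert sp.simplify(div_Y) == 0
```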
Lastly, if you remove the requirement of Liouville's theorem, you leave the domain of Hamiltonian mechanics and automatically arrive at nonlinear dynamical systems theory, practically in its full glory; this is theoretical mechanics at its very finest.
martinbn said:
say what is that for the heat or the Laplace equations?
As DEs, the Laplace equation ##\nabla^2 u = 0## and the Poisson equation ##\nabla^2 u = w## are elliptic PDEs, while the heat equation ##\nabla^2 u = \alpha \frac {\partial u} {\partial t}## is parabolic; in all of them influences are effectively instantaneous (cf. action at a distance in Newtonian gravity). Carrying out the phase space analysis would take us a bit too far for now.
However it is immediately clear upon inspection of the equations that the heat equation is a more general Poisson equation, which is itself a more general Laplace equation; as stated above, this is what is meant by saying that one equation is an implicit form of a more explicit one. More generally, all of them are special instances of the more general Helmholtz equation, which has as its most explicit form $$\nabla^2 u + k^2 u = w$$It is the set of all solutions of an implicit DE, i.e. the set ##U## containing all possible functions ##u##, which decides what the explicit form of the DE is. Unfortunately, this set is typically, for quite obvious reasons, unknown.
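This nesting of equations can be made concrete symbolically; a sketch in sympy (two spatial dimensions chosen arbitrarily for illustration), showing that setting ##k = 0## in the Helmholtz equation yields the Poisson equation, and additionally setting ##w = 0## yields the Laplace equation:

```python
import sympy as sp

x, y, k = sp.symbols('x y k')
u = sp.Function('u')(x, y)       # unknown function
w = sp.Function('w')(x, y)       # source term

laplacian_u = sp.diff(u, x, 2) + sp.diff(u, y, 2)

helmholtz = sp.Eq(laplacian_u + k**2 * u, w)   # most explicit form
poisson = helmholtz.subs(k, 0)                 # k = 0 made implicit
laplace = poisson.subs(w, 0)                   # w = 0 made implicit too

assert poisson == sp.Eq(laplacian_u, w)        # Poisson: Lap(u) = w
assert laplace == sp.Eq(laplacian_u, 0)        # Laplace: Lap(u) = 0
```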
These relationships between DEs become even more obvious once one applies these same mathematical techniques to sciences other than physics, where these differential equations naturally tend to reappear in their more explicit forms. If one steps back - instead of mindlessly trying to solve the equation - an entire taxonomy of families of differential equations slowly becomes apparent. Actually, we don't even have to leave physics for this, since the Navier-Stokes equations from hydrodynamics and the geometrodynamic equations tend to rear their heads in lots of places in physics.
martinbn said:
What is the problem with having axioms such as the Born rule?
The answer should become obvious once the question is rephrased in the following manner: what is the problem with needing the Born rule - a distinctly non-holomorphic statement - in order to understand what an analytic differential equation is describing?
martinbn said:
What is an implicit form of a differential equation and why is the Schrodinger equation implicit?
An implicit form is as I stated above: the Laplace equation is an implicit form of the Poisson equation, with ##w = 0## left implicit. The Schrodinger equation is implicit in the same sense: terms belonging to its most explicit form have been implicitly set to zero (see the last answer below).
martinbn said:
Generally what makes the Schrodinger equation so different from any other to say that the theory has a problem?
There is nothing special about the Schrodinger equation; that is my point. What has a problem is orthodox QM, which consists of a mishmash of the SE (a DE) + the Born rule (ad hoc, non-analytic) + the measurement problem + etc. No other canonical physical theory has a mathematical structure in which the consequences of the theory aren't all directly derivable from the DE and the mathematics (i.e. analysis, vector calculus, differential geometry, etc.).
martinbn said:
What does it mean for a differential equation to be incomplete/complete?
It means that, stated in its implicit form, there are terms missing, i.e. terms implicitly set equal to zero and therefore seemingly not present, while when the equation is rewritten into its most explicit form the terms suddenly appear as if out of thin air: the terms were there all along, they were just hidden through simplification by having written the equation in its implicit form.
martinbn said:
And what does it mean to complete it?
It means to identify the missing terms which are implicitly set equal to zero; this is done purely mathematically, through trial-and-error algebraic reformulation, by discovering the explicit form of the equation. There is no straightforward routine way of doing it and it cannot be done by pure deduction; it is instead an art form, just like learning to handle nonlinear differential equations.
Completing an equation is a similar but not identical methodology to extending an equation; extending is often mentally more taxing, since it tends to involve completely rethinking what 'known' operators actually are conceptually, i.e. knowing basic algebra alone isn't sufficient. Extending is how Dirac was able to derive his equation purely by guesswork; he wasn't just blindly guessing, he was instead carefully intuiting the hidden Lorentzian structure in the d'Alembertian's implicit form and then boldly marching forward using nothing but analysis and algebra.
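Dirac's move can be illustrated concretely: he sought matrices ##\gamma^\mu## such that ##(\gamma^\mu \partial_\mu)^2## reproduces the d'Alembertian, i.e. a matrix square root of the Klein-Gordon operator. A numerical sketch using the standard Dirac representation (the particular four-momentum values below are arbitrary):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = offdiag(sigma_i, -sigma_i)
gammas = [np.block([[I2, Z2], [Z2, -I2]])] + \
         [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

# gamma-slash of an arbitrary four-momentum p^mu: slash(p) = gamma^mu p_mu
p = np.array([2.0, 0.3, -0.7, 1.1])
p_lower = eta @ p
slash_p = sum(p_lower[mu] * gammas[mu] for mu in range(4))

# (gamma^mu p_mu)^2 = (p^mu p_mu) I: the Klein-Gordon operator reappears
p_sq = p @ eta @ p
assert np.allclose(slash_p @ slash_p, p_sq * np.eye(4))
```

The check works because the gamma matrices satisfy the Clifford algebra ##\{\gamma^\mu, \gamma^\nu\} = 2\eta^{\mu\nu}##; the anticommutator is exactly the "hidden Lorentzian structure" referred to above.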
martinbn said:
Say why is the Schrodinger equation incomplete and why are the equations from classical physics complete?
The Schrodinger equation, once completed in the manner described above, actually takes the Madelung form with an extra term, namely the quantum potential. This is purely an effect of studying the equation as an object in the theory of differential equations and writing it in its most general form without simplifying. I'll give a derivation some other time, if deemed necessary.