Is 'Scale' the Key to Unifying Quantum Mechanics and Classical Physics?

In summary, the conversation discusses the conciliation of quantum mechanics and classical physics through the introduction of a new spatial dimension called 'scale'. The value of this dimension ranges from approaching '0' on the microscopic end to approaching '∞' on the macroscopic end. Forum member 'Sputnik' proposes that by consistently factoring the 'scale' dimension into calculations for both quantum and classical systems, a unified framework can be achieved. This would result in a probabilistic correlation between the applicability of quantum and classical principles and the value of the 'scale' dimension. The idea bears a resemblance to perturbative analysis, where equations containing a small parameter can be approximated by taking the first few terms of a Taylor expansion.
  • #1
Steven Taylor
[SOLVED] QM and Classical Mechanics

It's been a long time since I've posted here (I used to post on the forums under the name 'Sputnik'), so I don't know if anyone who remembers me is still around, but if so, hello! In any case, I recently made what seemed like a dramatic intuitive leap concerning the conciliation of quantum mechanics and classical physics, and I'd like some help further developing my ideas.

Before I go any further, I want to emphasize that I don't have any formal training in mathematics or science, so please don't be too judgmental if any of the following arguments involve gross oversimplifications or simple inaccuracies (but at the same time, please do feel free to correct such errors). My background is in philosophy, logic and rhetoric, although I like to think I have a fairly good grasp of many higher mathematical concepts, if not their applications.

All that having been said, if I were a mathematician or theoretical physicist, here's the approach I might take to fitting quantum mechanics and classical physics together within a single, coherent theoretical framework.

First, I would introduce a new spatial dimension into my considerations: it would be called 'scale' and its value would range from approaching '0' as a limit on the microscopic end of the continuum to approaching '∞' as a limit on the macroscopic end. Then I would consistently factor the dimensional variable of scale into all of my calculations, whether involving quantum-level systems or classical-level systems. (Mathematically, I would take the value of the metric of 'scale' to be constant, although the relationship between successive values for 'scale' could perhaps be expressed rationally.)

An experimental method for determining the physical meaning (if any) of such a constant might involve comparing the predictive power of mathematical models across different levels of scale to detect a statistical pattern. If the theoretical framework I'm proposing holds, the following results would obtain: at lower levels of scale (as values for scale approach '0'), quantum uncertainty could be expected to increase proportionally. Meanwhile, at higher levels of scale (as values for scale approach '∞'), the predictive power of classical models could be expected to increase proportionally.

At the quantum level, the Universe appears chaotic--even mathematically unstable. But moving up levels of scale, into the domain of classical physics, we find our mathematical models becoming increasingly accurate. It's as if, as one moves up through levels of scale, one begins to see a disorderly system coalescing into an increasingly orderly one until finally, viewed as a whole, the Universe approaches a kind of mathematical perfection. On the other hand, when viewed with increasing attention to its particulars, the Universe appears to fall steadily into disarray.

In the theoretical framework I'm proposing, there would be a probabilistic correlation between the applicability of quantum mechanical and classical physical principles and the value of the 'scale' metric for the system being modeled. At one end of the continuum, there would be complete quantum uncertainty; at the other, clockwork precision--only now, both could be seen as unified within a single, seamless whole, along the dimensional continuum of scale.
 
  • #2
I think classical mechanics begins to lose touch with reality on a really large scale as well. That's why general relativity is there, right? So I think your idea of classical mechanics becoming 'infinitely' correct as the scale grows might be a little off.

But maybe I'm just misinterpreting you.

Oh, and I think the scale idea could potentially provide a single mathematical framework for predicting experimental results at all extremes, but it wouldn't really provide much insight as to what is going on. It sounds much the same as using one theory or the other on a problem, except that the choice is hidden by the fact that the 'scale' factor selects which theory to use by making one more dominant.
 
  • #3
Your idea bears a vague resemblance to perturbative analysis.


The basic idea is that of a Taylor expansion. For example, suppose you wanted to compute values of:

f(x) = 1 / sqrt(1 - x)


If you were to compute the Taylor expansion of this function around zero, you would get the infinite series:

f(x) = 1 + x / 2 + 3 x^2 / 8 + 5 x^3 / 16 + ...


Suppose x is really small... say 10^-9. Plugging this into my calculator gives:

f(10^-9) = 1.0000000005...

For most intents and purposes, this is simply equal to 1... so the first term in our infinite sum is good enough to approximate f(x).


Suppose x isn't quite that small... say 10^-6. Then:

f(10^-6) = 1.000000500...

If we take the first two terms of the infinite sum, we get:

1 + x / 2 = 1.0000005

So the first two terms are sufficient to approximate f(x).

Suppose x is a little larger, like 10^-3. Then:

f(10^-3) = 1.00050037531...

If we take the first three terms in our series, then:

1 + x / 2 + 3 x^2 / 8 = 1.000500375

So for x in this range, the first three terms are a sufficient approximation.
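
For anyone who wants to check these numbers, here is a minimal Python sketch (not part of the original argument; it uses the fact that the coefficient of x^k in the expansion of (1 - x)^(-1/2) is binomial(2k, k) / 4^k, which reproduces the 1, 1/2, 3/8, 5/16 pattern above):

```python
import math

def f_exact(x):
    # f(x) = 1 / sqrt(1 - x)
    return 1.0 / math.sqrt(1.0 - x)

def f_taylor(x, n_terms):
    # Partial sum of the Taylor series of (1 - x)^(-1/2) about x = 0;
    # the coefficient of x^k is binomial(2k, k) / 4^k: 1, 1/2, 3/8, 5/16, ...
    return sum(math.comb(2 * k, k) / 4**k * x**k for k in range(n_terms))

# The three cases above: more terms are needed as x grows.
for x, n in [(1e-9, 1), (1e-6, 2), (1e-3, 3)]:
    print(f"x = {x:g}: exact = {f_exact(x):.12f}, "
          f"first {n} term(s) = {f_taylor(x, n):.12f}")
```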



This is enough to demonstrate the idea of a perturbative expansion. If we have a parameter in our equations that is really small, then if we make Taylor series expansions of our formulae, we just need the first couple of terms for most purposes. One real example of this is kinetic energy.

In Special Relativity, the equation for kinetic energy is

K = (1 / sqrt(1 - (v/c)^2) - 1) m c^2

If we Taylor expand this, we get:

K = (m v^2) / 2 + (3 m v^4) / (8 c^2) + ...

You might recognize the first term in this expansion:

K = (1/2) m v^2

is the Newtonian formula for kinetic energy!

When v is much smaller than c, only the first term matters; i.e. at everyday speeds, the special relativistic formula is essentially identical to the Newtonian formula!
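
To make that concrete, here is a minimal Python sketch comparing the two formulas (the mass and the sample speeds are arbitrary illustrative choices; very small speeds are skipped only because the tiny difference would drown in floating-point rounding):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def k_relativistic(m, v):
    # K = (1 / sqrt(1 - (v/c)^2) - 1) m c^2
    gamma = 1.0 / math.sqrt(1.0 - (v / C)**2)
    return (gamma - 1.0) * m * C**2

def k_newtonian(m, v):
    # K = (1/2) m v^2, the first term of the Taylor expansion
    return 0.5 * m * v**2

m = 1.0  # kg (arbitrary)
for v in [3e5, 3e6, 3e7]:  # roughly 0.001c, 0.01c, 0.1c
    rel, newt = k_relativistic(m, v), k_newtonian(m, v)
    # The fractional difference shrinks roughly like (3/4)(v/c)^2.
    print(f"v/c = {v / C:.4f}: fractional difference = {(rel - newt) / rel:.2e}")
```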


This type of behavior is actually a natural requirement on any physical theory. For ordinary systems, Newtonian mechanics has given us accurate predictions for a long time, so it stands to reason that any newer theory must give the same answers as Newtonian mechanics for the types of problems where Newtonian mechanics works. For small velocities, SR reduces to Newton. For large distances and low energies, quantum mechanics reduces to Newton.

It's not just reduction to Newton, either: for slow-moving charge distributions, Maxwellian electromagnetics reduces to electrostatics. In nearly flat space-times, General Relativity reduces to Special Relativity and/or Newtonian gravitation.


Hurkyl
 
  • #4
I think classical mechanics begins to lose touch with reality on a really large scale as well. That's why general relativity is there, right? So I think your idea of classical mechanics becoming 'infinitely' correct as the scale grows might be a little off.

But maybe I'm just misinterpreting you.

Oh, and I think the scale idea could potentially provide a single mathematical framework for predicting experimental results at all extremes, but it wouldn't really provide much insight as to what is going on. It sounds much the same as using one theory or the other on a problem, except that the choice is hidden by the fact that the 'scale' factor selects which theory to use by making one more dominant.

Thanks for the feedback. Your first point may in fact involve a slight misinterpretation of the original idea, but you still raise some excellent points. I should clarify by explaining that the idea isn't so much that classical mechanics becomes 'infinitely' correct as that the Universe's physical systems become increasingly predictable at higher levels of scale, as the dimensional variable of scale approaches infinity.

Extending the idea a little further, viewed on the highest conceivable level of scale, the Universe is simply an undifferentiated whole, and you could just as easily replace all mathematical descriptions with a single variable, 'x', whose value you could simply define as the Universe in its entirety (whatever physical meaning that might have). Obviously, there wouldn't be much practical value in such a model, but by definition it would be accurate and complete, and the behavior of the system would be utterly predictable. Your second point really gets more to what I had in mind, though, as far as any practical implications...
 
  • #5
Like you, I have no formal training, but on my home site (access through the 'members list' button) I have shown that the fractional sequence found in the Fractional Quantum Hall Experiment (by the 1998 Nobel Prize winners) can also be found in the dust bands around comet Hale-Bopp, the distances between planets, and the spacing between the arms of a theoretically ideal spiral galaxy.
This links the structure of electrons with the structure of large bodies.
If you look at the Periodic Table forum you will see a proposal by riduncan for atomic structure and my comments on its relationship with particle nuclei. This links sub-atomic particles and atoms via Pascal's Triangle.
It seems that Pascal's Triangle describes fields (particles and atoms) and the Fraction Formula describes waves and wave-related phenomena. These can be linked together through mass (i.e., density) via the Quantum Hall Experiment.
Let's all keep pursuing this concept.
 
  • #6
Again, thanks for all the excellent feedback. Hurkyl, I'm fascinated by your post on perturbative analysis... Isn't there also a concept called normalization, in which equations are, to put it crudely, simplified from the quantum mechanical level to the classical mechanical level?

I'd like to discuss some of these ideas at greater length later. But first, let me elaborate a little on what I mean by 'scale.'

I conceive of 'scale' (actually, a more descriptive term might be 'depth') as a spatial dimension with real physical meaning, a physical meaning implied by observed differences in the fundamental nature of physical law across higher- and lower-level physical systems. Various competing models and theories of the Universe could be reconciled with the recognition that physical laws do in fact operate differently across different levels of scale--not because the laws are inconsistent with one another, but because a single physical system observed at different levels of scale could quite naturally exhibit increasingly greater complexity at lower levels (after all, it stands to reason that when an observer takes an increasingly complex set of factors into their purview, that observer can expect to see an increasingly complex set of interactions).

Applying the principle of Occam's Razor, the simplest explanation for the fact that the many competing theories of the physical world can't be reconciled is that they simply can't be reconciled: in a given case, one set of rules applies; in another case, a different set applies; but in every case, the rules in effect on a lower level of scale must be reducible to simpler terms that conform with the rules in effect on a higher level, which is consistent both with experimental results and current theory (unless I am unaware of some development to the contrary).
 
Last edited by a moderator:
  • #7
Briefly, the rest of my theory goes like this: The relationship of depth to complexity (if such a correlation has real physical meaning) could be rational. The dimension of depth could be curved, so to speak. Picture it as an infinite set of interlocking, nested spheres--or better yet, layers, like an onion skin. And as you approach the center of the onion, the layers of onion skin grow thinner, but become proportionally more dense; and as you approach the outer perimeter, the layers grow thicker but are proportionally less dense.

Each unit of depth would correspond to one of the nested shells and the physical systems within each shell would operate according to a particular set of mathematical, logical, or otherwise binary relations. Perhaps at certain levels--that is, within a given shell of scale--only probabilistic or non-linear (chaotic) mathematical models could accurately replicate the behavior of the local physical systems.

The ability of mathematical models to accurately predict the behavior of physical systems at different levels of spatial depth (scale) would correlate with the physical systems' position in depth relative to an observer. The further removed in depth from the observer, the less predictable the behavior of the physical system would seem, due to information loss and observation sensitivity per Heisenberg. A sort of information entropy has to be factored in either way, due to the difficulties of observing physical systems at varying 'depths' or 'levels of scale' in the spatial continuum.
 
  • #8
What you're talking about is the idea of "Renormalisation" (not normalisation :) ). It is very important in predicting phase transitions in solids, liquids and gases. It goes a bit like this:
We know from thermodynamics that certain relations exist between entropy, energy, temperature, etc. We also know that thermodynamics is consistent with classical statistical theory. So what we want to do is set up a full microscopic quantum theory which gives the classical dynamics at the macroscopic level. To do this you need the idea of renormalization. Using this technique you can extract the classical behaviour from microscopic quantum behaviour. The inventor of this idea was K. G. Wilson. It is pretty complex!
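
The full quantum machinery is far beyond a forum post, but the flavour of renormalization can be sketched in a few lines. Below is a toy real-space ("block spin") renormalization step for the zero-field 1-D Ising chain, a standard textbook example (not the quantum theory described above): summing out every other spin maps the coupling K to K' via tanh(K') = tanh(K)^2, and iterating shows how the effective coupling changes as you move up in scale.

```python
import math

def decimate(k):
    # One real-space RG step for the zero-field 1-D Ising chain:
    # summing out every other spin gives tanh(K') = tanh(K)^2.
    return math.atanh(math.tanh(k)**2)

k = 1.0  # dimensionless coupling J / (kB * T)
for step in range(6):
    print(f"step {step}: effective coupling K = {k:.6f}")
    k = decimate(k)
# K flows toward 0: at larger and larger scales the chain looks
# more and more like a collection of independent spins.
```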
 
  • #9
Originally posted by heumpje
What you're talking about is the idea of "Renormalisation" (not normalisation :) ). It is very important in predicting phase transitions in solids, liquids and gases. It goes a bit like this:
We know from thermodynamics that certain relations exist between entropy, energy, temperature, etc. We also know that thermodynamics is consistent with classical statistical theory. So what we want to do is set up a full microscopic quantum theory which gives the classical dynamics at the macroscopic level. To do this you need the idea of renormalization. Using this technique you can extract the classical behaviour from microscopic quantum behaviour. The inventor of this idea was K. G. Wilson. It is pretty complex!

Doh! That's right, I meant renormalization (or is the British spelling 'renormalisation' preferred?)...

Really, the only point I'm trying to make is that the idea of 'scale' already seems to figure implicitly into most contemporary views of how physical laws operate, but I think the assumption tends to be that ideas about 'levels of scale' are purely theoretical. I'd like to suggest that the concept of 'scale' might be one with real, physical meaning--i.e., that when we speak of movement 'up' and 'down' through levels of scale we may be doing more than simply making an analogy to movement through space.
 
  • #10
The predictability of a statistical system increases with the number of identical components.

e.g.

1 coin toss is unpredictable.

2 coin tosses and you are twice as likely to see one head and one tail as any other outcome.

1,000,000 coin tosses and you can predict with confidence that between 40% and 60% of them will be heads.
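
A quick Monte Carlo sketch of that trend (the trial count is arbitrary, and the largest case is capped at 10,000 tosses just to keep the run fast; a million tosses only sharpens the result):

```python
import random

def heads_fraction(n_tosses, rng):
    # Fraction of heads in n fair coin tosses.
    return sum(rng.getrandbits(1) for _ in range(n_tosses)) / n_tosses

rng = random.Random(0)  # fixed seed so the run is repeatable
trials = 500
for n in [1, 2, 100, 10_000]:
    # How often does the heads fraction land between 40% and 60%?
    hits = sum(0.4 <= heads_fraction(n, rng) <= 0.6 for _ in range(trials))
    print(f"{n:>6} tosses: within 40-60% heads in {hits} of {trials} runs")
```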

The same principles apply with subatomic particles. You would expect a system of billions of particles made entirely from 3 types (protons, neutrons and electrons) to be much more predictable than one particle on its own.

I have never seen any mystery as to how quantum mechanics scales into classical mechanics.
 
  • #11
I have never seen any mystery as to how quantum mechanics scales into classical mechanics.

And I don't mean to suggest there is a mystery--exactly the opposite.

My point simply has to do with how we think about 'scale,' and how physical laws that seem fundamentally incompatible when viewed without regard to differences in complexity across levels of scale can be viewed as entirely compatible when 'scale' is taken not merely to be an arbitrarily-imposed organizational concept, but adopted as a physically meaningful feature of space--a spatial dimension, in fact.

Now to back-track for a moment:

The same principles apply with subatomic particles. You would expect a system of billions of particles made entirely from 3 types (protons, neutrons and electrons) to be much more predictable than one particle on its own.

...Which would be true if protons, neutrons and electrons were the only particles contemplated by QM, and if each of these particles behaved with absolute predictability. But that isn't how it works, as I understand it...
 
  • #12
I think you'll find that ordinary matter can be understood very well in terms of protons, neutrons and electrons, and that they don't need to behave predictably individually to become predictable as a whole.

Don't forget that as well as statistical patterns becoming more reliable with more particles, the particles also interact such that they sit in potential wells.

Perhaps this is actually the 'scale' variable you are looking for?
 

What is the difference between quantum mechanics and classical mechanics?

Quantum mechanics and classical mechanics are two different theories used to describe the behavior of matter and energy. Classical mechanics is a theory that explains the motion of macroscopic objects, such as planets and cars, while quantum mechanics is a theory that explains the behavior of subatomic particles, such as electrons and photons. The main difference between the two is that classical mechanics is based on deterministic principles, where the exact position and velocity of a particle can be predicted, while quantum mechanics is based on probabilistic principles, where the exact position and velocity of a particle cannot both be known at the same time.

How does quantum mechanics explain the behavior of particles?

Quantum mechanics explains the behavior of particles by using mathematical equations, such as the Schrödinger equation, to describe their wave-like properties. These wave-like properties include concepts such as superposition, where a particle can exist in multiple states at the same time, and entanglement, where particles can become connected and influence each other's behavior even at great distances. These concepts are not present in classical mechanics and are essential for understanding the behavior of subatomic particles.
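
As a concrete illustration, here is a minimal Python sketch of one of the simplest textbook solutions of the Schrödinger equation, the "particle in a box", whose allowed energies E_n = n^2 pi^2 hbar^2 / (2 m L^2) come in discrete levels rather than a continuum (the 1-nanometre box width is an arbitrary, roughly atomic-scale choice):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # joules per electronvolt

def box_energy(n, length):
    # E_n = n^2 * pi^2 * hbar^2 / (2 * m * L^2) for an electron
    # in a 1-D infinite square well of the given width.
    return n**2 * math.pi**2 * HBAR**2 / (2 * M_E * length**2)

L = 1e-9  # box width: 1 nanometre
for n in range(1, 4):
    print(f"n = {n}: E = {box_energy(n, L) / EV:.3f} eV")
```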

Why is quantum mechanics necessary if classical mechanics can explain macroscopic objects?

Quantum mechanics is necessary because it provides a more accurate and complete description of the behavior of matter and energy at a microscopic level. Classical mechanics breaks down when applied to extremely small particles, such as electrons, and cannot fully explain their behavior. Quantum mechanics, on the other hand, has been proven to accurately describe the behavior of these particles, making it essential for understanding the fundamental laws of nature.

How do quantum mechanics and classical mechanics relate to each other?

Quantum mechanics and classical mechanics are both valid theories that describe the behavior of matter and energy, and they are closely related. Classical mechanics can be thought of as a limiting case of quantum mechanics, applying where the particle's wave-like properties are negligible. However, in situations where the wave-like properties of particles are significant, such as at the subatomic level, classical mechanics is no longer applicable, and quantum mechanics must be used.

What are some practical applications of quantum mechanics?

Quantum mechanics has numerous practical applications in modern technology. For example, it is used in the development of transistors, lasers, and computer memory. Quantum mechanics is also essential for understanding the behavior of materials and chemicals, making it crucial in fields such as chemistry and material science. Furthermore, quantum mechanics plays a vital role in technological advancements in communication, cryptography, and energy production.
