Can't there be no approximation?

In summary: But again, that's just limited data, and without a theory that works with all data, we're left with approximations.
  • #1
cometzir
When we study something with a physics theory, we often ignore some "unimportant" factors to simplify the calculation, and then we get an approximation. But if we don't ignore any factors, will we get the absolutely accurate result? Is that possible?
I think physics theory doesn't suit the real world well. For example, the theory of classical mechanics says that by using the equation F=ma we can "theoretically" describe every mechanical phenomenon in the universe. But in fact, when we try to do it practically, we find it is very, very difficult, or actually impossible (even if there were no relativistic effects in the real world). For example, we have talked a lot about horizontal projectile motion, and the equation describing it is very simple. But when we throw a ball horizontally in the real world, things are much more complex. The air, the geomagnetic field, the Sun: everything in the universe has an effect on it. So, strictly speaking, can we really do anything with our physics theory practically?
So, can there be a theory that accurately describes the real world? Since everything is influenced by everything else in the universe, can't we create a theory that never studies a phenomenon separately, but instead studies the whole universe? And if so, could there be no approximation when we study the real world practically with the new theory?
 
  • #2
No simple answer...depends what you mean by "accurately"...But physics IS "practical".

Newtonian theory does a good job of approximating many phenomena allowing us to make useful calculations...even for space travel, for example...but at very high velocities we need relativity...and that isn't perfect either because for small scales where quantum mechanics reigns, we have discrepancies between the two theories.

And areas where both fail...black hole singularities and the big bang singularity...Right now we can't measure anything there...

Quantum theory also posits that every measurement disturbs the object of investigation...so from that perspective you'll always have an approximation.

You can check this out also via Heisenberg uncertainty principle...which appears to pose some unavoidable measurement problems...
http://en.wikipedia.org/wiki/Heisenberg_uncertainty_principle
 
  • #3
cometzir said:
So, can there be a theory that accurately describes the real world? Since everything is influenced by everything else in the universe, can't we create a theory that never studies a phenomenon separately, but instead studies the whole universe? And if so, could there be no approximation when we study the real world practically with the new theory?
Of course not. The limitation isn't that of the theory, in this case. The limitation is on us. We can't know the positions and momentum of every particle in the universe. It's a matter of data, not theory.
 
  • #4
It IS a matter of theory. The Heisenberg Uncertainty Principle tells us that the product of the uncertainties in measurements of momentum and position cannot be less than ħ/2 (half the reduced Planck constant). And then there is the small matter of Chaos, which applies to many, seemingly simple, situations.
Life, I'm afraid, is ultimately fuzzy.
 
  • #5
Since everything is influenced by everything else in the universe

I did not fully understand your question here, but in order to measure the universe perfectly you'd have to remove yourself (the measuring device) from it, and that can't be done...and even if you could, Heisenberg uncertainty is another, final obstacle...
 
  • #6
Actually, the OP seems to reiterate many of the sort of ideas that were around at the end of the 19th century - they thought that Physics was all sewn up bar a few precise measurements. Then along came QM and SR and it all kicked off again.
 
  • #7
It's all rather subjective.

I can kinda empathise with the OP, in that if you wanted to, say, mathematically/physically describe something as simple as throwing a ball in the air, it becomes incredibly complex when all factors are considered: the tension in arm muscles, the angle of spin on the ball, the wind, etc. And that's without even looking at each of those factors in more detail (quantum uncertainty in air molecule positions? the synaptic signals from brain to arm muscle?)

So really, we break it all down depending on just what it is we need. For throwing a ball, we can neglect the spin on the ball, approximate gravity and so on, because they're mostly negligible. Results always come with error margins, and a report conclusion will identify the sources of error; results with statistics will take these into account.

Physics is certainly practical; even with such concessions, engineers still design and build machines, buildings and such to reliable criteria. We don't need to understand quantum mechanics to make fire, and we don't need to know about relativistic effects when planning a car journey.

Quantum uncertainty aside, cosmologists do have the advantage of knowing a fraction of the universe from just after the Planck time after the Big Bang, which gives a groundwork to start out from if they had ample tools to follow on. A marriage of GR and QM would ease some of the pains in proceeding, but there are still issues with the nature, or 'moment', of decoherence between the microscopic and the macroscopic.

There are still hopefuls attempting to unify the forces and find a Grand Universal Theory of Everything, but even if it were ever known, it would either be so incredibly abstract and so far removed from our experience that it would be of no use, aside from perhaps predicting the entire life and interactions of a tiny, unimportant object; to deal with the number of eventualities at macroscopic scales would be incalculable.
Or it would be such a radically simplistic theory as to be equally unusable.

I doubt we'll ever get close, but we're still making technological progress.
 
  • #8
cometzir said:
I think physics theory doesn't suit the real world well.

This is a ludicrous opinion. Where do you think the ability, knowledge, and understanding to make the computer/internet you used to spew this came from?
 
  • #9
@cometzir
"I think physics theory doesn't suit the real world well."
If you don't think Physics is doing a good job then I suggest you get to it and do a better job in your own way. Don't you realize how arrogant your attitude is to such a well established and researched subject? What have you got that could even start to replace it?
 
  • #10
I think most of the posters are missing the point on this thread. Personally I think it's a lack of understanding about experimental limitations. Let's assume nothing about quantum mechanics and general relativity and all of that. Let's assume we live in a Newtonian universe. Everything can be calculated exactly hypothetically. When you bring up the idea of throwing an object and how air resistance needs to be accounted for and the gravitational field of the Sun and all that, you have to realize something.

Assume you're making a cannon to fire a baseball. If the gravitational field of the Sun introduces a force of, say, 0.0000000000000001 N while the gravitational field of the Earth introduces a force of, say, 1 N, how big of a difference does the Sun really make? Let's say that without taking the Sun into account, you expect your baseball to land exactly 10.384 meters away. Now, if you introduce the Sun, it might make the baseball land theoretically 10.38400000002 m away or something ridiculous like that. So does it really matter? It's unimaginable that being off by 0.00000000002 m would make any difference in the real world.
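To put rough numbers on this (a sketch only: the gravitational constants below are standard values, but the 0.145 kg baseball and the 10 m range are illustrative choices; note also that the cannon free-falls around the Sun together with the ball, so only the tiny tidal residual can shift the landing point):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
r_sun = 1.496e11     # Earth-Sun distance, m
m_ball = 0.145       # baseball mass, kg (illustrative)
flight = 10.0        # length scale of the throw, m (illustrative)

# Earth's pull on the ball:
F_earth = m_ball * 9.81                      # ~1.4 N

# Naive solar pull on the ball:
F_sun = m_ball * G * M_sun / r_sun**2        # ~9e-4 N

# But ball and cannon share the Sun's pull almost exactly, so only the
# differential (tidal) acceleration over ~10 m matters:
a_tidal = 2 * G * M_sun * flight / r_sun**3  # ~8e-13 m/s^2

print(F_earth, F_sun, a_tidal)
```

The naive solar force is actually not as small as the made-up figure above, but the residual that can actually move the landing point is some thirteen orders of magnitude below Earth's gravity, which is exactly the spirit of the argument.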

In my opinion, that's the greatest power of approximations in physics. You cut out all the interactions that are simply so insignificant that they will never be detectable in experiment. Remember, physics is meaningless if you don't apply it to the real world!
 
  • #11
But there are many chaotic processes in the Universe. They don't even need to be particularly complex. Take a compound pendulum, for instance. In such processes, the outcome can vary wildly from infinitesimally small changes in input conditions. However accurately you specify the start, ANY change in one input variable can totally alter the outcome. So there is no chance of 'no approximation' calculation.
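To make the compound (double) pendulum example concrete, here is a minimal numerical sketch. The equations of motion are the standard ones for a double pendulum with equal unit masses and lengths; the nanoradian perturbation and the 15 s run are illustrative choices.

```python
import math

def accel(t1, t2, w1, w2, g=9.81):
    """Angular accelerations for a double pendulum with m1 = m2 = 1 kg
    and L1 = L2 = 1 m (standard textbook equations of motion)."""
    d = t1 - t2
    den = 3.0 - math.cos(2 * d)
    a1 = (-3 * g * math.sin(t1) - g * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * (w2 ** 2 + w1 ** 2 * math.cos(d))) / den
    a2 = (2 * math.sin(d)
          * (2 * w1 ** 2 + 2 * g * math.cos(t1) + w2 ** 2 * math.cos(d))) / den
    return a1, a2

def step(s, dt):
    """One fourth-order Runge-Kutta step of the state (t1, t2, w1, w2)."""
    def f(s):
        t1, t2, w1, w2 = s
        a1, a2 = accel(t1, t2, w1, w2)
        return (w1, w2, a1, a2)
    k1 = f(s)
    k2 = f(tuple(x + 0.5 * dt * k for x, k in zip(s, k1)))
    k3 = f(tuple(x + 0.5 * dt * k for x, k in zip(s, k2)))
    k4 = f(tuple(x + dt * k for x, k in zip(s, k3)))
    return tuple(x + dt / 6 * (a + 2 * b + 2 * c + e)
                 for x, a, b, c, e in zip(s, k1, k2, k3, k4))

def run(theta0, t_end=15.0, dt=0.001):
    s = (theta0, theta0, 0.0, 0.0)       # released from rest
    for _ in range(int(t_end / dt)):
        s = step(s, dt)
    return s

a = run(math.pi / 2)                     # both arms horizontal
b = run(math.pi / 2 + 1e-9)              # ...perturbed by one nanoradian
sep = max(abs(x - y) for x, y in zip(a, b))
print(sep)   # many orders of magnitude larger than the 1e-9 perturbation
```

A change in the initial angle far below any achievable measurement precision grows into a macroscopic difference in the final state, which is the "no chance of a no-approximation calculation" point in practice.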
 
  • #12
Pengwuino said:
I think most of the posters are missing the point on this thread. <snip> Remember, physics is meaningless if you don't apply it to the real world!

I think I agree with you, but I think you are making too strong a claim. *Any* measurement has error associated with it- and not just precision of the measuring devices, but variability in the object itself. For example, when I measure the power output of a laser, the output varies much more than the precision I can measure it with.

That's not a "limitation" of measurement, that's an essential feature of our universe.

Second, your specific example is important, but perhaps not for the reason you think: launching a spacecraft to land on an asteroid (or even Mars) requires exquisite knowledge of more things than I can list; the solar wind is sufficient to alter a trajectory with disastrous results, for example. Although we *do* have an exact theory that can predict the trajectory to an arbitrary degree of accuracy, we do not have sufficient knowledge of the empirical parameters that go into the theory to guarantee that the spacecraft will go precisely where predicted.

One power of theory is that it allows us to rationally order the relevant effects (i.e. a perturbation expansion), which then lets us design a control system to account for the residual unknowns. Of course, theory also allows us to speak rationally about the system in the first place: 'gravitational field' is a theoretical construct.
 
  • #13
I wish someone else would acknowledge that Chaos is a relevant factor in this thread. It has been recognised for many years and affects many astronomical systems. It's just as much an essential feature of our universe as random processes and 'the butterfly effect' is out there in our weather on many occasions.
 
  • #14
sophiecentaur said:
I wish someone else would acknowledge that Chaos is a relevant factor in this thread. It has been recognised for many years and affects many astronomical systems. It's just as much an essential feature of our universe as random processes and 'the butterfly effect' is out there in our weather on many occasions.

Can you elaborate on this?
 
  • #15
The OP is confusing uncertainty of measurement with the accuracy/correctness of current theory. They are not synonymous.

When one makes practical measurements, they are inevitably affected by a number of factors that aren't in the purview of the theory we are testing. Collectively, these factors contribute to the uncertainty of the measurement. When physicists make measurements, they invariably quote the uncertainty to express the degree to which these "unknown" (or unaccounted-for) factors are important to the overall measurement.

That is, an uncertainty of 2m in the measurement of a football pitch is pretty significant. An uncertainty of 2m in the measurement of the distance between the Earth and the Moon is comparatively insignificant.

The difference between the measured value and expected value (according to theory) we call the discrepancy. If the discrepancy is less than the uncertainty, then the difference we observe can be attributed to outside factors that have not been accounted for.

i.e. If I observe a discrepancy of 0.01m in my measurement of a football pitch, and my equipment measures to an accuracy of 0.1m, then there are no dramas; I can attribute the discrepancy to the inaccuracy of my equipment (or alternatively, the failure of my equipment to account for extraneous factors in the measurement).

If, however, the discrepancy EXCEEDS the uncertainty, then (Houston) we have a problem. The extraneous factors cannot account for the difference between measurement and theory. Normally, this indicates a problem with the experiment, but occasionally it indicates a flaw in the theory. If the theory is at fault, it is (in time) modified to account for the fresh experimental evidence.
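The rule above reduces to a one-line comparison; a sketch (the function name and the numbers are illustrative):

```python
def consistent(measured, expected, uncertainty):
    """A measurement agrees with theory when the discrepancy
    (|measured - expected|) does not exceed the uncertainty."""
    return abs(measured - expected) <= uncertainty

# Football pitch measured to 0.1 m: a 0.01 m discrepancy is no drama...
print(consistent(100.01, 100.00, 0.1))   # True
# ...but a 2 m discrepancy exceeds the uncertainty: the experiment
# (or, occasionally, the theory) has a problem.
print(consistent(102.00, 100.00, 0.1))   # False
```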

Claude.
 
  • #16
sophiecentaur said:
@cometzir
"I think physics theory doesn't suit the real world well."
If you don't think Physics is doing a good job then I suggest you get to it and do a better job in your own way. Don't you realize how arrogant your attitude is to such a well established and researched subject? What have you got that could even start to replace it?

I hope you were not deeply offended by this statement. We must understand that many people on this forum are not science students/researchers/teachers and therefore do not share the same passion.

Anyways, as many have already said, the reason why we incorporate mathematics and calculations into physics is that we want to make predictions once we know the behaviour of the physical quantities involved, which is defined by the theory. These predictions are made for some purpose (usually planning the next course of action or deriving other conclusions about the subject of observation), and therefore I would only consider factors which are of immediate practical significance to my purpose.

Gravitational time dilation is significant enough that GPS satellite clocks have to be corrected for it, or they would report increasingly wrong locations. However, you would not consider taking it into account when you want to calculate the trajectory of a thrown ball. Though technically time dilation still holds, the effect is so small that it would make no practical difference to the point of landing.
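The size of the GPS effect is easy to estimate from standard constants (a sketch: the orbital radius below is the nominal GPS value, and this ignores orbital eccentricity and the ground station's rotational speed):

```python
GM = 3.986004e14       # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8       # speed of light, m/s
r_ground = 6.371e6     # mean Earth radius, m
r_gps = 2.6561e7       # nominal GPS orbital radius, m
day = 86400.0          # seconds per day

# Gravitational blueshift: the orbiting clock sits higher in the potential.
grav = GM * (1 / r_ground - 1 / r_gps) / c ** 2
# Special-relativistic slowdown from the orbital speed, v^2 = GM/r.
vel = (GM / r_gps) / (2 * c ** 2)

gain_us_per_day = (grav - vel) * day * 1e6
print(gain_us_per_day)   # roughly +38 microseconds per day
```

Uncorrected, tens of microseconds per day of clock error translate into kilometres of position error per day (light travels about 300 m per microsecond), which is why the satellite clock rates are deliberately offset.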

In the former case, my purpose would not be fulfilled if I did not consider the effect; in the latter, it would be fulfilled regardless.
 
  • #17
sophiecentaur said:
I wish someone else would acknowledge that Chaos is a relevant factor in this thread. It has been recognised for many years and affects many astronomical systems. It's just as much an essential feature of our universe as random processes and 'the butterfly effect' is out there in our weather on many occasions.

If I understand you, chaos (at least the classical version) was quantified only because of the existence of exact solutions- for example, the Lyapunov exponent implicitly demands exact knowledge of two neighboring states in solution space. This then allows for a quantitative metric (the 'separation distance') which evolves over time.

Experiment is then said to 'coarse grain' the state space- in any case, chaotic solutions to dynamical systems require the existence of exact solutions.
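As a concrete, minimal instance of that quantitative metric, the Lyapunov exponent of the one-dimensional logistic map can be estimated directly as the trajectory average of the log of the map's derivative (the parameter values below are the usual illustrative ones):

```python
import math

def lyapunov(r, x0=0.2, n=20000, burn=200):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the trajectory average of log|f'(x)| = log|r*(1-2x)|."""
    x = x0
    for _ in range(burn):           # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

print(lyapunov(3.9))   # positive: chaotic, nearby states separate exponentially
print(lyapunov(3.2))   # negative: periodic, nearby states converge
```

A positive exponent means the 'separation distance' between two neighboring exact states grows exponentially in time, which is what makes the coarse-grained view of experiment lose predictive power so quickly.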
 
  • #18
Claude Bile said:
<snip>
i.e. If I observe a discrepancy of 0.01m in my measurement of a football pitch, and my equipment measures to an accuracy of 0.1m, then there are no dramas; I can attribute the discrepancy to the inaccuracy of my equipment (or alternatively, the failure of my equipment to account for extraneous factors in the measurement).

<snip>

I think I follow you, but to pick a nit, you either left out the precision of the measurement, or confused 'precision' with 'accuracy'.

If the precision of the measurement is 0.001m, then the discrepancy is valid and can refer to uncertainty in the measurement device; if OTOH the precision of the measurement is 0.1m and the accuracy 0.001m, you cannot say that there is a measured discrepancy of 0.01m without repeated measurements, which introduce additional assumptions about the system under test.
 
  • #19
Andy Resnick said:
If I understand you, chaos (at least the classical version) was quantified only because of the existence of exact solutions- for example, the Lyapunov exponent implicitly demands exact knowledge of two neighboring states in solution space. This then allows for a quantitative metric (the 'separation distance') which evolves over time.
I don't know the history well enough to say when Brownian motion was first understood as a chaotic system involving probability, relative to other developments in the field, but it was quite early. Some guy called Einstein worked on Brownian motion, I think - I don't know if he did anything else interesting later on :rolleyes:

So if you accept that quantum mechanics and probability are fundamentally linked, the philosophical point about the existence of exact solutions doesn't have much significance.
 
  • #20
AlephZero said:
I don't know the history well enough to say when Brownian motion was first understood as a chaotic system involving probability, relative to other developments in the field, but it was quite early. Some guy called Einstein worked on Brownian motion, I think - I don't know if he did anything else interesting later on :rolleyes:

So if you accept that quantum mechanics and probability are fundamentally linked, the philosophical point about the existence of exact solutions doesn't have much significance.

Brownian motion is not chaotic- it's stochastic. Big difference, both in theory and in practice. Chaotic systems are still deterministic; stochastic systems are not- see, for example, the Langevin equation.
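The distinction can be made concrete with an overdamped Langevin equation: two runs from the *same* initial condition end up different because the thermal noise differs, whereas a chaotic but deterministic system retraces the identical trajectory from an identical state. (A sketch; the parameter values are illustrative.)

```python
import math
import random

def langevin(seed, k=1.0, D=1.0, dt=1e-3, steps=5000, x0=1.0):
    """Euler-Maruyama integration of the overdamped Langevin equation
    dx = -k*x*dt + sqrt(2*D*dt) * xi, with xi ~ N(0, 1)."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x += -k * x * dt + math.sqrt(2 * D * dt) * rng.gauss(0.0, 1.0)
    return x

# Identical initial condition, different realizations of the noise:
print(langevin(seed=1), langevin(seed=2))   # two different endpoints
```

Rerunning with the same seed reproduces the same path exactly: the irreducible randomness lives in the noise term, not in sensitivity to the initial condition.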

I'm not an expert in quantum chaos; last I read there were some major unresolved issues.
 
  • #21
Andy Resnick said:
I think I agree with you, but I think you are making too strong a claim. *Any* measurement has error associated with it- and not just precision of the measuring devices, but variability in the object itself. For example, when I measure the power output of a laser, the output varies much more than the precision I can measure it with.

I intentionally did that as I think it gets to the core of what the OP might be having issues with. If we were able to know the parameters of everything perfectly accurately, our laws would tell us exactly what will happen. However, for example using the Sun as a perturbation, some parts of your problem make such minute differences in what your theory tells you that if you then transition to the real world, you can safely ignore those differences and pretend the Sun isn't even there. To me, that's the beauty of approximations, you can safely ignore certain effects.

Going a step further, as you and others have done, is to show that if you take away exact knowledge of all your experimental parameters, you still have a powerful tool. If your uncertainties fall under a certain amount needed for your experiment, then your tool, physics, has achieved what it was meant to achieve.
 
  • #22
Naty1 said:
Quantum theory also posits that every measurement disturbs the object of investigation...so from that perspective you'll always have an approximation.
http://en.wikipedia.org/wiki/Heisenberg_uncertainty_principle
Yes, when we measure something, the measurement itself has an effect on the thing, and then, as Heisenberg uncertainty says, we will never get an absolutely accurate result. But I think the measurement error arises because we separate the thing we measure from its external environment, and the measuring device is part of that external environment. Of course I can't do it, but if we don't single out a measured object and its environment, and instead measure the whole universe, including the device itself, then I think there won't be any approximation. I mean, when I measure something, I say "I measure us" rather than "I measure it". :redface: Maybe it is ridiculous, but I think it may be a way to avoid approximation.

Pengwuino said:
Assume you're making a cannon to fire a baseball. If the gravitational field of the Sun introduces a force of say, 0.0000000000000001N where the gravitational field of the Earth introduces a force of say, 1N, how big of a difference does the Sun really make? Let's say without taking into account the Sun, you expect your baseball to land exactly 10.384 meters away. Now, if you introduce the Sun, it might make the baseball land theoretically 10.38400000002m away or something ridiculous like that. So does it really matter? It's unimaginable that missing 0.00000000002m too far would make any difference in the real world.
In some situations, that may be true. But considering the butterfly effect, a small change can have a great effect. When we study something, we neglect the negligible factors; maybe it doesn't matter, but if we use the approximate solution widely, the accumulated effect could be great, and could even lead us to a completely wrong conclusion. Perhaps we use different approximations when we study different things to reduce the effect, but what matters is that we can't exactly know what is negligible, as Edward Lorenz showed.
 
  • #23
We say we can't predict everything exactly, but what has happened is not uncertain. Since what will happen is certain, why can't we know it before it happens?
 
  • #24
cometzir said:
We say we can't predict everything exactly, but what has happened is not uncertain. Since what will happen is certain, why can't we know it before it happens?

We cannot know everything about everything in the past either. The uncertainty principle still holds true. We can see the results of something down to the limit of this, but no further.
 
  • #25
cometzir said:
We say we can't predict everything exactly, but what has happened is not uncertain. Since what will happen is certain, why can't we know it before it happens?

What has happened IS uncertain. i.e. we can't know it because we need to observe in order to know what it is or was. The very act of observing distorts the information and introduces uncertainty.
You need to throw some Victorian ideas out of the window in order to make progress here.
Millions of people have gone in circles in their brains, trying to reconcile classical ideas with modern Science. It can't be done.
 
  • #26
Andy Resnick said:
Brownian motion is not chaotic- it's stochastic. Big difference, both in theory and in practice. Chaotic systems are still deterministic; stochastic systems are not- see, for example, the Langevin equation.
I don't agree with that philosophically (though I disagree less "in practice".)

For example, take the classical "particles in a box" ideal gas model. If you have philosophical problems with "collisions between particles of zero size" in Newtonian mechanics, replace "collision" with a conservative central repulsive force acting at short range, so there are no literal collisions.

This model is deterministic in Newtonian mechanics, and it is intuitively obvious that the trajectory of each individual particle is chaotic.

However the chaos is not a very useful way of using the model. The useful properties are the emergent features, like the steady-state distribution of particle energies, "temperature", "pressure", "the diffusion equation", "Brownian motion", etc which are described by statistical quantities.

I'm not an expert in quantum chaos; last I read there were some major unresolved issues.

I'm not an expert in quantum anything, so let's leave that unresolved!
 
  • #27
AlephZero said:
I don't agree with that philosophically (though I disagree less "in practice".)

<snip>

This model is deterministic in Newtonian mechanics, and it is intuitively obvious that the trajectory of each individual particle is chaotic.

Perhaps we are using different definitions for "chaotic dynamics". I use the term in accordance with, for example, Guckenheimer and Holmes "Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields". In that sense, a simple ideal gas model is not chaotic- the essential ingredient of any chaotic system is *nonlinearity*. Van der Pol's equation, Duffing's equation, and the Lorenz system of equations all have nonlinearity as an essential feature. Another important feature is bifurcations in the Poincare maps/phase portraits. Linear systems, AFAIK, do not result in chaotic dynamics.

Nonlinear perturbations can introduce stochastic behavior (stochastic layers, also called homoclinic tangles).
 
  • #28
I am not sure which side people are taking in this. If you introduce statistics then you immediately chuck out the notion of 'exact', which answers the OP.
Why should we lose any sleep over that?
 
  • #29
Andy Resnick said:
Perhaps we are using different definitions for "chaotic dynamics". I use the term in accordance with, for example, Guckenheimer and Holmes "Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields".
That could well be true. For example Lorenz's "pop sci" book The Essence of Chaos seems to have a wider definition.
Quote from Amazon's blurb (with my emphasis):
Lorenz presents everyday examples of chaotic behaviour, such as the toss of a coin, the pinball's path, the fall of a leaf, and explains in elementary mathematical terms how their essentially chaotic nature can be understood. His principal example involved the construction of a model of a board sliding down a ski slope. Through this model Lorenz illustrates chaotic phenomena and the related concepts of bifurcation and strange attractors. He also provides the context in which chaos can be related to the similarly emergent fields of nonlinearity, complexity and fractals.

Van der Pol's equation, Duffing's equation, and the Lorenz system of equations all have nonlinearity as an essential feature. Another important feature is bifurcations in the Poincare maps/phase portraits.
I certainly buy those as examples of chaotic systems. But being an engineer (notwithstanding a math degree), trying to draw a boundary between linear and nonlinear systems in the real world (as opposed to linear and nonlinear models of systems) is a pretty thankless task!
 
  • #30
AlephZero said:
trying to draw a boundary between linear and nonlinear systems in the real world (as opposed to linear and nonlinear models of systems) is a pretty thankless task!

True, that.
 

1. Can we ever achieve absolute precision in scientific measurements?

No, it is impossible to achieve absolute precision in scientific measurements. All measurements involve some level of uncertainty and error, whether it is due to limitations in technology or human error.

2. Why do scientists use approximations in their calculations?

Scientists use approximations in their calculations because it is often impractical or impossible to obtain exact values for all variables. Approximations allow for easier and more efficient calculations while still providing meaningful results.

3. Are there any situations where approximations are not acceptable in scientific research?

Yes, there are certain situations where approximations are not acceptable in scientific research. For example, in fields such as medicine and engineering, precise measurements and calculations are crucial for ensuring safety and accuracy.

4. How do scientists determine the level of accuracy in their measurements and calculations?

Scientists determine the level of accuracy in their measurements and calculations by using statistical methods and error analysis. They compare their results to known values or repeat experiments multiple times to minimize error.

5. Can we improve upon existing approximations in science?

Yes, scientists are constantly working to improve upon existing approximations in science. With advancements in technology and new discoveries, more precise calculations and measurements can be made, leading to better approximations and more accurate results.
