# Equations: Model or Reality?

## Are some physics equations (see the post below) a model or the reality itself?

• ### No, no matter how well you integrate it, it would never correspond 100% with a real event

• Total voters
33
Gold Member
During my undergraduate studies, I was taught a lot about some important mathematical models for physics and engineering. The most important and heaviest equations I have been shown are:

1) Navier-Stokes equations for fluid flow (reactive and non-reactive).

2) Structural mechanics (Navier) equations.

3) Maxwell's equations.

4) Analytical mechanics (Lagrange and Hamilton) equations.

All of them are partial differential equations and integral equations. I have felt the heaviness of these models and how difficult it is to integrate them when one tries to reflect, even poorly, the real event.

I think these equations are more or less "magic". They are not mere mathematical models. In particular, I am going to try to develop here an imaginary numerical experiment with the Navier-Stokes equations (with which I am most familiar).

Imagine I want to simulate numerically the free turbulent jet in water (see the first picture attached). You know that there is a special set of equations for turbulent flow (the Reynolds-Averaged Navier-Stokes equations, RANS). I don't want to use this set, because it introduces an approximation based on Reynolds-averaged values. I want to use the complete Navier-Stokes equations. This technique is the so-called Direct Numerical Simulation (DNS). Well, in order to compute the N-S equations I would need a GREAT number of computational points N, with N of order $$Re_L^{9/4}$$, where $$Re\sim 10^7$$ is the Reynolds number for a typical turbulent flow. The computation time would be enormous too.
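To get a feel for those numbers, here is a quick back-of-the-envelope sketch (my own, in Python; the 24-bytes-per-point memory figure is just an illustrative assumption, not from the post):

```python
# Back-of-the-envelope DNS cost, using the N ~ Re^(9/4) scaling above.
Re = 1.0e7                       # Reynolds number of a typical turbulent flow

n_points = Re ** (9 / 4)         # required number of computational points
print(f"grid points ~ {n_points:.2e}")   # ~5.6e15

# Assuming we store just one velocity vector (3 doubles = 24 bytes) per
# point, the memory footprint alone is staggering:
memory_pb = n_points * 24 / 1e15
print(f"memory ~ {memory_pb:.0f} PB just for the velocity field")
```

Even before a single time step is taken, the storage alone is far beyond any machine of that era, which is the point of the thread.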

But there is another aspect we have to pay attention to. If I could compute this problem with the DNS technique and a "traditional" set of boundary conditions, I would not obtain a real turbulent jet, in the sense that a complete correspondence with reality would not occur. In order to enhance the correct onset of turbulence, one should simulate the boundary conditions which make this flow turbulent: microscopic wall roughness and its interaction with the boundary layer, unsteady effects, and flow self-instability transported from the pipeline upstream.

I mean this: imagine I have the N-S equations for simulating this flow. I also have a set of $$N_b$$ boundary conditions, which reflect as accurately as possible the real boundary conditions of the flow (called $$N_{b(real)}$$), and I also have a computation time $$t$$ available. The MAIN question of this thread is:

"If $$N_b\rightarrow N_{b(real)}$$ while allowing $$t$$ to be extraordinarily large (which means we must have great computational power), then would the numerical solution be the same as the real flow solution?"

The answer to this question would have non-trivial consequences. I personally think the answer would be "yes, it would". But it would mean that the N-S equations are not a model of reality; rather, they would tend asymptotically to reality.

What do you think?


FredGarvin
Clausius said:
The answer to this question would have non-trivial consequences. I personally think the answer would be "yes, it would". But it would mean that the N-S equations are not a model of reality; rather, they would tend asymptotically to reality.
I agree completely. The reality is that the places that can afford, not only monetarily but also time-wise, to do calculations of this magnitude are very few. That to me means that even if we did possess the ability to do so, the "real world" will always have to rely on experimentation for detailed, application-specific data.

Gold Member
FredGarvin said:
I agree completely. The reality is that the places that can afford, not only monetarily but also time-wise, to do calculations of this magnitude are very few. That to me means that even if we did possess the ability to do so, the "real world" will always have to rely on experimentation for detailed, application-specific data.

Well, if you agree with what I wrote, then you should have voted yes.

If $$u_r$$ represents the REAL solution (in all detail) and $$u_n$$ represents the NUMERICAL solution, t is the time available for computation and N is the number of computational points, then:

$$u_n\rightarrow u_r$$ as $$t,N\rightarrow \infty$$

Do you agree with this last sentence?

If so, we can conclude the N-S equations are NOT a model of reality; they can potentially represent reality completely within the stated limits, so at the bottom of the N-S equations lies reality itself. The problem is that we lack the power to check it. This conclusion is different from thinking of the N-S equations as merely a model of the real world.
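As a toy illustration of this $$u_n\rightarrow u_r$$ claim (my own sketch, not part of the thread, and using a 1D heat equation rather than N-S because its exact solution is known), refining the grid drives the numerical solution toward the exact one:

```python
import numpy as np

def heat_error(n):
    """Max error of an explicit finite-difference solution of
    u_t = u_xx on [0, pi] (u = 0 at both ends, u(x,0) = sin x)
    against the exact solution exp(-t) sin x, evaluated at t = 0.1."""
    dx = np.pi / n
    dt = 0.4 * dx ** 2              # stable explicit step (r < 1/2)
    steps = max(1, round(0.1 / dt))
    dt = 0.1 / steps                # land exactly on t = 0.1
    r = dt / dx ** 2
    x = np.linspace(0.0, np.pi, n + 1)
    u = np.sin(x)
    for _ in range(steps):
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return float(np.max(np.abs(u - np.exp(-0.1) * np.sin(x))))

# The error shrinks steadily as the number of points N grows:
for n in (10, 20, 40, 80):
    print(n, heat_error(n))
```

Of course this only shows convergence to the exact solution *of the model*; whether the model's exact solution is the real flow is exactly the question being debated.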

Some guy could argue that there are some microscopic effects not considered in the N-S equations. Well, maybe those effects have 0.0001% importance in this matter and for this flow, so we can neglect them. On the other hand, the effects which produce the onset of turbulence are not negligible at all, and are the main field of research of DNS engineers. Usual turbulent flow calculation has been based on the RANS equations, without good results. If we could couple the N-S equations with those essential effects, which by the way are present in a laboratory hydrodynamic tunnel, would the calculation then be totally complete? Would it reflect reality exactly (with the exception of microscopic effects such as Brownian motion)?

Astronuc
Staff Emeritus
No, no matter how well you integrate it, it would never correspond 100% with a real event!

Close, but not quite. Our models are approximations of reality, and it is amazing how close we can get.

Models have arbitrary accuracy and precision. Some older models were developed in the days of limited computing resources, both in terms of processing capability and memory.

Modern-day models and methods may be more precise, but still do not capture micro-fine or nano-fine or even pico-fine detail. BUT, it is not necessary to achieve absolute fidelity to reality. If one can predict a temperature to within 10°C, then that is pretty good.

The best models may get to within 1-3% of critical values, and then one employs some statistical or uncertainty analysis to provide 'margin' to those critical values.

Predictive modeling is only as good as the data available. These data are the basic material and mechanical properties - thermophysical, thermo-mechanical, thermal-hydraulic, thermo-chemical, and if one throws in radiation - thermo-radio-electrochemical models.

These are critical issues for the design of nuclear reactors and power systems in space applications. The designers are pushing the peak temperatures as high as possible, which means material challenges like never before, and operating times of up to 70,000 hrs (up from 61,000 hrs 20 years ago).

And then throw in the variability of the manufactured materials, components and integrated system.

PerennialII
Gold Member
I voted yes. I find this a really difficult question to answer... if there was an option to hit something else I'd probably use it as a means of escape.

If I think about the scientific method and approach this from that perspective, I am pretty quickly hesitant to hit 'yes', due to the basically iterative and improving nature of everything; but on the other hand, I don't think you necessarily need to throw all the great PDEs of physics into the same basket. Rather, considering the "weight" of, for example, these marvellous models of nature, I don't see a problem in placing them philosophically as part of a global "ToE" and seeing them as a "specific" case or a facet of the uncovered truth... that they are in a sense part of some higher "understanding".

The limitations arising for example from boundary conditions etc. don't really turn it for me, somehow I see the "truth" or the "faith" element of these equations residing in the principles they're built upon, and the solid foundation leading to their derivation. A bit like considering the principles of thermodynamics, although in a somewhat stricter sense (like when considering the validity of an equilibrium equation... there is some inherent "reality" in it, at least if you try to argue your way around it). ... and of course our $u_{n}$s are so close to $u_{r}$s with our current numerical methods anyway.

FredGarvin
I may have to clarify my response:

I voted no because of the wording you used in your poll question itself:
Clausius said:
Yes, they are the reality itself if we could integrate them with zero error and infinite power.
In those terms I do not agree that models will get to that level. I do however agree with your statement:
Clausius said:
But it would mean that the N-S equations are not a model of reality; rather, they would tend asymptotically to reality.
i.e. they will get close, but never be the exact same as reality.

I hope I'm not straddling the fence on this one.

I voted NO. When we are trying to describe reality in terms of formulas, we basically have two major issues at hand. First of all, if we want to simulate reality with 100% accuracy, we need to take into account all possible physical events that take place in the reality we want to describe. This is quasi-impossible, though we do a great job at approaching absolute accuracy. But just look at the way we handle the 'randomness' of nature, as in the case of an ordinary gas. We introduce concepts like the average velocity of the constituent molecules and the mean free path... Yes, these models describe reality quite well, but there is always this inaccuracy caused by the fact that we cannot completely handle randomness in a deterministic way... otherwise, would it still be randomness? This is also the argument that convinces me that we will never be able to create some kind of computer version of the human brain. How would we master human spontaneity?

The second major issue is of a computational nature. We can have 'perfect' equations that are a pure manifestation of nature. The best example is the Schrödinger equation, being the QM version of conservation of energy for QM systems. In order to extract real information from this equation, we need to solve it. For the H atom the solution is exact, but that is basically all. In all other cases we need to resort to approximations like Hartree-Fock or DFT (density functional theory)... The SE is something fantastic; it is just a pity we cannot solve it exactly. :)

regards
marlon

Gold Member
Thanks Fred, PerennialII and Astronuc for being so brave as to discuss this with me.

I have read your replies carefully. To tell the truth, I am proud of my own opinion. Leaving aside electromagnetic models, I have a somewhat blind faith in fluid flow models. During my short academic experience, I have had time to be amazed at how extraordinarily accurately one can represent a real flow with current methods.

PerennialII said:
The limitations arising for example from boundary conditions etc. don't really turn it for me, somehow I see the "truth" or the "faith" element of these equations residing in the principles they're built upon, and the solid foundation leading to their derivation.
In particular, in turbulent flow the most important issue when simulating numerically is the boundary conditions. The mechanisms of the onset of turbulence are not well known nowadays. Although we see it in a hydrodynamic tunnel, there are lots of unconsidered effects inside an experimental installation which may contribute to it. It is not a problem of the model but a problem of how we integrate it and with what boundary conditions. Such boundary conditions remain unknown to us. Engineers usually try to "provoke" the turbulence externally when simulating it numerically. For instance, if you try to compute the flow around a cylinder at very high Reynolds number with the usual boundary conditions and the N-S equations, you will see a laminar solution. Such solutions are numerically stable no matter how high the Reynolds number is, but they don't reflect the real phenomenon. In order to establish the turbulence, one may have to do some tricks via new boundary conditions (e.g. a small oscillation of the cylinder).
Maybe I am a bit biased because of my sympathy for the N-S theory.

Anyway, the consequences of stating that $$u_n\rightarrow u_r$$ when $$N,t\rightarrow\infty$$, with the exception of micromolecular effects, are very important. The theory would be valid from the large scales down to the small scales of turbulence (the Kolmogorov scales). Then

i) reality, at moderate scales (I am not including quantum effects), is completely deterministic. Every effect, every movement we can see around us, every solid deformation no matter how complex the machine element, every fluid flow, all of them are determined by mathematics. If you have voted "no", then you would escape by saying that there remain effects which cannot be represented. But what are these effects? I can only imagine electrodynamic effects, which are of second order of importance at moderate scales. All of us are engineers; we usually move at that scale. I think they could be neglected in a first approximation without causing an important effect on the model.

ii) what more could we compute and simulate? It is clear that complete analytical solutions are impossible to obtain for industrial problems. I am imagining whether one could simulate the complete atmospheric dynamics of the Earth if we had extraordinary computational power. You could think: it is impossible. But then you are implicitly saying that there are effects not considered in the N-S model. On the other hand, I think: yes, it would be possible; the N-S equations contain the most important effects for simulating it, but it would be very hard to find the appropriate boundary conditions for the model.
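The "large scales down to Kolmogorov scales" range invoked above is exactly where the $$Re^{9/4}$$ point count in the opening post comes from. A sketch of the standard scaling argument (mine, textbook relations, not from the thread):

```python
# Standard Kolmogorov scaling: eta / L ~ Re^(-3/4), so a 3D grid that
# resolves everything from the integral scale L down to eta needs about
# (L / eta)^3 = Re^(9/4) points.
def scale_separation(Re):
    """Ratio of integral scale to Kolmogorov microscale, L / eta."""
    return Re ** 0.75

def dns_points(Re):
    """Estimated number of grid points for a full 3D DNS."""
    return scale_separation(Re) ** 3

Re = 1.0e7
print(f"L/eta       ~ {scale_separation(Re):.1e}")   # ~1.8e5
print(f"grid points ~ {dns_points(Re):.1e}")         # ~5.6e15
```

So at this Reynolds number the smallest eddies are about five orders of magnitude below the jet scale, which is why "validity down to the Kolmogorov scales" is such a strong claim.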

Gold Member
marlon said:
I voted NO. When we are trying to describe reality in terms of formulas, we basically have two major issues at hand. First of all, if we want to simulate reality with 100% accuracy, we need to take into account all possible physical events that take place in the reality we want to describe. This is quasi-impossible, though we do a great job at approaching absolute accuracy. But just look at the way we handle the 'randomness' of nature, as in the case of an ordinary gas. We introduce concepts like the average velocity of the constituent molecules and the mean free path... Yes, these models describe reality quite well, but there is always this inaccuracy caused by the fact that we cannot completely handle randomness in a deterministic way... otherwise, would it still be randomness? This is also the argument that convinces me that we will never be able to create some kind of computer version of the human brain. How would we master human spontaneity?

The second major issue is of a computational nature. We can have 'perfect' equations that are a pure manifestation of nature. The best example is the Schrödinger equation, being the QM version of conservation of energy for QM systems. In order to extract real information from this equation, we need to solve it. For the H atom the solution is exact, but that is basically all. In all other cases we need to resort to approximations like Hartree-Fock or DFT (density functional theory)... The SE is something fantastic; it is just a pity we cannot solve it exactly. :)

regards
marlon

Thanx for your reply. Reading it I realised you have employed the argument of randomness. How random is our reality? We do know randomness begins to play an important role at atomic scales. Ironically, there is the Schrödinger model for that scale, so it seems this role is bounded too in some way. A mathematical expression, even one representing a random event, is a constraint of Nature. The quantum effects must obey this model too, so it seems there is something un-random inside the randomness!

I have to admit I knew the main escape route for all of you would be quantum and random effects. My personal opinion is that randomness in moderately large-scale events is a second-order effect. For describing a fluid flow or a bending structure I don't care about randomness. It seems to me all of it is perfectly and mathematically determined.

It is an interesting discussion. Maybe, since you have brought the example of the human brain to the foreground, our behaviour seems so random because of the importance of quantum effects on brain connections. Randomness seems to be injected into Nature via the small scales, while moderately large scales by themselves seem to be deterministic.

I voted no to the question of whether your model can represent reality. The reason is that you could never know all the variables within the system. And if you did, you couldn't keep them from changing. The best that you can do is create a statistical model.

Models are tools to help us get an idea of what something may do. I have never seen any classical model (even an electromagnetic model) that can precisely predict how something is going to behave in reality. If we lived in an ideal world I might say maybe.

Gold Member
The final question (I think I am going to open a new poll) would be:

"could the Reality have a complete mathematical model hidden somewhere?"

I mean:

"could there be a complete, closed theory of how a real event evolves, from large scales to small scales?"

An affirmative answer would mean that reality is mathematically modelable, from randomness to determinism.

Is the reality contained into a mathematical world?

(Sorry for exposing my puzzles; I am getting emotional. By the way, I am listening to Speed of Sound (Coldplay) while writing this, so you know what kind of guy I am...)

Integral
Staff Emeritus
Gold Member
I voted no, pretty much for the reasons stated by Marlon. Our best models are approximations which include the major factors. We simply cannot account for everything. For most purposes this is fine, but it is not reality.

Gold Member
Thanx Integral and Spiro Z for your replies.

The workhorse of both of you seems to be the "number of variables considered and the number of effects considered". Both of you argue that we could never take all effects into account. Why would we have to? There are effects which play a small role, or none at all, in an event.

Assume we have enough power to do the following computation.
Let's stick with the example of simulating the complete atmospheric dynamics of the Earth. I mean to compute a moderately large-scale model. What kinds of effects are the most important?

- Convective effects
- Viscous effects
- Compressibility effects
- Heat transfer effects
- Electrochemical effects

Well, I think we could simulate an entire climatic day with these effects, without considering any further extravagance. We do know that, for example, in the stratosphere there are photochemical reactions in which electrodynamics could play an important role. But in the movement of a mass of air of hundreds of kilograms such a theory will have a negligible effect. Maybe if we looked with a magnifying glass at the computed concentration of argon atoms in a cubic centimetre of atmosphere, it would show a great error, but the global picture would be enough to forecast the climate indefinitely (if we include the appropriate boundary conditions).

The main point of this reply is that for certain parts of our reality the importance of some effects can be neglected, and so the argument "you couldn't include all effects" does not hold.

EDIT: Anyway, the number of "effects" currently known to scientists, and which could be computed simultaneously, is not a small one. I think we could be near 98% of reality, but nobody has planned to compute such a giant model. I am talking about the potential to do so.

Clausius2 said:
Is the reality contained into a mathematical world?

Anything can be contained in the mathematical world. Finding reality in the math is the problem.

PerennialII
Gold Member
Clausius2 said:
In particular, in turbulent flow the most important issue when simulating numerically is the boundary conditions. The mechanisms of the onset of turbulence are not well known nowadays. Although we see it in a hydrodynamic tunnel, there are lots of unconsidered effects inside an experimental installation which may contribute to it. It is not a problem of the model but a problem of how we integrate it and with what boundary conditions. Such boundary conditions remain unknown to us. Engineers usually try to "provoke" the turbulence externally when simulating it numerically. For instance, if you try to compute the flow around a cylinder at very high Reynolds number with the usual boundary conditions and the N-S equations, you will see a laminar solution. Such solutions are numerically stable no matter how high the Reynolds number is, but they don't reflect the real phenomenon. In order to establish the turbulence, one may have to do some tricks via new boundary conditions (e.g. a small oscillation of the cylinder).
Maybe I am a bit biased because of my sympathy for the N-S theory.

Yeah, when pondering this question I thought BCs wouldn't turn it, since typically BCs are solutions of other PDEs... and as such the link to "reality" prevails. Similarly with respect to numerical accuracy: the solution of many PDEs, e.g. hyperbolic ones, is in general kind of an "art form", but that is a "practical" problem of the numerical process.

Clausius2 said:
Anyway, the consequences of stating that $$u_n\rightarrow u_r$$ when $$N,t\rightarrow\infty$$, with the exception of micromolecular effects, are very important. The theory would be valid from the large scales down to the small scales of turbulence (the Kolmogorov scales). Then

i) reality, at moderate scales (I am not including quantum effects), is completely deterministic. Every effect, every movement we can see around us, every solid deformation no matter how complex the machine element, every fluid flow, all of them are determined by mathematics. If you have voted "no", then you would escape by saying that there remain effects which cannot be represented. But what are these effects? I can only imagine electrodynamic effects, which are of second order of importance at moderate scales. All of us are engineers; we usually move at that scale. I think they could be neglected in a first approximation without causing an important effect on the model.

ii) what more could we compute and simulate? It is clear that complete analytical solutions are impossible to obtain for industrial problems. I am imagining whether one could simulate the complete atmospheric dynamics of the Earth if we had extraordinary computational power. You could think: it is impossible. But then you are implicitly saying that there are effects not considered in the N-S model. On the other hand, I think: yes, it would be possible; the N-S equations contain the most important effects for simulating it, but it would be very hard to find the appropriate boundary conditions for the model.

... the ripples do run deep... even though in principle we would need to solve all the PDEs, coupled, for the whole universe.

PerennialII
Gold Member
Clausius2 said:
I have to admit I knew the main escape route for all of you would be quantum and random effects. My personal opinion is that randomness in moderately large-scale events is a second-order effect. For describing a fluid flow or a bending structure I don't care about randomness. It seems to me all of it is perfectly and mathematically determined.

If we accept probability as a measure of (and link to) reality, we're still doing well (and doing otherwise seems pretty much as difficult, if not more so).

Clausius2 said:
Thanx for your reply. Reading it I realised you have employed the argument of randomness. How random is our reality? We do know randomness begins to play an important role at atomic scales. Ironically, there is the Schrödinger model for that scale, so it seems this role is bounded too in some way. A mathematical expression, even one representing a random event, is a constraint of Nature. The quantum effects must obey this model too, so it seems there is something un-random inside the randomness!

I don't really know how randomness is incorporated in QM. I mean, how do we mathematically model a totally random event? I don't think assigning a probability of 1/2 does the whole job...

Besides, what would be the point of doing so?

I have to admit I knew the main escape route for all of you would be quantum and random effects. My personal opinion is that randomness in moderately large-scale events is a second-order effect. For describing a fluid flow or a bending structure I don't care about randomness. It seems to me all of it is perfectly and mathematically determined.

Well, you demonstrated my point by saying that randomness in large-scale events is a second-order effect. There is the approximation. But I don't really agree with you; I mean, you don't call classical gas dynamics a second-order effect, right? Well, it is just an approximation of randomness... which has basically been averaged. But in its true nature, the constituent particles' motions are totally random...

marlon

What if we could come up with a system whose mathematical description could have no conceivable error? Would this example be a perfect reflection of reality? Or would there still be some gap between the linguistic, mental, mathematical world that conceives the system and the system itself that is described (a phenomenon vs. noumenon of sorts)?

Consider, for example, the case of an apple--a definitively defined object. Now take this apple and interact it with another apple, i.e. bring them into a localized space such that they may be considered parts of the same quantity. Now consider the equation:

$$1 + 1 = 2$$

Where "1" represents the unit quantity of the class of objects known as "apples", "+" represents the interaction of bringing two quantities into a localized space such that they may be considered one quantity, and "2" represents the behavior that has resulted. Then wouldn't the behavior of such a system be accurately described and exquisitely defined by this equation, with infinite precision? Why is it not then possible, at least in principle, for our knowledge of mathematics and physics and our computing power to become advanced enough to precisely define the behaviors of more complex systems, with infinite precision?

By the way, I voted for the second option.
Claude Bile
I would have thought that the existence of discrete variables, such as those found in logic, would have pointed to a definite yes. For example;

A + B = C

Where A and B are inputs to an OR gate and C is the output. A, B and C take on the values 0 or 1. Why doesn't this equation represent reality precisely?
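(A minimal sketch of that ideal OR relation in Python; my addition, not Claude's. For an ideal gate the equation holds exactly, which is the point:)

```python
def or_gate(a: int, b: int) -> int:
    """Ideal OR gate: output is 1 if either input is 1."""
    return a | b

# Exhaustive truth table -- with only the discrete values 0 and 1 there
# is no approximation error anywhere:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", or_gate(a, b))
```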

Claude.

Tom Mattson
Staff Emeritus
Gold Member
I voted "No". Clausius, not one of the theories you mention--great as they are--is applicable in all domains.

Navier-Stokes? Let's see what happens when you apply it when the fluid is relativistic. Or better yet, let's see what happens when you apply it when the assumption of continuity doesn't hold.

Structural Mechanics? Let's see what happens when you apply that theory to predict beam deflections in a spacetime in which Euclidean geometry is no longer valid.

Maxwell's Equations? Let's see what happens when you try to use them to explain pair production.

Analytical Mechanics? Let's see what happens when you try to derive the intensity distribution of electrons passing through a Young's 2-slit apparatus.

The theories listed above are not even cutting edge theories, inasmuch as their domains of applicability are known and have been surpassed by other theories already. And no one would be so bold as to say that the current cutting edge is the end.

I voted yes. I think you can never get a perfect model, but the model you use perfectly describes what it takes into account. I don't believe the fault is in the model of a specific event, but in the fact that the model doesn't take into account all of the variables.

µ³ said:
Consider, for example, the case of an apple--a definitively defined object.
An apple is not definitively defined; the atoms making up the apples are what make up the "apple" sets. In your example the sets don't match.
Claude Bile said:
I would have thought that the existence of discrete variables, such as those found in logic would have pointed to a definite yes. For example;

A + B = C

Where A and B are inputs to an OR gate and C is the output. A, B and C take on the values 0 or 1. Why doesn't this equation represent reality precisely?
This is a result; it doesn't say anything about the "action". And there is a possibility that 1 + 1 = 0, when the gate glitches.

Claude Bile
Logic equations are often represented by electronic circuits, but they don't have to be. Consider a statement with two conditions, for example;

I got up before 7am and went to bed after 9pm yesterday.

I know that this statement is true if and only if both conditions (me getting up before 7am and me going to bed after 9pm) are true. Such logic is not susceptible to electronic glitches.

Gold Member
Tom Mattson said:
I voted "No". Clausius, not one of the theories you mention--great as they are--is applicable in all domains.

Navier-Stokes? Let's see what happens when you apply it when the fluid is relativistic. Or better yet, let's see what happens when you apply it when the assumption of continuity doesn't hold.

Structural Mechanics? Let's see what happens when you apply that theory to predict beam deflections in a spacetime in which Euclidean geometry is no longer valid.

Maxwell's Equations? Let's see what happens when you try to use them to explain pair production.

Analytical Mechanics? Let's see what happens when you try to derive the intensity distribution of electrons passing through a Young's 2-slit apparatus.

The theories listed above are not even cutting edge theories, inasmuch as their domains of applicability are known and have been surpassed by other theories already. And no one would be so bold as to say that the current cutting edge is the end.

First of all, thanks for your reply, and thanks to Whozum, Claude and mu^3 for their replies too.

Look at the picture I have attached. I have tried to represent the internal error $$E$$ of a calculation done with the N-S equations versus the characteristic scale $$S$$ of the event. The bold line represents the real world, data obtained by experimentation at large, medium and small scales; it would have zero error. The thinner line represents the numerical result of computing the complete N-S model. Let's call $$DS$$ the characteristic interval of scales in which engineers move (typical human scales). For $$S$$ smaller than any scale contained in the interval $$DS$$, quantum-mechanical, small-size effects are represented. For $$S$$ greater than any scale contained in $$DS$$, relativistic-flow, large-size effects are represented (this is not strictly correct, because it would also depend on the flow velocity). You can see there are deviations on both sides with respect to the real world. But look at the centre of the interval: there the behavior of the N-S model is almost exact if it is computed correctly and with proper computer power. This entire thread is directed at this typical interval. Going to the borderlines of physics is an escape route. The fact is that the interval $$DS$$ is mathematically well defined, and randomness does not play an important role in it, except for the bounded randomness of turbulence.

I don't think N-S has been surpassed by other theories in its domain of application. Which ones? There must be an entire community of people wasting their time computing these equations; please let them know.

Tom Mattson
Staff Emeritus