Ultimate question: Why anything at all?

  • Thread starter: bohm2
AI Thread Summary
The discussion centers on the philosophical question of why there is something rather than nothing, highlighting the paradox of existence. Weinberg notes that while quantum mechanics provides a framework for understanding reality, it does not answer why these laws govern our universe. The argument suggests that with infinite possibilities, the probability of nothingness existing is effectively zero, implying that existence is more probable than non-existence. Participants express differing views on the implications of this reasoning, with some arguing it leads to nihilism, while others see it as a fundamental inquiry into the nature of reality. Ultimately, the conversation reflects on the complexity and depth of the question, emphasizing that it remains largely unanswerable.
  • #201
apeiron said:
This is a view of epistemology which I believe is quite wrong. All perception is modelling, never direct experience. The zen idea is mystical for claiming otherwise. You can whack yourself over the head as much as you please, but it won't change things.

I agree it is modeling, but the issue is how spontaneous the modeling is. The less spontaneous it is, the more abstract it becomes and the less aware.

I often compare it to learning how to play the piano. At first you have to study different things, but eventually the idea is to play more spontaneously. Either one without the other makes for a bad pianist.
 
  • #202
wuliheron said:
I agree it is modeling, but the issue is how spontaneous the modeling is. The less spontaneous it is, the more abstract it becomes and the less aware.

I often compare it to learning how to play the piano. At first you have to study different things, but eventually the idea is to play more spontaneously. Either one without the other makes for a bad pianist.

This is crazy. You are advising us to be unthinking as philosophers or scientists, to just act out of acquired habit.

There is a reason why Greek philosophy was eventually so fruitful, Eastern philosophy rather less so.

You are welcome to an opinion, to a position of faith or mysticism, but if you want to argue for something as an alternative way to do philosophy, you should move it to a separate thread.

Your pianist analogy is all muddled anyway. Practice allows for the unthinking, but the whole point then is to clear the way for continued thinking at a higher level of organisation. I can cite the relevant literature from creativity studies and neuroscience if you choose to open a separate thread.
 
  • #203
apeiron said:
This is crazy. You are advising us to be unthinking as philosophers or scientists, to just act out of acquired habit.

There is a reason why Greek philosophy was eventually so fruitful, Eastern philosophy rather less so.

You are welcome to an opinion, to a position of faith or mysticism, but if you want to argue for something as an alternative way to do philosophy, you should move it to a separate thread.

Your pianist analogy is all muddled anyway. Practice allows for the unthinking, but the whole point then is to clear the way for continued thinking at a higher level of organisation. I can cite the relevant literature from creativity studies and neuroscience if you choose to open a separate thread.

I never said we should just act out of acquired habit, and it is you who keeps trying to change the subject with these straw man arguments against everything I say, and now even biased statements against Asian philosophy.
 
  • #204
jambaugh said:
That is because the question is ill posed. "Why" questions have three contexts: causal, explanatory, and purposive:
"Why did the bridge collapse?"
"Why does lead become superconducting below a critical temperature?"
"Why did you slap me?"
  1. You can't invoke causality outside the domain of existence; indeed, doing so is a category error. Causation links events. Existence isn't an event; it is a collection of events. Causality works within this collection, not upon it.
  2. It may be instructive to try to explain the form of existence, such as is done in physics, but there are limits there.
  3. Questions of purpose presuppose a purpose holder. If I step on a rake in the dark and ask "why did you hit me?", I'm asking the purpose behind an accidental event. Before resolving purpose one must resolve the intentional vs accidental nature of the subject. Typically I see questions of purpose in attempts to deduce the existence of God: "There must be a God, else why do we exist?" But these are circular arguments.
    Premise: the "why" question is valid, i.e. there is a God; Conclusion: there is a God. (Personally I am agnostic, in that I believe this is a question of faith, not deduction.)

I think it is instructive to consider for the moment the mundane topic of interval notation in mathematics. I can represent a bounded interval $a \le x < b$ with the notation $x \in [a,b)$. We then extend this bit of language to include unbounded sets by defining a symbol $\infty$ as a place-holder for the absence of a bound: $x \in [a,\infty) \equiv a \le x < \infty \equiv a \le x$.
And we can even express $x \in \mathbb{R} = (-\infty,\infty)$.
But we may then make the error of objectifying this null symbol as if it represented an actual real number: "There must be a number $\infty$!" This symbol isn't something (in this context); it is a place-holder for nothing, used when our language format requires the bound be made explicit.
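
A small Python aside (my own illustration, not from the post): IEEE-754 floats do reify $\infty$ as an actual value, and the resulting arithmetic shows what goes wrong once the placeholder is treated as a number, while purely placeholder use stays harmless.

```python
# Illustration (my example): float('inf') is "infinity" reified as a value.
import math

inf = float('inf')
print(inf > 1e308)      # True  - it out-ranks every finite float
print(inf - inf)        # nan   - but ordinary arithmetic breaks down
print(inf / inf)        # nan
print(math.isinf(inf))  # True

# Used purely as a place-holder for "no upper bound", as in [a, inf),
# it is harmless: a <= x is the whole content of the notation.
def in_interval(x, a, b=float('inf')):
    """Membership test for [a, b); b=inf encodes the absent bound."""
    return a <= x < b

print(in_interval(5, 2))     # True: 2 <= 5 and no upper bound bites
print(in_interval(5, 2, 4))  # False: 5 is not < 4
```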

Now in mathematics we can of course invent infinite "numbers" and treat them as objects. But math is a game of mental construction, not in and of itself a study of nature.

We must be careful about similar constructs in philosophy: "first cause", "why everything?", etc. should be parsed for their implicit assumptions before we attempt to resolve answers.

Thanks for the interesting and informative response. Re your (3) ..

But the term 'accidental' is itself circular. In essence, it means 'an event whose cause is unknown (to you or me)' - but a cause nonetheless. Which brings it back to the same question.
 
  • #205
apeiron said:
Aristotle said there are four "why" questions. What you call "causality" here is just effective cause. There is also material, formal and final cause in his analysis. So a bridge exists because someone made it, it is made of something, it has some shape, and there was a reason that caused it to get made.

Reductionists want to reduce all these sources of causality to just the question of effective cause. Though they also need some kind of local material stuff - a substance - that can carry this effective cause as a property or force.

The "why anything" question then leads to a further problem of first cause - primum movens. And a reductionist will read this as the call to find some ultimate kind of effective cause (such as a creating god).

But the point of having a more complex model of causality such as Aristotle's is that primum movens can also be a complicated "four causes" story. As some of the arguments presented in the thread illustrate.



Your claim here rests on the assumption that effective cause is "the whole of causality". And that reality is a mereological bundle.

A holistic view would agree that all causes would have to be internal to "existence". A world would have to be ultimately self-causing - and this is a problem!

But there is a richer arsenal of causality available. The holistic view would also be a process view - worlds would develop and endure, or persist rather than exist.

This is in turn what leads to the necessity for a vague~crisp distinction. It underpins a view of holistic causality in which a process can arise from "nothing".



Again, what reductionists really want to get rid of is teleological cause. And it is easy to supply examples which make it seem obvious the world is just blindly materialistic, absent of purpose, goals, will or meanings, and only humans are different in this regard.

But science still finds it hard not to frame its laws of nature in teleological fashion (thou shalt evolve, thou shalt dissipate, thou shalt gravitate, thou shalt follow the least mean path.)

And a systems thinker will argue that the correct approach to human purpose and meaning is to generalise it. You can "water it down" so that you have a hierarchy of final cause such as
{teleomaty {teleonomy {teleology}}}, or in more colloquial language, {propensity {function {purpose}}}. See for example - http://cosmosandhistory.org/index.php/journal/article/viewFile/189/283

I mentioned already the connection between the problem of final cause and the problem of wavefunction collapse. It was no accident that early interpretations wanted to put the cause in the mind of the human observer, while more recent ones try to put it out in a thermal environment or invoke retrocausality from future constraints.

So this is a very live subject even in science.

The thing to beware of is not turning final cause into another super-species of effective cause. It can't be merely "triggering event" seen on a larger scale (which is the kind of notion of a blue touch paper God you have in mind). It has to be something else, otherwise there is no need to distinguish it as a further aspect of causality.

So final cause needs to be identified with global constraints, downwards causality - some way in which the ends do justify the means.

I would agree that this is the least well developed part of our ideas about causality as yet. But that is what makes it interesting I guess. And asking the "why anything" question is particularly instructive in this regard.



Exactly, we must dichotomise to clarify. To be able to model causality, we must divide it suitably.

And here there may actually be novel metaphysics. The Greeks did divide things into chance and necessity. But we know that randomness and determinism are still problematic concepts in science. What is a fluctuation really?

There is a general distinction of reality into its local degrees of freedom and global constraints that seems to work. But the story does not seem quite in focus yet.



Yes, because they are actually just attempts to use the notion of effective cause to explain everything.



Correct. Even in metaphysics, we are constructing models of causality. We are breaking things down in ways that seem to work, seem to be true, but we must bear in mind that they still are just models and so may bear secret traces of their makers.

The great yawning silence and banging of heads on tables that usually greets the "why anything" question is the sound of people confronting the limitations of their conceptual tools.

Which is why it is a great question. It forces you to find better conceptual tools.


A great question (why anything at all) - quite so.

And, having forced yourself to find and employ those better conceptual tools, as you admirably do here, what have YOU constructed with them?
 
  • #206
A new review paper from Smolin gives an idea about how philosophically-minded physicists are thinking about the "why anything" problem of cosmogenesis.

A perspective on the landscape problem, Lee Smolin, 15 Feb 2012
http://arxiv.org/pdf/1202.3373.pdf

Using the arguments of Peirce and others (Wheeler, Dirac), Smolin says the landscape problem of string theory is in fact a general issue for any approach to a theory of everything (ToE) because the questions of development and evolution always break into two parts - the material basis for change (the local degrees of freedom) and then the global constraints that pick out some particular outcome from those degrees of freedom.

So the fact that string theory ended up with an open-ended infinity of possible solutions is no surprise. The problem next is to identify the separate dynamical principle that might break this unlimited symmetry.

Here Smolin attempts to put this systems view of causality centre-stage...

But the strongest reason to expect that the landscape problem is not an anomaly of string theory is that it has deep historical roots, which I sketch in the next section. It might have been anticipated a long time ago - and indeed it was. These historical roots of the landscape problem suggest that the landscape problem was bound to occur as physics progressed. As I will argue, it is an inevitable consequence of the general form we have assumed for physical theories since Newtonian mechanics.

He then introduces the idea that laws, that is global constraints, have to evolve. They have a history of development and were not crisply "there" at the beginning...

...Dirac had proposed that laws of physics may evolve:
At the beginning of time the laws of Nature were probably very different from what they are now. Thus, we should consider the laws of Nature as continually changing with the epoch, instead of as holding uniformly throughout space-time[22].

...When logical implication is insufficient, the explanation must be found in causal processes acting over time. This was understood clearly more than a century ago by Charles Sanders Peirce, the founder of the school of philosophy called American pragmatism:
To suppose universal laws of nature capable of being apprehended by the mind and yet having no reason for their special forms, but standing inexplicable and irrational, is hardly a justifiable position. Uniformities are precisely the sort of facts that need to be accounted for. Law is par excellence the thing that wants a reason. Now the only possible way of accounting for the laws of nature, and for uniformity in general, is to suppose them results of evolution[23].

Then Smolin makes an insightful point about the "first moment" for any developing system being organised by its constraints.

Time=0 is the singularity (unlimited possibility) and so constraints can't even begin to be present until some fraction of time has gone by. The constraints must lie in the future of what exists (even if by the tiniest fraction).

This is crucial to the point of the "why anything" question of whether the existence of things is caused from the outside (as by some earlier effective cause such as a creating god), or whether existence can be self-causing, bootstrapping out of unlimited potential.

Well in fact Smolin equivocates on this. He allows that the constraints might be present at t=0. But then his further comments on the dynamical emergence of constraints are a strong argument for the "shortly after" alternative...

We can also apply Leibniz’s principle of sufficient reason to the problem of the selection of the initial conditions of the universe. It is a fact that in general relativity - and presumably in any field theory of gravitation - there are an infinite number of solutions of the field equations which have an initial singularity. To apply general relativity to cosmology, it is then necessary to give the initial conditions at - or shortly after - the singularity. The choice of initial conditions requires explanation. If we are optimistic and believe all questions about the universe are answerable, then that explanation must satisfy the principle of sufficient reason. If no sufficient reason can be given within a given theory, then that theory must be wrong.

Then for me Smolin goes astray, because he argues that evolutionary stories are simply historical (which is in fact quite true of the evolutionary part of evo-devo - selection is contingent - but not necessarily of the developmental part, as development shows a mathematical regularity in its self-organisation)...

What kind of explanations can count as sufficient reason for a law or theory? As I argued before, there are two general kinds of explanations that could be advanced to account for a state of affairs. Reasons can be logical or they can be historical. They both may serve, but they have very different consequences for the methodology of science. This is because logical explanations can be complete while our knowledge of the past is always incomplete.

This then leads Smolin to the perhaps unnecessarily pessimistic view that...

...in some circumstances, the demand for sufficient reason must result in a confession of ignorance, when causal chains are pushed back into the past to the point where our present knowledge of the past ends. This is better than proclaiming first movers or initial states which are not subject to further explanation in terms of their pasts, and so cannot be further improved as our observations of the past improve.

Then Smolin swings back to the failure of the reductionist Newtonian paradigm...

Because this framework has been so successful when applied to the small subsystems of the universe, it appears almost obvious that when we come to the task of developing a cosmological theory, we should just scale it up to include the whole universe in the state space, C. However, as successful as it has been, this schema for physical theories cannot be applied to the universe as a whole. There are several distinct reasons for this...

...Any theory formulated in the Newtonian paradigm will have an infinite number of solutions. But, the universe is unique - so only one cosmological history is physically real. The Newtonian paradigm is then very extravagant when applied to cosmology because it not only makes predictions about the future of the one real universe, it offers predictions for an infinite number of universes which are never realized. The Newtonian paradigm cannot explain why the one solution that is realized is picked out from the infinite number of possibilities...

...One way to express the cosmological fallacy is through the following cosmological dilemma. The Newtonian paradigm expresses the forms of all the laws we know which have been thought of as exact. Nonetheless, every law formulated and verified within the Newtonian paradigm can only apply to a bounded domain and hence is approximate.

The rest of the paper then goes off into a recap of the familiar bounce and eternal inflation stories that Smolin's Darwinian perspective - the old-hat Modern Evolutionary Synthesis of the 1940s - favours.

As said, a properly modern evo-devo approach would put the focus squarely on the issue of the development of constraints rather than the secondary matter of the evolution of constraints.

But still the paper shows that Smolin continues to lead the charge when it comes to thinking about how to think about the scientific modelling of cosmogenesis. How to answer the "why anything" question with a response other than "just because". :wink:
 
  • #207
alt said:
Thanks for the interesting and informative response. Re your (3) ..

But the term 'accidental' is itself circular. In essence, it means 'an event whose cause is unknown (to you or me)' - but a cause nonetheless. Which brings it back to the same question.
Not circular, and not about known or unknown cause. It is used to qualify the absence of purposeful intent. A raindrop may cause a pebble to fall, instigating an avalanche. Or I may decide to set one off just as my enemy is passing the road beneath. One event is accidental, the other intentional. It isn't an issue of cause or lack of cause but intent or lack of intent.

A believer in an omnipotent, omniscient God would reject the possibility of accidental events altogether (every leaf that falls, etc.). But they cannot then, after the fact, reverse the implication, saying the impossibility of accidental events proves God's existence. That is indeed circular, the two assertions being equivalent.

If however you begin with the possibility (as in lack of asserted impossibility) of both accidental and purposeful causes and ask the question, then it may be valid to inductively argue the existence of God from the existence of life, if one can show it is too improbable, even on the scale of the size and age of the universe, to be accidental. Valid in form but not, I believe, valid under analysis. I've seen such arguments, but they typically misrepresent physical assumptions (most often misapplying thermodynamic principles).

I stray from the point here but only as a demonstration of the use of "accidental" in a context.
 
  • #208
The question, though, is 'how do you define intent?'

Do you think that there is a possibility that our actions are actually 'controlled' to some degree where we think otherwise?

In other words: is the feeling of 'free-will' masking an underlying hidden order?

I do like how you have made your internal thoughts highly concise on these forums, and I thought it would be important to have a discussion on how you define 'accidental' vs 'intentional' in a more refined manner (i.e. mathematically).

Is the 'intentional' (like human intent) purely probabilistic, while the accidental is completely deterministic (though no doubt possibly chaotic)?

How are you willing to explicitly qualify the argument for the intentional being purely probabilistic and the 'accidental' being somewhat more deterministic if this is the case?

The reason I bring this up is because of our narrow scope of looking at things as human beings. Many people can't even deal with systems with more than say 10 variables, and that's for a complex system! A lot of the general population finds it hard to deal with more than 5!

If we have a system involving millions if not billions of variables (probably a dozen more orders of magnitude higher than that), then with our limited capacity it would make sense that we use a probabilistic framework, since it reduces the system to a level that is able to be grasped by our minds at the current time.

Pythagorean said this in another thread (I'll dig it up if you want) stating that (and I'm paraphrasing here) "Determinism and probability are not incompatible with one another" and in the context of the above statement I have no doubt that his statement is correct.
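
To make that compatibility concrete, here is a minimal sketch (my own example in plain Python, not Pythagorean's): a fully deterministic map whose long-run behaviour is captured by a fixed probability density.

```python
# Sketch (my example): the logistic map x -> 4x(1-x) is deterministic,
# yet its orbit statistics follow the invariant density 1/(pi*sqrt(x(1-x))).
import math

def orbit(x0, n):
    """Yield n iterates of the deterministic logistic map."""
    x = x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        yield x

n = 100_000
# How often does the deterministic orbit visit [0.4, 0.6)?
hits = sum(1 for x in orbit(0.123456, n) if 0.4 <= x < 0.6)
print("observed fraction: ", hits / n)

# The invariant density predicts (2/pi)*(asin(sqrt(0.6)) - asin(sqrt(0.4)))
predicted = 2 / math.pi * (math.asin(math.sqrt(0.6)) - math.asin(math.sqrt(0.4)))
print("predicted fraction:", predicted)  # ~0.128, matching the orbit count
```

Nothing random enters anywhere, yet the probabilistic description is the tractable one - which is all the quoted statement needs.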
 
  • #209
apeiron said:
Aristotle said there are four "why" questions. What you call "causality" here is just effective cause. There is also material, formal and final cause in his analysis. So a bridge exists because someone made it, it is made of something, it has some shape, and there was a reason that caused it to get made.
As a matter of semantics Aristotle's αἴτιον can be translated as "cause" but I prefer a narrower definition of the common word. Rather αἴτιον="the why of something".

I also find this breakdown overly objective. (Deterministic) Cause should not be ascribed to objects but to events. That being my position, I see "material cause" losing equal status with the other types. Rather than parsing the causes of a bridge, one would consider the cause for the event of its coming into existence or its sustained existence. "Final" or "teleological" cause then might be ascribed to its creation, or, even for a "natural" bridge created accidentally, to its maintenance. This is clear when the bridge is used as a bridge, and not when you so label a tree fallen over a stream absent its utility.

As to formal cause, I'm not sure that applies outside an ontological bias, except in the modified phenomenological form I described as "explanatory": constraints on phenomena we think of as natural laws, e.g. conservation laws, relativity principles, thermodynamic laws, and such.

Your claim here rests on the assumption that effective cause is "the whole of causality". And that reality is a mereological bundle.
I shy from the term "reality" except as an object model we use... i.e. it is a mereological bundle, or rather a collection of mereological bundles, these being the objects of reality bundling the categorical phenomena we observe. Stepping aside from the loaded word "reality" and replacing it with "actuality" (that which happens), I'm not "rejecting" actively so much as narrowing the semantics of the word "cause" as I'm using it. This semantic bias comes from how the term is used in physics.

But there is a richer arsenal of causality available. The holistic view would also be a process view - worlds would develop and endure, or persist rather than exist.
Here, you're extending the term semantically. A holistic process view may assert emergence of higher order phenomena, but it still resides in the same causal framework as the pre-emergent world. We can speak of caustic soda causing a change of pH without invalidating the level of (effective) causation below the point of emergence (chemistry) at which "caustic" and "pH" have been given meaning. There is still the interaction of the fundamental particles and forces at work. Likewise up the emergent chain to bridges and murder weapons. One cannot ascribe meaning to a weapon based on the configuration of atoms; rather, its teleological purpose defines it. It nonetheless obeys the same fundamental physical laws, cause and effect, as does any elementary particle in physics. The hand that bludgeons the brain is applying a force, the entropy of the brain is being (fatally) increased, the heat engine of the victim's body is being permanently interrupted, etc.

Again, what reductionists really want to get rid of is teleological cause.
There is a middle ground. One can be a reductionist about "effective cause" while understanding and giving full weight to emergence. Chemistry isn't just physics, biology isn't just chemistry, and willful action (teleological cause) is not just biology. Yet each level emerges from and has effective cause wholly within the other. I can believe love as a phenomenon has no mystical component beyond the material phenomena physicists study and yet understand that reducing love to a series of particle interactions is totally meaningless,... and of course still believe in love itself.

And also don't confuse a semantic misalignment with a disagreement in opinion. Don't label someone a reductionist because they mean something different than you do when they use a particular word.
...
Which is why it is a great question. It forces you to find better conceptual tools.
Well, I don't always see it forcing people that way. I can see the possible merit, as with e.g. paradoxes in physics, of emphasizing and making conceptual errors explicit. But I have more often seen such questions used by the questioner to excuse their rejection of the effort to understand. Rather than clarifying the question, they reject belief in the process of questioning and take the pat answer, e.g. mysticism.
 
  • #210
jambaugh said:
There is a middle ground. One can be a reductionist about "effective cause" while understanding and giving full weight to emergence. Chemistry isn't just physics, biology isn't just chemistry, and willful action (teleological cause) is not just biology. Yet each level emerges from and has effective cause wholly within the other. I can believe love as a phenomenon has no mystical component beyond the material phenomena physicists study and yet understand that reducing love to a series of particle interactions is totally meaningless,... and of course still believe in love itself.
I find this topic both interesting and confusing. A reductionist can always argue that the reason full reduction (really unification) is not possible at present is that the "foundational" branch (e.g. physics) is not yet complete, or that it is due to our own cognitive limitations (limitations of the observer). Here's the basic argument:
Where there is discontinuity in microscopic behavior associated with precisely specifiable macroscopic parameters, emergent properties of the system are clearly implicated, unless we can get an equally elegant resulting theory by complicating the dispositional structure of the already accepted inventory of basic properties...such hidden-micro-dispositions theories are indeed always available. Assuming sharply discontinuous patterns of effects within complex systems, we could conclude that the microphysical entities have otherwise latent dispositions towards effects within macroscopically complex contexts alongside the dispositions which are continuously manifested in (nearly) all contexts. The observed difference would be a result of the manifestation of these latent dispositions.
Thus, a reductionist can claim that because we still lack these micro-dispositions (e.g. physics is not complete), strong emergence with its dualist flavour is really an illusion.

Emergent Properties
http://plato.stanford.edu/entries/properties-emergent/

But others suspect that the non-locality and non-separability/contextuality implied by Bell's, Kochen–Specker (KS) theorem, etc. can be interpreted as a good argument for strong emergence and bi-directional causality as argued here:
The classical picture offered a compelling presumption in favour of the claim that causation is strictly bottom up-that the causal powers of whole systems reside entirely in the causal powers of parts. This thesis is central to most arguments for reductionism. It contends that all physically significant processes are due to causal powers of the smallest parts acting individually on one another. If this were right, then any emergent or systemic properties must either be powerless epiphenomena or else violate basic microphysical laws. But the way in which the classical picture breaks down undermines this connection and the reductionist argument that employs it. If microphysical systems can have properties not possessed by individual parts, then so might any system composed of such parts...

Were the physical world completely governed by local processes, the reductionist might well argue that each biological system is made up of the microphysical parts that interact, perhaps stochastically, but with things that exist in microscopic local regions; so the biological can only be epiphenomena of local microphysical processes occurring in tiny regions. Biology reduces to molecular biology, which reduces in turn to microphysics. But the Bell arguments completely overturn this conception.
For whom the Bell arguments toll
http://faculty-staff.ou.edu/H/James.A.Hawthorne-1/Hawthorne--For_Whom_the_Bell_Arguments_Toll.pdf
 
  • #211
bohm2 said:
I find this topic both interesting and confusing. A reductionist can always argue that the reason full reduction (really unification) is not possible at present is that the "foundational" branch (e.g. physics) is not yet complete, or that it is due to our own cognitive limitations (limitations of the observer).
Again I suggest a third road. It is a matter of complementarity of observables. E.g. the biological definition of "alive" for Schrödinger's cat is complementary to observing superpositions of "cat states" (which would entail statistical experiments on large numbers of identically prepared cats, which in turn would need to be cooled to near absolute zero.)

The physics can be complete (q-complete, i.e. maximal), but micro-scale complementarity can also manifest on the macro-scale. I think this may be a fundamental aspect of emergence.
Thus, a reductionist can claim that because we still lack these micro-dispositions (e.g. physics is not complete), strong emergence with its dualist flavour is really an illusion.
But others suspect that the non-locality and non-separability/contextuality implied by Bell's, Kochen–Specker (KS) theorem, etc. can be interpreted as a good argument for strong emergence and bi-directional causality as argued here:
Don't get me started on Bell, EPR and non-locality. Non-separability yes, but locality in the premise of Bell's derivation is simply a means to assure a reasonable assumption of non-causal interaction between measurement processes. Focusing too much on this causation business (typically due to improper reification of the system representation) distracts from the real implications of QM entanglement.

It is again an issue of complementarity. Here it is between the observed q-correlation we define as "entanglement" and the separation of the composite system into a specific pair of component systems.

This discussion has me thinking then... of the possibility of putting some rigor into some definitions in the emergence camp by invoking complementarity. Hmmm...
 
  • #212
jambaugh said:
I stray from the point here but only as a demonstration of the use of "accidental" in a context.

The issue of spontaneous vs purposeful action is in fact very on point when it comes to "why anything" cosmogenesis.

The world seems full of spontaneous happenings. Quantum fluctuations for a start.

The reductionist model of causality finds them difficult to explain. If every event must have a prior cause, then nothing can be chance, everything must be determined. Spontaneity would have to be an illusion due to our lack of knowledge of the "hidden variables".

But a systems model of causality puts it the other way round. The difficulty lies more in preventing spontaneity. :smile:

The presumption is that reality begins with a potential - a set of degrees of freedom. And then constraints are imposed on this generalised freedom so as to limit its actions to more determinate paths. Dynamism begins unbound, going off in all directions (and so failing to show any particular direction). Constraints then organise this world so that the intrinsic spontaneity is channeled. It is still fundamentally there, but starts to behave in predictable fashion.

It is rather like the way a car engine explodes a gas vapour in all directions, but all the energy gets constrained to have a definite direction.

In QM language, you have indeterminacy and then the addition of constraints that "collapse" that indeterminacy - or more properly, restrict it below some chosen epistemic threshold.

We can then take this basic model of spontaneous action - uncaused fluctuations in a state of unformed potential - as the vague ground out of which a universe could develop.
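
As a toy illustration of that channelling (my sketch, not apeiron's model), compare the same stream of unbiased coin flips with and without a single constraint:

```python
# Sketch (my example): identical unbiased randomness, with and without
# a constraint. A reflecting wall at 0 channels the same coin flips
# into a definite direction without removing the spontaneity itself.
import random

random.seed(1)
steps = [random.choice([-1, 1]) for _ in range(100_000)]

free = 0
constrained = 0
for s in steps:
    free += s
    constrained = max(0, constrained + s)  # wall at 0 blocks downward moves

print("free walk endpoint:       ", free)         # no preferred sign: depends on the seed
print("constrained walk endpoint:", constrained)  # never negative: channeled one way
```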

Here is Peirce outlining his own philosophy on this back in 1891...
http://www.cspeirce.com/menu/library/aboutcsp/brier/mysticism.pdf

I may mention that my chief avocation in the last ten years has been to develop my cosmology. This theory is that the evolution of the world is hyperbolic, that is, proceeds from one state of things in the infinite past, to a different state of things in the infinite future.
The state of things in the infinite past is chaos, tohu bohu, the nothingness of which consists in the total absence of regularity. The state of things in the infinite future is death, the nothingness of which consists in the complete triumph of law and absence of all spontaneity.
Between these, we have on our side a state of things in which there is some absolute spontaneity counter to all law, and some degree of conformity to law, which is constantly on the increase owing to the growth of habit.(Vol. 8, p. 317)
 
  • #213
jambaugh said:
Don't get me started on Bell, EPR and non-locality. Non-separability yes, but locality in the premise of Bell's derivation is simply a means to assure a reasonable assumption of non-causal interaction between measurement processes. Focusing too much on this causation business (typically due to improper reification of the system representation) distracts from the real implications of QM entanglement.
I'm not sure I understand, but non-separability arguably has the same consequences, since the predictions of QM depend on the 3N-dimensional space, and that structure gets lost in the 3-dimensional representation (e.g. information about correlations among different parts of the system that are experimentally observed is left out):
Not all the relevant information about the sub-system is contained in its density operator (obtained by partial tracing). A fraction of this information is missing, and it is contained only in the state vector of the overall system. Any separate description of parts, and then any dual description of parts and whole, then looks artificial. As a consequence, the concept of inter-level causation looks nonsensical in the highly holistic domain described by quantum mechanics. Since there is no way to separate the states of the parts from the state of the whole, it sounds absurd to call one the cause and the other the effect, as if they were two different things.
Downward Causation without Foundations
http://michel.bitbol.pagesperso-orange.fr/DownwardCausationDraft.pdf
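
To make the quoted point concrete, here is the standard textbook computation (my code, using NumPy): the reduced density matrix of half a Bell pair is maximally mixed, and identical to that of an unentangled mixture, so the correlations live only in the joint state.

```python
# Sketch (standard computation, my code): partial-tracing a Bell state.
import numpy as np

# Bell state |phi+> = (|00> + |11>)/sqrt(2) as a vector in C^4
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(phi, phi.conj())  # joint density matrix, 4x4

# Partial trace over the second qubit: rho_A[i,j] = sum_k rho[(i,k),(j,k)]
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_A)  # [[0.5, 0], [0, 0.5]] - maximally mixed

# An unentangled 50/50 mixture of |00> and |11> has the same reduced state,
# so the subsystem description cannot distinguish the two situations.
mix = 0.5 * np.diag([1.0, 0.0, 0.0, 0.0]) + 0.5 * np.diag([0.0, 0.0, 0.0, 1.0])
mix_A = mix.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(np.allclose(rho_A, mix_A))  # True: the correlation information is gone
```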
 
  • #214
jambaugh said:
This discussion has me thinking then... of the possibility of putting some rigor into some definitions in the emergence camp by invoking complementarity. Hmmm...

Indeed this is fundamental in my view. It is the absolute key. And there is never a post where I'm not saying this. :-p
 
  • #215
jambaugh said:
Not circular, and not about known or unknown cause. It is used to qualify the absence of purposeful intent. A raindrop may cause a pebble to fall, instigating an avalanche. Or I may decide to set one off just as my enemy is passing the road beneath. One event is accidental, the other intentional. It isn't an issue of cause or lack of cause but intent or lack of intent.

Ok - let's go with 'intent or lack of intent', and extrapolate your examples.

A raindrop may cause a pebble to fall, instigating an avalanche .. (lack of intent), then the rescue services swing into gear to save lives (intent).

I may decide to set one (avalanche) off just as my enemy is passing the road beneath .. (intent) because an accidental fire (lack of intent) is burning my food supply and I will most likely starve otherwise.

Both cases are legitimate. Intent from lack of intent. Lack of intent from intent.

If you accept the scientific position, can you really ascribe any special quality to your intentions? Are they not a result of natural forces - merely an extension of the same principles that govern the raindrop causing the pebble to fall, causing the avalanche?

Is there something special about our intentions? Intelligence perhaps? Caused by an unintentional, unintelligent Big Bang (no God)? Or was it intelligent and intentional (God)? Or don't we know, are we not sure? Which brings it back to the ultimate question - why anything at all.

A believer in an omnipotent, omniscient God would reject the possibility of accidental events altogether (every leaf that falls, etc.). But they cannot then, after the fact, reverse the implication, saying the impossibility of accidental events proves God's existence. That is indeed circular, the two assertions being equivalent.

Yes, I don't disagree. 'God / not God' is not the intent (lol) of my involvement here.

If however you begin with the possibility (as in lack of asserted impossibility) of both accidental and purposeful causes and ask the question, then it may be valid to inductively argue the existence of God from the existence of life, if one can show it is too improbable, even on the scale of the size and age of the universe, to be accidental. Valid in form but not, I believe, valid under analysis. I've seen such arguments, but they typically misrepresent physical assumptions (most often misapplying thermodynamic principles).

I stray from the point here but only as a demonstration of the use of "accidental" in a context.

I personally think that the word 'accidental', and its fluid use, goes to the heart of the context and the point (the OP) of this thread.
 
  • #216
bohm2 said:

Thanks for the pointer to that paper. Another good exposition of the systems perspective.

I agree with pretty much all Bitbol's approach except that I think he needs to add the notion of the epistemic cut to extend causality to semiotically-organised systems.

His focus is on instances where there is downward effective cause (rather than just downward constraint). For a system to actually choose its state in this fashion, it needs some kind of internal machinery of control.

But otherwise, it is nice to see a systems take on QM, and then the generalisation of that view of causality.
 
  • #217
apeiron said:
The issue of spontaneous vs purposeful action is in fact very on point when it comes to "why anything" cosmogenesis.

The world seems full of spontaneous happenings. Quantum fluctuations for a start.

The reductionist model of causality finds them difficult to explain.
Doesn't the word spontaneous refer to unpredictable events? Aren't they called spontaneous because we can't predict or precisely explain them? Nevertheless, even so-called spontaneous events are differentiated, and predictable within certain bounds, wrt certain antecedent conditions, even in quantum experiments.

apeiron said:
If every event must have a prior cause, then nothing can be chance, everything must be determined. Spontaneity would have to be an illusion due to our lack of knowledge of the "hidden variables".
Determinism is an assumption. If one assumes it, then lack of explanatory or predictive ability is due to ignorance.

The assumption of indeterminacy doesn't seem to fit with our observations of the basically orderly, and predictable, evolution of the world, the universe.

apeiron said:
But a systems model of causality puts it the other way round. The difficulty lies more in preventing spontaneity. :smile:

The presumption is that reality begins with a potential - a set of degrees of freedom.

And then constraints are imposed on this generalised freedom so as to limit its actions to more determinate paths.
Degrees of freedom wrt what? Where do the constraints come from?

In a certain view, it (everything, anything) starts with (is ultimately attributable to) a fundamental wave dynamic ... something that the behavior of any and all disturbances in any and all media at any scale has in common.

This fundamental wave dynamic is what constrains the evolution of the whole and determines the emergence of discernibly bounded systems (ie., particles and objects).

apeiron said:
Dynamism begins unbound, going off in all directions (and so failing to show any particular direction). Constraints then organise this world so that the intrinsic spontaneity is channeled. It is still fundamentally there, but starts to behave in predictable fashion.
In a certain view, a fundamental dynamics determines the bounds and possible evolution. Spontaneity isn't, in this view, intrinsic or fundamental. It just refers to our ignorance.

Why not assume a fundamental wave dynamic and see where it leads?

And by the way thanks for your and others' replies to my previous questions/comments.
 
  • #218
bohm2 said:
I'm not sure I understand, but non-separability arguably has the same consequences, since the predictions of QM depend on the 3N-dimensional space, and that structure gets lost in the 3-dimensional representation (e.g. information about correlations among different parts of the system that are experimentally observed is left out):

Downward Causation without Foundations
http://michel.bitbol.pagesperso-orange.fr/DownwardCausationDraft.pdf
Hi Bohm. I glanced through Bitbol's paper and it seems like another of his I'd read entitled "Ontology, Matter and Emergence", where he seems to try to blur the concepts of upward versus downward causation. This attempt to blur upward versus downward causation has always bothered me regarding his work. So since you seem familiar with it, I'd like your opinion of what he's saying.

His discussion on the non-separability of quantum systems that you picked out is interesting though. Does Bitbol recognize and acknowledge the separability of classical systems? By separability I'm referring to what I would consider the very mainstream view as described by Karakostas "Forms of Quantum Nonseparability and Related Philosophical Consequences" for example:
The foregoing concise analysis delimits the fact, upon which the whole of classical physics is founded, that any compound physical system of a classical universe can be conceived of as consisting of separable, distinct parts interacting by means of forces, which are encoded in the Hamiltonian function of the overall system, and that, if the full Hamiltonian is known, maximal knowledge of the values of the physical quantities pertaining to each of these parts yields an exhaustive knowledge of the whole compound system. In other words, classical physics obeys a separability principle that can be expressed schematically as follows:

Separability Principle: The states of any spatio-temporally separated subsystems S1, S2, ... SN of a compound system S are individually well defined and the states of the compound system are wholly and completely determined by them and their physical interactions including their spatio-temporal relations...
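
A minimal toy sketch of what this principle asserts (my own example, not from Karakostas's paper): each subsystem state is complete on its own, and the compound system, interaction included, is a function of those states and nothing more.

```python
# Sketch (my toy model): classical separability. The compound state is
# just the tuple of subsystem states, and the full Hamiltonian -
# interaction included - is a function of those states alone.
def total_energy(state1, state2, k=1.0, k_int=0.2, m=1.0):
    """Two coupled oscillators; everything follows from the parts."""
    x1, p1 = state1
    x2, p2 = state2
    h1 = p1**2 / (2 * m) + 0.5 * k * x1**2  # subsystem 1 alone
    h2 = p2**2 / (2 * m) + 0.5 * k * x2**2  # subsystem 2 alone
    h_int = 0.5 * k_int * (x1 - x2)**2      # interaction between them
    return h1 + h2 + h_int

print(total_energy((1.0, 0.0), (0.0, 1.0)))  # 1.1: no joint residue left over
# The quantum case fails exactly here: an entangled state vector cannot
# be factored into such a tuple of individually well-defined states.
```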

I see Bitbol refers to the concept of Humphreys regarding the fusion of properties, and seems to limit this fusion of properties to non-separable QM systems, which seems very reasonable.

Do you think Bitbol's conception of downward causation in classical physics (or lack thereof) mirrors Bedau's conception of it, which is to say macro to micro causation is false? Is Bedau limiting the concept of downward causation (or "fusion" of properties) to non-separable systems?
 
  • #219
ThomasT said:
Doesn't the word spontaneous refer to unpredictable events? Aren't they called spontaneous because we can't predict or precisely explain them? Nevertheless, even so-called spontaneous events are differentiated, and predictable within certain bounds, wrt certain antecedent conditions, even in quantum experiments.

Determinism is an assumption. If one assumes it, then lack of explanatory or predictive ability is due to ignorance.

Yes, that is the question here. We have the phenomenon - spontaneous events. Then we make our models about "what is really going on".

The usual model is reductionist. Because we "know" all events are fundamentally deterministic/local/atomistic/mechanical/monistic/etc., spontaneity is really just an epistemic issue. We only lack the hidden detail of the micro-causes.

But logically, we can also take a Heraclitean or process view of reality. Determinism/etc is emergent. All is fundamentally flux and regularity arises as a restriction on this inherent dynamism.

So you can have the same observation - some spontaneous looking event - and explain it either as secretly deterministic or instead what happens due to a lack of constraints.

I realize you don't believe QM nonlocality is a genuine issue. But for those that do, a systems view of spontaneous events now fits the evidence much better.

Bitbol's paper is an example of that approach.
 
  • #220
Q_Goest said:
Do you think Bitbol's conception of downward causation in classical physics (or lack thereof) mirrors Bedau's conception of it, which is to say macro to micro causation is false? Is Bedau limiting the concept of downward causation (or "fusion" of properties) to non-separable systems?
Yes, I think he would agree with Bedau. I interpreted him as arguing that higher-level facts or events constrain/modify or form a context for the lower level stuff, so they are not independent of the higher level, so the whole notion of upward/downward causation is misconceived, especially since he also argues that while lower-level facts or events are necessary, they aren't sufficient for higher-level ones. I think Bedau takes the same position? So it's all contextual. He does offer an interesting argument against panpsychism in the paper below, and argues that his model can circumvent the mind-body problem without leading to panpsychism, but I don't understand his argument. Maybe someone who does can explain it to me?
This possibility of “downward causation” from experience to physiology could be taken by some as mere evidence that conscious experience emerges from a neurophysiological basis in the “strongest” sense of the concept of emergence, as opposed to “weak” emergence (Bedau, 1997). But suppose we add a further constraint. Suppose we adopt a very strict criterion of emergence. Galen Strawson recently submitted this type of criterion: “For any feature of E (anything that is considered to be Emergent from the Basis B), there must be something about B and B alone in virtue of which E emerges, and which is sufficient for E” (Strawson, 2006). The problem is that, as I have suggested more and more insistently, there is nothing specific about functions, neural tissues, or molecular structures in virtue of which conscious experience should emerge. Any loose talk of emergence of consciousness from brain processes in the name of mere correlations, or even mere experiments of mutual triggering, then appears to be ruled out by this strong criterion. From the latter negative statements, Strawson infers that conscious experience is nothing emergent at all. Combining this inference with a materialistic monistic principle, he concludes in favor of panpsychism, or rather pan-experientialism. But, then, his problem is to explain how micro-experiences “add up” to full-fledged human consciousness. Moreover, it is not easier to understand why and how an atom has elementary experience than to understand why and how a living human brain has an elaborated consciousness. Ascribing micro-experiences to atoms just seems an ad hoc additional postulate about matter. So, at this point, we are still completely stuck, with no idea whatsoever about how to handle the “hard problem” of the origin of conscious experience in an objectified nature.
Is Consciousness primary?
http://philsci-archive.pitt.edu/4007/1/ConsciousnessPrimaryArt2.pdf
 
  • #221
apeiron said:
The usual model is reductionist. Because we "know" all events are fundamentally deterministic/local/atomistic/mechanical/monistic/etc., spontaneity is really just an epistemic issue. We only lack the hidden detail of the micro-causes.
This is the way I currently view it. With the reduction being toward ever more general, i.e. fundamental, dynamics. In this view, the microcosmos isn't any more fundamental than the macrocosmos, because ontology isn't what's fundamental. Dynamics is.

apeiron said:
But logically, we can also take a Heraclitean or process view of reality. Determinism/etc is emergent. All is fundamentally flux and regularity arises as a restriction on this inherent dynamism.
If there's an inherent or fundamental dynamic, then determinism is fundamental, not emergent.

apeiron said:
I realize you don't believe QM nonlocality is a genuine issue.
I think it depends on how one defines quantum nonlocality.

apeiron said:
But for those that do, a systems view of spontaneous events now fits the evidence much better.
There's absolutely no physical evidence supporting the assumption of nonlocality. It's just based on interpretation. Anyway, I am an admitted ignorant pedestrian wrt these considerations, but I don't think it should be considered a great mystery that we can't predict certain phenomena. What is actually happening in the underlying reality is, of course, a mystery. But the fact that modern science, barely 100 years old, still can't predict lots of stuff isn't, to me, very surprising or even important.

That is, I don't think that some complicated systems view, or whatever, is necessary to establish the fact of our relative ignorance wrt a definitive qualitative apprehension of underlying ontologies and processes.

And thanks for your, and others', feedback to what must seem like a very simplistic view and questions ... from me.
 
  • #222
Q_Goest said:
Do you think Bitbol's conception of downward causation in classical physics (or lack thereof) mirrors Bedau's conception of it, which is to say macro to micro causation is false?

No, Bitbol politely makes it clear he rejects Bedau's approach.

But at the end of the day, according to its best supporter, it appears that “weak emergence is (nothing but) a proper subset of nominal emergence”...

...No genuine (upward or downward) causation, and ontologically little more than nominal emergence: This is the disappointing outcome of research developed under a substantialist construal of the basic elements.

Is there any alternative left? I think there is, provided the substantialist presupposition is dropped at every single level of description.

So the problem with Bedau's analysis - why it is a straw man argument - is that the systems view takes the micro-level as also to be emergent. Substantial cause (material/efficient cause) is itself something that arises and is not fundamental.

There just is no local ontic stuff that has intrinsic properties as Bedau assumes. Yes we can model reality in this fashion for epistemic convenience. But what is really going on is that downwards constraints are limiting local degrees of freedom so as to shape up a realm of local actions.

Bedau does not even seem to understand that this is the systems viewpoint. So his whole argument is based on a wrong analysis. It is quite irrelevant to the modelling of strong emergence. Bitbol by contrast begins by treating local material causes as downwardly emergent.

Setting aside any conceptual trick such as hidden variables, the so-called elementary particles have to be treated as non-individuals, as mere units of a limited set of “sorts,” and thus as formal rather than substantial entities. This becomes even clearer in Quantum Field Theory, where cardinals of subsets of particles are in one-to-one correspondence with quantized modes of excitation of fields (Teller 1995). Accordingly, particles are de facto treated as patterns or configurations, rather than as substantial entities (Bickhard and Campbell 2000). The analysis of a level of organization in terms of structures, patterns, and topological configurations imposes itself throughout the scale of levels, even at the lower accessible level (Campbell and Bickhard 2009). The first asymmetry of the standard picture disappears thus.
 
  • #223
bohm2 said:
Yes, I think he would agree with Bedau. I interpreted him as arguing that higher-level facts or events constrain/modify or form a context for the lower level stuff, so they are not independent of the higher level, so the whole notion of upward/downward causation is misconceived, especially since he also argues that while lower-level facts or events are necessary, they aren't sufficient for higher-level ones. I think Bedau takes the same position?
If he suggests that the lower level facts are insufficient to define higher levels, then, at least if he's talking about classical physics, he would disagree with Bedau. I would agree with apeiron that Bitbol probably rejects what Bedau has argued regarding weak emergence. I'm just not sure. He certainly notes the nonseparability of quantum mechanical systems, and his argument regarding the 'fusion' of properties seems on par with other well accepted concepts of quantum mechanics. I just don't see a clearly defined treatment in his paper of the separability of classical physics.

If Bitbol wants to suggest that lower level facts are necessary but insufficient to define higher level facts, then at least for classical phenomena such as weather systems, the N body problem, etc., he would also need to reject the separability principle. See for example Kronz, "Emergence and Quantum Mechanics". But the separability principle is clearly correct. It is used to derive all manner of higher level laws and is taught in one form or another in all college and university courses on classical physics. Classical physics is separable.
 
  • #224
apeiron said:
Bitbol by contrast begins by treating local material causes as downwardly emergent.
Setting aside any conceptual trick such as hidden variables, the so-called elementary particles have to be treated as non-individuals, as mere units of a limited set of “sorts,” and thus as formal rather than substantial entities. This becomes even clearer in Quantum Field Theory, where cardinals of subsets of particles are in one-to-one correspondence with quantized modes of excitation of fields (Teller 1995). Accordingly, particles are de facto treated as patterns or configurations, rather than as substantial entities (Bickhard and Campbell 2000). The analysis of a level of organization in terms of structures, patterns, and topological configurations imposes itself throughout the scale of levels, even at the lower accessible level (Campbell and Bickhard 2009). The first asymmetry of the standard picture disappears thus.
Bitbol is clearly talking about quantum mechanics here, which is non-separable. I still don't see Bitbol making a distinction between separable and non-separable systems.

Also, the term "constraint" is used in physics and science every day, and it means just what you say: "constraints are limiting local degrees of freedom so as to shape up a realm of local actions." A simple example: we create a free body diagram of something, and those constraints are the known local efficient causal actions acting at some point on the boundary, which provide the necessary knowledge about the system as it is exposed to other elements in the world. Sperry's classic wheel rolling down a hill can be used as an example: where the wheel interacts with the ground, there is a constraint or boundary condition, so we know how the wheel interacts with the ground. Similarly, a point inside the wheel is constrained by its location in the wheel and the geometry of the wheel. Constraints don't exert downward control over anything; they are simply local efficient causes. At least, that's the concept as taught in college and university, which is clearly not downward causation.
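
A small sketch of that textbook usage (my example, idealising Sperry's wheel as a rim point on a wheel rolling without slipping): the constraint simply removes degrees of freedom rather than acting as an extra cause.

```python
# Sketch (my example): a holonomic constraint as pure bookkeeping.
# A rim point of a wheel rolling without slipping traces a cycloid:
# the rolling constraint x_center = R*theta ties translation to rotation,
# so one coordinate (theta) fixes both x and y.
import math

R = 0.5  # wheel radius (arbitrary)

def rim_point(theta):
    """2D position of the rim point, fully determined by one angle."""
    x = R * (theta - math.sin(theta))
    y = R * (1.0 - math.cos(theta))
    return x, y

for theta in (0.0, math.pi / 2, math.pi):
    print(round(theta, 3), rim_point(theta))
# Two coordinates, one degree of freedom: the constraint is geometry,
# not a downward causal agent.
```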
 
  • #225
Q_Goest said:
Bitbol is clearly talking about quantum mechanics here, which is non-separable. I still don't see Bitbol making a distinction between separable and non-separable systems.

Also, the term "constraint" is used in physics and science every day, and it means just what you say: "constraints are limiting local degrees of freedom so as to shape up a realm of local actions." A simple example: we create a free body diagram of something, and those constraints are the known local efficient causal actions acting at some point on the boundary, which provide the necessary knowledge about the system as it is exposed to other elements in the world. Sperry's classic wheel rolling down a hill can be used as an example: where the wheel meets the ground there is a constraint, a boundary condition, that tells us how the wheel and ground interact. Similarly, a point inside the wheel is constrained by its location in the wheel and by the geometry of the wheel. Constraints don't pose a downward control over anything; they are simply locally efficient causes. At least, that's the concept as taught in college and university, which is clearly not downward causation.
Constraints, i.e., limitations wrt degrees of freedom of evolution, can also be a function of a fundamental dynamic, and of bounded systems emerging from that fundamental dynamic. Can't they?
 
  • #226
ThomasT said:
Constraints, i.e., limitations wrt degrees of freedom of evolution, can also be a function of a fundamental dynamic, and of bounded systems emerging from that fundamental dynamic. Can't they?
I'm not sure what you mean by that. Some folks, such as Alwyn Scott ("Reductionism Revisited"), try to suggest that these constraints are more than locally efficient causes - that because the system is nonlinear and highly dynamic, these 'constraints' actually interact with a given system such that they influence what occurs locally.

Although Scott doesn't use some of the common terminology used in fluid dynamics and thermodynamics, he does discuss such phenomena at length. Two terms I'd like to introduce which are common to the fluid systems and thermodynamic systems Scott discusses are:
  • "control volume", which is a volume of space within which something happens;
  • "control surface", which is the two-dimensional surface surrounding the control volume.
Scott suggests that for a given control volume within a nonlinear system, the locally efficient causes acting across the control surface might result in there being more than one possible outcome within the control volume. He contends that what happens nonlocally will affect the outcome within that control volume, and he refers to "constraints" in the same way apeiron uses the term. Scott claims it is these constraints which force the control volume to come up 'heads' in one case and 'tails' in another, so to speak, and that what causes it to come up heads or tails doesn't depend on locally efficient causes acting at the control surface. His mantra is that "nonlinear phenomena are those for which the whole is greater than the sum of its parts".
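For reference, the bookkeeping Scott's claim has to beat is the generic control-volume balance law (standard fluid-mechanics notation, not Scott's own):

$$\frac{d}{dt}\int_{CV}\rho\,\phi\,dV = -\oint_{CS}\rho\,\phi\,(\mathbf{u}\cdot\mathbf{n})\,dA + \int_{CV}s\,dV,$$

where $\phi$ is any conserved quantity per unit mass, $\mathbf{u}\cdot\mathbf{n}$ is the flow across the control surface, and $s$ is a local source term. Everything entering or leaving the volume is accounted for at the surface, so a genuinely nonlocal influence has nowhere obvious to enter the ledger.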

I actually think Scott's paper is a good one for understanding what might possibly be construed as downward causation, because it's much more clearly written, I think, than the work of philosophers such as Bitbol. At least Scott is a scientist and brings that with him in his writing. Nevertheless, I disagree with Scott's conclusions: they would require the nonseparability of classical physics, his views disagree with mainstream science on local causation, and he also misquotes Emmeche regarding downward causation. So there are a lot of problems with his paper, and it isn't highly referenced.
 
  • #227
ThomasT said:
Constraints, i.e., limitations wrt degrees of freedom of evolution, can also be a function of a fundamental dynamic, and of bounded systems emerging from that fundamental dynamic. Can't they?
Q_Goest said:
I'm not sure what you mean by that.
What is it that you're not sure about the meaning of?
 
  • #228
ThomasT said:
Constraints, i.e., limitations wrt degrees of freedom of evolution, can also be a function of a fundamental dynamic, and of bounded systems emerging from that fundamental dynamic. Can't they?
What is it that you're not sure about the meaning of?
Evolution: I assume you mean the time evolution of a system?
What is a dynamic and what makes a dynamic fundamental?

Do you mean: Constraints are limitations imposed on a system? Certainly, we can say that constraints limit the degrees of freedom of a system. Those are not the meanings implied by Alwyn Scott and apeiron for example.
 
  • #229
Q_Goest said:
Evolution: I assume you mean the time evolution of a system?
What is a dynamic and what makes a dynamic fundamental?
For example, the propagation of a disturbance in a medium. Any medium. It seems to me that there is a general quality wrt this that holds for all disturbances in all media wrt all scales. There's a simple mathematical equation that describes this. And, I wonder if this might be a fundamental dynamic.
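The simple equation I have in mind is the classical wave equation,

$$\frac{\partial^{2} u}{\partial t^{2}} = c^{2}\,\nabla^{2} u,$$

which takes the same form for sound, ripples on water, waves on a string, and light; only the propagation speed $c$ depends on the medium.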

Q_Goest said:
Do you mean: Constraints are limitations imposed on a system? Certainly, we can say that constraints limit the degrees of freedom of a system. Those are not the meanings implied by Alwyn Scott and apeiron for example.
I mean that any constraints that arise and exist, i.e., emergent and bounded, i.e., persistent systems/objects, are a function of innumerable iterations of, and adherence to, the fundamental wave dynamic.
 
  • #230
I don’t know if I’d call the propagation of a disturbance in a medium a “fundamental dynamic”, but certainly I understand what you mean. Propagation of pressure waves in a fluid (liquid or gas) or of sound waves through solid objects is well understood, and the propagation of a wave of some kind through a medium seems to be fundamental, with the exception of radiation, which propagates through a vacuum. So if something propagates without the need for a medium, would we say that propagation through a medium is a dynamic but not a fundamental one?

Regardless, the disturbance and subsequent propagation in a medium, and the equations we use to describe the complex wave interactions that are set up, fall into the category of weak emergence as defined by Bedau, for example. For there to be some kind of downward causation, we would need to say that the propagation of causes is necessary but insufficient to determine how that propagation persists in the medium. Robert Bishop (apeiron pointed to his paper earlier), for example, suggests that the propagation of intermolecular forces, including heat transfer and gravitational fields, is necessary but insufficient to describe Rayleigh-Bénard convection, especially Bénard cells. The constraining condition for this type of convection is a temperature difference or heat flux imposed across two parallel plates with a fluid between them. The constraints then must somehow add to the phenomena in a way which is above and beyond what can be described assuming only local intermolecular forces are causing the cells to set up. Clearly, we don’t need to add anything else to study the phenomena, and Bishop doesn’t offer any suggestion as to what should be added. So Bishop, for example, would disagree with you that the “constraints that arise and exist, i.e., emergent and bounded, i.e., persistent systems/objects, are a function of innumerable iterations of, and adherence to, the fundamental wave dynamic”, because Bishop would say the wave dynamic is necessary but insufficient to determine the phenomena of Rayleigh-Bénard convection. Clearly, Bishop is out on a limb with this.
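For reference (standard convection theory, not Bishop's own formulation): the onset of Bénard cells is governed by the dimensionless Rayleigh number,

$$Ra = \frac{g\,\beta\,\Delta T\,d^{3}}{\nu\,\alpha},$$

where $g$ is gravity, $\beta$ the fluid's thermal expansion coefficient, $\Delta T$ the temperature difference across a layer of depth $d$, $\nu$ the kinematic viscosity, and $\alpha$ the thermal diffusivity. Convection sets in when $Ra$ exceeds a critical value (roughly 1708 for rigid top and bottom plates) - a threshold derived entirely from the local equations of motion.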
 
  • #231
Q_Goest said:
But the separability principle is clearly correct. It is used to derive all manner of higher-level laws and is taught in one form or another in all college and university courses on classical physics. Classical physics is separable.

The separability issue here comes at the level of models of causality itself, not simply state descriptions of objects.

So the systems claim is that objects separate into local and global kinds of cause. You have upward constructing degrees of freedom and downward forming global constraints.

The matching reductionist claim would be that there is only upwardly constructing degrees of freedom. Everything reduces to material/efficient cause. So all that happens at higher levels is the emergence of larger scales of material causality - new kinds of composite substances with their new kinds of properties (like liquidity).

In hierarchy theory, the difference is reflected in the distinction between subsumptive and compositional hierarchies - http://en.wikipedia.org/wiki/Hierarchy#Containment_hierarchy

Classical physics deals with the systems aspect of causality in an opaque way. It separates what is going on into initiating conditions and dynamical laws. So you have the local atoms and their global organising constraints. But both these necessary aspects of a system just "exist" in unexplained fashion. They don't develop into a persisting relationship, as argued by a systems view based on an interaction between what has become separated.

At the "fundamental" level, the classical view seems to treat both initiating conditions and laws as ontic - the fundamental laws of nature really exist (somewhere, in the mind of god perhaps, or Platonically). But then taking the compositional view of hierarchical complexity, the initiating conditions and dynamic laws for higher levels of objects become merely epistemic - a convenient descriptive impression rather than a reality.

A systems view is instead - according to some current developments - pansemiotic because it argues that even the fundamental level of existence is "ontically epistemic", in the sense used by Peirce and others (Bitbol, for instance, is developing on the autopoiesis of Varela). Nothing actually "exists". Everything has to emerge via development (that is via the hierarchical separation that allows dichotomistic interaction).

The further levels of organisation then occur via subsumptive interpolation - they develop nested within the whole, not constructed as another level upon some prior foundation.

Again, this shows how in the systems view, things are certainly separated (dichotomised) and yet also inseparable (they continue to dynamically interact).

Classical physics works because the universe has grown so large and cold as a system that its scales have become semiotically isolate. An atom is so different in scale from what it constructs that its emergent nature becomes a coarse grain irrelevance.

Perturbations that cross these chasms of scale are always possible. An atom can spontaneously decay. Which could be a surprise to what it forms. It could disrupt some biological object like a microtubule. Likewise, a cosmic ray might strike from the other direction of scale.

But generally, the differences in scale are so great that the classical view can treat a system like a microtubule as being composed of solid material/efficient causes - and in equally uncomplicated fashion, decomposable back into them. Any laws created to describe microtubule behaviour are just emergent descriptions, epistemic glosses, so can be discarded at no cost to the ultimate laws of nature. Nothing valuable is being chucked out.

So there are two models of nature, and of causality here. One to deal more simply with developed systems (like a cold, large, old universe). But a broader view of causality, of nature, is needed to talk about the development of systems - such as we were doing here with the OP, the emergence of the universe as an "object".

Classical physics is already known to break down at the extremes of scale - the global scale of GR and the local scale of QM. So I don't see why we should be constrained by classical notions of causality in this discussion.
 
  • #232
"Propagation of a wave through a medium" is maybe too specific, but the basic concept of energy transfer is at the root of all dynamics. I suppose in the case of electromagnetic radiation, one could argue that space is the medium.
 
  • #233
Q_Goest said:
If he suggests that the lower-level facts are insufficient to define higher levels then, at least if he's talking about classical physics, he would disagree with Bedau. I would agree with apeiron: Bitbol probably rejects what Bedau has argued regarding weak emergence.

Sorry, you guys are right. I should have read the Bedau paper more closely.
 
  • #234
ThomasT said:
This is the way I currently view it. With the reduction being toward ever more general, ie., fundamental, dynamics. In this view, the microcosmos isn't any more fundamental than the macrocosmos, because ontology isn't what's fundamental. Dynamics is.

If there's an inherent or fundamental dynamic, then determinism is fundamental, not emergent.

I think it depends on how one defines quantum nonlocality.

There's absolutely no physical evidence supporting the assumption of nonlocality. It's just based on interpretation. Anyway, I am an admitted ignorant pedestrian wrt these considerations, but I don't think it should be considered a great mystery that we can't predict certain phenomena. What is actually happening in the underlying reality is, of course, a mystery. But the fact that modern science, barely 100 years old, still can't predict lots of stuff isn't, to me, very surprising or even important.

That is, I don't think that some complicated systems view, or whatever, is necessary to establish the fact of our relative ignorance wrt a definitive qualitative apprehension of underlying ontologies and processes.

And thanks for your, and others', feedback to what must seem like a very simplistic view and questions ... from me.

Ignorant pedestrian ? Simplistic ? At least you are able to offer your view, and it is readily understood - not wrapped in ever-increasing cycles of complexity that get no one any closer to anything of substance. Oh, and BTW, an admission of ignorance puts you way ahead of some others.

Any intelligent fool can make things bigger, more complex.
It takes a touch of genius - and a lot of courage - to move in the opposite direction.
(Albert Einstein)
 
  • #235
Q_Goest said:
Evolution: I assume you mean the time evolution of a system?
Yes.
Q_Goest said:
What is a dynamic ...
E.g., an expanding wavefront/waveshell. Disturbances tend to move omnidirectionally away from their source unless somehow constrained.
Q_Goest said:
... and what makes a dynamic fundamental?
Some behavioral characteristic that's operational on the very largest to the very smallest scale. A dynamic that pervades and permeates the whole of reality.

Q_Goest said:
Do you mean: Constraints are limitations imposed on a system?
The dynamic, e.g., an expanding wavefront/waveshell (i.e., the tendency for disturbances to move omnidirectionally away from their source), defines certain constraints or limitations on possible degrees of freedom. Then there's the topology of the medium in which the disturbance is propagating, which entails more constraints. Then there's interaction with other disturbances, which entails more constraints. Then there's the emergence of more or less persistent, bounded complex wave structures (i.e., particulate matter), and the emergence of a hierarchy of particulate media.

And wrt all media, the tendency for disturbances to propagate omnidirectionally (i.e., in the 'direction' of a presumed universal isotropic expansion) is evident.
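A minimal mathematical expression of that omnidirectional tendency (plain textbook wave theory, offered only as an illustration) is the spherically symmetric solution of the wave equation, an expanding shell

$$u(r,t) = \frac{1}{r}\,f\!\left(t - \frac{r}{c}\right),$$

whose amplitude falls off as $1/r$, so the energy flux obeys the familiar inverse-square law while the total energy crossing each successive shell is conserved.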

Q_Goest said:
Certainly, we can say that constraints limit the degrees of freedom of a system. Those are not the meanings implied by Alwyn Scott and apeiron for example.
I'll have to look this up.
 
  • #236
Q_Goest said:
I don’t know if I’d call the propagation of a disturbance in a medium a “fundamental dynamic”, but certainly I understand what you mean.
Suppose it's refined to refer to the tendency of all disturbances to propagate omnidirectionally away from their source?

Q_Goest said:
... if something propagates without the need for a medium, would we say that propagation through a medium is a dynamic but not a fundamental one?
I'm not aware of any mediumless propagation. That is, while I realize that, e.g., no electromagnetic medium has been detected, it nonetheless seems, from the observation that em radiation behaves very much like radiating wavefronts in water and air, more reasonable to assume that light is propagating in some sort of medium, as opposed to propagating in ... nothing.

Wrt your question, refer to the preceding statement. Does the tendency of disturbances to propagate omnidirectionally away from their sources seem like it might be called a fundamental dynamic?

Q_Goest said:
Regardless, the disturbance and subsequent propagation in a medium, and the equations we use to describe the complex wave interactions that are set up, fall into the category of weak emergence as defined by Bedau, for example.
Yes, any interactions, topological anomalies, etc. would entail emergent behavior (higher-order systems and scale/regime-specific interactional/organizational rules, i.e., scale/regime-specific dynamical tendencies). But the fundamental dynamic, the tendency for any disturbance in any medium to propagate omnidirectionally away from its source, would still ultimately determine this. Which is not to say that the behavior of emergent systems could ever actually be calculated via this fundamental dynamic. It's a more or less metaphysical view, an unfalsifiable assumption, but nonetheless one based on observation of the way the world actually behaves. And assuming it would solve a few otherwise somewhat perplexing problems ... such as the arrow of time, the apparent very-large-scale isotropic expansion of the universe, etc.

As to whether such an assumption would answer the question "why anything at all?", I don't think so, because it leaves open the question of "why is there a fundamental dynamic?".

I have to look up what you said about Bishop and Rayleigh Benard convection, etc.
 
  • #237
By the way, I apologize for my questions/statements; I don't want to derail the ongoing discussion - a discussion which would certainly help improve one's command of the concepts and considerations it has involved.
 
  • #238
wuliheron said:
If you ask a Zen master why there is something rather than nothing he might hit you over the head with a stick.

Seems you may have been right on that! :rolleyes:

Then on the street in Greenwich Village, I ran into a Zen Buddhist scholar who had been introduced to me once at a cocktail party as an authority on mystical matters. After a little chitchat, I asked him -- perhaps, in retrospect, a bit precipitately -- why there is something rather than nothing. He tried to bop me on the head. He must have thought it was a Zen koan.

http://dbanach.com/holt.htm
 
  • #239
The "why anything" question has extra force if cosmology can show the universe/multiverse/whatever in fact had a beginning. If reality was simply past-eternal, there would be more reason to shrug a shoulder over its "cause". But if reality once "wasn't" in some scientifically-supported sense, then the "why anything" question obviously becomes more pressing.

Alex Vilenkin continues to pursue the relevant proofs to show reality (at least in the crisply developed way we know it) can't be past-eternal. New Scientist covered (pay-walled) his most recent talk - http://www.newscientist.com/article/mg21328474.400-why-physicists-cant-avoid-a-creation-event.html

But anyway here is a summary...

Vilenkin discussed three models for an eternal universe in his presentation, describing why each cannot deliver on what it promises. The first is Alan Guth’s eternal inflation model, which proposes eternally inflating bubble universes within a multiverse that stretches both forward and backward in time. In 2003 Vilenkin and Guth showed that the math for this model will not work, because a spacetime whose average expansion rate is positive cannot be extended indefinitely into the past. Speaking of the inflationary multiverse, Vilenkin said “it can’t possibly be eternal in the past,” and that “there must be some kind of boundary.”

The second cosmological model was the cyclical model, which proposes that the universe goes through an eternal series of contractions and expansions – our Big Bang being the latest expansion in an eternal series. Vilenkin shows that this model cannot extend infinitely into the past either, because disorder would accumulate with each cycle. If the universe has been going through this process eternally, we should find ourselves in a universe that is completely disordered and dead. We do not; hence a cyclical universe cannot extend infinitely into the past.

The final cosmological model Vilenkin deconstructed is the cosmic egg model. On this model the universe exists eternally in a steady state, but then it “cracked”, resulting in the Big Bang. The problem with this model is that quantum instabilities would not allow the “egg” to remain in a steady state for an infinite amount of time. It would be forced to collapse after a finite amount of time, and thus cannot be eternal.
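For reference, the 'averaged expansion condition' at the heart of these arguments (the Borde-Guth-Vilenkin theorem, stated loosely here) says that if the expansion rate averaged along a past-directed geodesic is positive,

$$H_{\rm av} \equiv \frac{1}{\tau}\int_{0}^{\tau} H\,d\tau' > 0,$$

then that geodesic must be past-incomplete - the spacetime cannot be extended indefinitely into the past.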

And here are two of those papers...

http://arxiv.org/pdf/gr-qc/0110012v2.pdf
http://arxiv.org/pdf/1110.4096v4.pdf
 
  • #240
apeiron said:
The "why anything" question has extra force if cosmology can show the universe/multiverse/whatever in fact had a beginning.
I don't see how it can ever, definitively, show this. It seems to me that this sort of consideration is always going to depend on unfalsifiable assumptions. Not to say that they might not be very good assumptions based on all the currently available evidence, but unfalsifiable nonetheless.

But I do very much like your statement, in a previous post, that considering/discussing the thread question can have lots of positive effects wrt the depth and breadth, the sophistication, of the concepts held and presented by those doing the considering/discussing.

apeiron said:
If reality was simply past-eternal, there would be more reason to shrug a shoulder over its "cause". But if reality once "wasn't" in some scientifically-supported sense, then the "why anything" question obviously becomes more pressing.
Well, yes. But I don't see how science can ever support or falsify the assumption that before some prior time there wasn't ... anything.

apeiron said:
Alex Vilenkin continues to pursue the relevant proofs to show reality (at least in the crisply developed way we know it) can't be past-eternal. New Scientist covered (pay-walled) his most recent talk - http://www.newscientist.com/article/mg21328474.400-why-physicists-cant-avoid-a-creation-event.html

http://arxiv.org/pdf/gr-qc/0110012v2.pdf
http://arxiv.org/pdf/1110.4096v4.pdf
Thanks for the links.
 
  • #241
alt said:
Ok - let's go with 'intent or lack of intent', and extrapolate your examples.[...]
Both cases are legitimate. Intent from lack of intent. Lack of intent from intent.

If you accept the scientific position, can you really ascribe any special quality to your intentions ? Are they not a result of natural forces - merely an extension of the same principles that govern the raindrop causing the pebble to fall, causing the avalanche ?
You are here mixing causation and intention. (I state what we both understand for clarity).
I see no paradox nor contradiction here. Indeed for intention to manifest and have meaning one's actions need to be able to cause the effect which is the intended goal... at least in so far as it can significantly increase the likelihood of the desired outcome. Indeed for will to exist and have meaning there must be a mechanism of observation, modeling of cause and effect to predict, and power to act.

But there is a part of your examples which I think misses the mark. A spontaneous event may trigger the activity of an intention, but the intention may already exist. The rescue squad were trained, prepared, and positioned before the avalanche occurred. One may argue that the intention preceded the instigating trigger. Intent needn't invoke omnipotence, and, if it is to be actualized, it must account for and react to circumstance.
Is there something special about our intentions ? Intelligence perhaps ? Caused by an unintentional, unintelligent Big Bang (no God) ? Or was it intelligent and intentional (God) ? Or don't we know, are not sure ? Bringing it back to the ultimate question - why anything at all.
Yes, intent requires some form of "intelligence" in so far as it must invoke expectations of the effects of acts. It is an emergent property of living organisms. Now we can speak loosely of intent on a somewhat lower level and get into a very grey area. We often speak of the purpose of, say, the shape of a finch's beak or some other genetic characteristic of an organism. Here we are at a level of "quasi-intent" where there is no mind (one may assume, for argument's sake) behind the design, but there is information processing in the biology of genetic reproduction and evolution. The beak shape is in one sense accidental and in another sense purposeful. We need a distinction in the language to handle this level. Say "quasi-purpose" and "quasi-intent".

It is instructive to look at the thermodynamic environment in which we see life existing. We have Earth sitting with a high-temperature sun nearby and a low-temperature universe into which to radiate. We thus have a large flux of (Helmholtz) free energy through the system. This allows the emergence of spontaneous self-organizing systems. It feeds heat engines which power refrigeration effects (formation of intricate crystalline structures, distillation of fresh water, chemical separation of elements, salt flats and ore deposits, ...).

Self-organizing systems have an emergent causal structure. In the presence of free-energy flux they cause replication of their organized structure. There is no intent here, but a different level of description for cause and effect. We see growth of crystals and quasi-crystals, propagation of defects in these, and similar condensed-matter phenomena.

It is not so much that a specific organized outcome is caused, as that over time, and across many random accidental effects, those which further the organization are selected out as more resilient against reversal. (The clump of atoms which accidentally lands in alignment with the crystalline structure is less likely to re-dissolve, by better transmitting heat into the crystal and down to the cold point where it began to form.)

Within this sea of self-organizing systems, one presumes, organisms emerged able to encode and replicate information about how they behave physically. Now one has a new level of causation where the genetic structure causes the behavior and the behavior is selected for survival. One has "quasi-purpose" and "quasi-intent" in the form of selection, from large numbers of variations, for the most favorable traits. It is the proverbial billions of monkeys tapping on typewriters, except that those who fail to type something sensible get culled.
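That culling point is easy to demonstrate with a toy sketch in the spirit of Dawkins' 'weasel' program (the target string, population size, and mutation rate below are arbitrary illustrative choices, not anything from the literature):

Code:
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"   # arbitrary illustrative target
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
MUTATION_RATE = 0.05                      # chance each character mutates

def mutate(s):
    # Accidental variation: each character may be randomly replaced.
    return "".join(random.choice(CHARS) if random.random() < MUTATION_RATE
                   else c for c in s)

def fitness(s):
    # Resemblance to the target: the trait selected as "more resilient".
    return sum(a == b for a, b in zip(s, TARGET))

current = "".join(random.choice(CHARS) for _ in TARGET)
generation = 0
while current != TARGET:
    # Many accidental variants; only the best-matching one persists.
    variants = [mutate(current) for _ in range(100)] + [current]
    current = max(variants, key=fitness)
    generation += 1

print(f"Reached the target in {generation} generations.")

Pure random typing would essentially never hit a 28-character target; culling the failures gets there in a few hundred generations.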

There are two more points of emergence; the first brings about intentional, purposeful behavior. From flatworms to lions, tigers, and bears, you have an organ dedicated to perception of the environment and to triggering actions based on environmental cues. You have a rudimentary mind which encodes not just behavior but perception. In there somewhere must be a modeling function adapting a predictive mechanism, i.e., learning and changing behavior based on experience. These entities can be said to hold intent. The lion is indeed trying to eat me, and the flatworm is in fact intending to move and find food.

At some level - possibly the lion, possibly only bigger-brained animals such as primates and some others, possibly only the human mind - there is conscious intention. Instead of only learning cause and effect from our experience in a reactive way, we abstract and hypothesize, constructing theories of how the world works and so extrapolating beyond experience. I've certainly seen examples of parrots and chimps doing this, but not universally - only specific trained examples. I suspect they are at the cusp where such emergent behavior is possible but exceptional among individuals.

(By the same token I've seen humans who seem incapable of anything other than reactive "animal" behavior.)

I personally think that the word 'accidental', and its fluid use, goes to the heart of the context and the point (the OP) of this thread.
Hmmm... 'accidental' and also 'spontaneous' with some "accidental" confusion of the two meanings.

Identifying levels we may ask at what levels the meanings of words like "spontaneous" and "accidental" change their definition.

  1. Physics & Thermodynamics
  2. Chemistry & Condensed matter physics
  3. Self-organizing systems (specialized chemistry pre-biology, non-equilibrium thermo.?)
  4. Biology
  5. Behavioral (animal) Psychology
  6. Human Psychology/Philosophy of Thought (including epistemology, logic, etc and the philosophy of science including this list.)
Does that sound about right?
I'd say questions of intent and purpose don't have any meaning below the level of Biology, and should be "quasi-" qualified at the level of biology. And then we can distinguish forms of intent at the last two levels, e.g., the distinction between first- and second-degree murder and manslaughter (conscious intent, reactive intent, no intent but responsibility for causation).

One may ask how 'spontaneous' is defined at the base level vs. the 2nd and 3rd levels. In classical physics there is no 'spontaneous', and we have a clockwork determinism between past and future states of reality. Quantum mechanics modifies the issue a bit, and there are arguments about interpretation, but we can qualify, e.g., spontaneous vs. stimulated emission. There is room for invoking the term and giving it meaning. Note however that at the next level 'spontaneous' is quite distinctly meaningful. We can speak, even in the classical domain, of spontaneous reactions, such as condensation or :wink: spontaneous human combustion ;). We understand when speaking of this at the level of chemistry that we are speaking of random external causation, and not the type of indeterminate causality invoked when considering quantum physics. It changes further at higher levels. Certain self-organization phenomena are "inevitable" with a spontaneous time of instigation. That's true even of critical phenomena in chemistry/condensed-matter physics, where phase changes are the rule and super-critical phases are exceptional.

This is how I see the meanings of the words parsed at different levels. Well, I'm talked out and I've got to get ready for school. I apologize for being long-winded.
 
  • #242
jambaugh said:
... I apologize for being long-winded.

Not at all. Thank you for your informative and 'to the point' reply. I will read it with much interest and might have some further comments/questions later, if that's OK.
 
  • #243
alt said:
Ignorant pedestrian ? Simplistic ? At least you are able to offer your view, and it is readily understood - not wrapped in ever-increasing cycles of complexity that get no one any closer to anything of substance. Oh, and BTW, an admission of ignorance puts you way ahead of some others.

Any intelligent fool can make things bigger, more complex.
It takes a touch of genius - and a lot of courage - to move in the opposite direction.
(Albert Einstein)
Thanks for the pep talk alt. :smile:

Truth be told, the reason I try so hard to simplify things is that I'm not capable of navigating through complexity. I'm a panicky guy. Keep it simple ... please. :smile:

And now I think I should just fade once again into the background and let the more informed members, you included, continue with the discussion.
 
  • #244
jambaugh said:
It is not so much that a specific organized outcome is caused, as that over time, and across many random accidental effects, those which further the organization are selected out as more resilient against reversal. (The clump of atoms which accidentally lands in alignment with the crystalline structure is less likely to re-dissolve, by better transmitting heat into the crystal and down to the cold point where it began to form.)

Descriptions of worlds constructed in purely bottom-up fashion are all very well, but they remain vulnerable to the realisation that worlds are fundamentally incomputable.

Here is a recent paper on the incomputability issue and its connection to the "why anything" question -

INCOMPUTABILITY IN NATURE - Barry Cooper, Piergiorgio Odifreddi
To what extent is incomputability relevant to the material Universe? We look
at ways in which this question might be answered, and the extent to which the
theory of computability, which grew out of the work of Gödel, Church, Kleene
and Turing, can contribute to a clear resolution of the current confusion.
http://www1.maths.leeds.ac.uk/~pmt6sbc/preprints/co.pdf

(A gloss just appeared in Nature - http://www.nature.com/nature/journal/v482/n7386/full/482465a.html)

Cooper is talking about how systems self-organise out of vagueness and the need for a new view of mathematics to be able to model that. Maths is based on notions of definability and rigidity - the basis of reductionist computability - and yet we know this is an unreal idealisation (useful, sure enough, but metaphysically untrue).

I think Cooper offers another good way of looking at the question of the self-creation of the universe. We can say it is about the emergence of computability! In the beginning was vagueness - the incomputable. And then by way of a self-organising phase transition, this gave birth to all that was computable.

This is a very "material" or thermodynamic way of looking at maths. The usual approach to maths is immaterial - unconstrained by material limits. As with Bedau's argument for weak emergence, infinite computation is presumed. Big calculations are fine - even if they are so big that they would quickly bust the limits of any material attempt to compute them.

But many are starting now to object to this unrealistic view of computation - the kind that seems happy with non-linear terms that expand faster than the underlying computation that is hoping to keep up with them. If you presume infinite computational resources, then the distinction between polynomial time and exponential time simply ceases to be a problem so far as you are concerned.
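A minimal illustration of why the distinction bites once computation is material (the machine speed below is an arbitrary, generous assumption):

Code:
# Wall-clock estimates for polynomial vs exponential algorithms on a
# hypothetical machine performing 1e15 elementary steps per second.
STEPS_PER_SECOND = 1e15

for n in (10, 50, 100):
    poly_seconds = n**3 / STEPS_PER_SECOND   # e.g. an O(n^3) algorithm
    expo_seconds = 2**n / STEPS_PER_SECOND   # e.g. an O(2^n) algorithm
    print(f"n={n:3d}  O(n^3): {poly_seconds:.1e} s   O(2^n): {expo_seconds:.1e} s")

At n = 100 the exponential column is already around 10^15 seconds - tens of millions of years - while the polynomial column is still a negligible fraction of a second. Presuming infinite computational resources quietly erases exactly this difference.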

See these papers questioning such blithe reasoning...

Why Philosophers Should Care About Computational Complexity - Scott Aaronson
http://arxiv.org/PS_cache/arxiv/pdf/1108/1108.1791v3.pdf

The implications of a holographic universe for quantum information science and the nature of physical law - P.C.W. Davies
http://www.ctnsstars.org/conferences/papers/Holographic%20universe%20and%20information.pdf

Some mathematical biologists have been arguing this for a long time of course...

https://www.amazon.com/dp/023110510X/?tag=pfamazon01-20

But Cooper shows how mathematicians are facing up again to the deep issue of incomputability and its implications for how we even conceive reality (and its origination).

On the incomputability of global constraints...

At the same time, new science is often based on situations where the traditional reductions are no longer adequate (chaos theory being particularly relevant here). As one observes a rushing stream, one is aware that the dynamics of the individual units of flow are well understood. But the relationship between this and the continually evolving forms manifest in the stream's surface is not just too complex to analyse - it seems to depend on globally emerging relationships not derivable from the local analysis.

On the common trick of simply assuming the incomputability of vagueness to be computable "somehow" - given infinite material resources...

Quantum indeterminacy presents little problem for such an outlook. One either expects an improved scientific description of the Universe in more classical terms, or, more commonly, one takes quantum randomness as a given, and superimposes more traditional certainties on top of that.

The latter perspective is also common to world views that make no assumptions about discreteness. It has the advantage (for the Laplacian in quantum clothing) of incorporating incomputability in the particular form of randomness, without any need for any theory of incomputability. On this view, the origins of incomputability may be theoretical in mathematics, but not in the real world.

On computability acting as a downward constraint on incomputability so as to produce a "well-formed" universe...

Our basic premise, nothing new philosophically, is that existence takes the most general form allowed by considerations of internal consistency. Where that consistency is governed by the mathematics of the universe within which that existence has a meaning.

The mathematics leads to other scientifically appropriate predictions. In particular, there is the question of how the laws of nature immanently arise, how they collapse near the big bang singularity, and what the model says about the occurrence or otherwise of such a singularity.

What we have in the Turing universe are not just invariant individuals, but a rich infrastructure of more general Turing definable relations. These relations grow out of the structure, and constrain it, in much the same sort of organic way that the forms observable in our rushing stream appear to. These relations operate at a universal level.

The similarities of Cooper's arguments with those of Peirce, or the pre-geometry of Wheeler, are obvious. But the computability question, coupled with the emerging information theoretic view of reality that we see both in holographic approaches to cosmology and dissipative structure approaches in material descriptions generally, offer a new paradigm for tackling the "why anything" question.
 
  • #246
ThomasT said:
" ... the past of an inflationary model is a matter of speculation ..."

...and Vilenkin et al are offering tighter constraints on that speculation.

So if you want to argue that the universe/multiverse is past-eternal, you now have to give arguments against the reasonableness of their averaged expansion condition.
 
  • #247
ThomasT said:
Thanks for the pep talk alt. :smile:

Pep talk ? Was just trying to bring the conversation down to my level :-)

Truth be told, the reason I try so hard to simplify things is that I'm not capable of navigating through complexity. I'm a panicky guy. Keep it simple ... please. :smile:

'bout the same here - except that I wouldn't call myself panicky.

And now I think I should just fade once again into the background and let the more informed members, you included, continue with the discussion.

Me ? Informed ? Lol :-)

I feel like fading into the background all the time, but I have a propensity to ask the odd question. These couple of lines from Oliver Goldsmith ring in my ears occasionally:

Deign on the passing world to turn your eyes
And pause a while, from letters to be wise ..
 
  • #248
Oh, and BTW Thomas, earlier on, you said ..

Some behavioral characteristic that's operational on the very largest to the very smallest scale. A dynamic that pervades and permeates the whole of reality.

Can you expand on that at all ?
 
  • #249
apeiron said:
...and Vilenkin et al are offering tighter constraints on that speculation.

So if you want to argue that the universe/multiverse is past-eternal, you now have to give arguments against the reasonableness of their averaged expansion condition.
A good point, imho. Sorting out the most reasonable constructions wrt extant physical evidence and standard logic is a formidable task ... which supports your point that consideration of the OP should facilitate the emergence of more sophisticated answers to the question, even if no definitive ones ... and in the process maybe better ways of thinking about our world, our universe, will emerge.
 
  • #250
alt said:
Oh, and BTW Thomas, earlier on, you said ..

Some behavioral characteristic that's operational on the very largest to the very smallest scale. A dynamic that pervades and permeates the whole of reality.

Can you expand on that at all ?
I did that in a couple of previous posts. I don't want to hijack the thread. If you can't find the relevant posts, then PM me and we can hash it out.
 