A Summing not over configurations, but over theories

  1. May 22, 2018 #1
    A not very well defined question:

    Path integrals (and their generalizations) are sums over configurations. A logical extension of that process would be to sum not over configurations but over theories (the configurations summed over are the possible histories of a single, fixed theory).

    Renormalization already moves around in the "space of theories" (by changing the parameters), but AFAIK this path has not been pursued much further.

    My (very basic) thinking runs along the lines of the central limit theorem: summing many realizations of uncertain distributions gives something very definite (a sum that is close to Gaussian-distributed). So could a summation over many realizations of uncertain distributions (all possible theories) give something definite (the theory we live in, given some measurements)?
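    To make the central-limit-theorem analogy concrete, here is a minimal numerical sketch (Python, with arbitrary illustrative choices of distributions and sample sizes): sums of many independent draws from very different, non-Gaussian distributions still end up approximately Gaussian.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_rounds = 100_000, 30

    # "Uncertain distributions": a mix of very different, non-Gaussian laws.
    terms = []
    for _ in range(n_rounds):
        terms.append(rng.exponential(scale=1.0, size=n_samples))
        terms.append(rng.uniform(low=-2.0, high=2.0, size=n_samples))
        terms.append(rng.binomial(n=10, p=0.3, size=n_samples))

    # Each entry of `total` is a sum of 90 independent, non-Gaussian terms.
    total = np.sum(terms, axis=0)
    standardized = (total - total.mean()) / total.std()

    # For a standard normal: skewness ~ 0 and excess kurtosis ~ 0.
    skew = np.mean(standardized**3)
    kurt = np.mean(standardized**4) - 3.0
    print(f"skewness ~ {skew:.3f}, excess kurtosis ~ {kurt:.3f}")
    ```

    The standardized sum has nearly vanishing skewness and excess kurtosis, i.e., it is close to Gaussian even though none of the individual distributions is.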
     
  3. May 22, 2018 #2

    Stephen Tashi

    Science Advisor

    It isn't clear what an "uncertain distribution" would be. Perhaps you are thinking of a random variable whose values are themselves probability distributions - i.e., a probability distribution over a space of probability distributions.

    Try giving a specific example of that concept - an example using specific probability spaces.
     
  4. May 22, 2018 #3
  5. May 24, 2018 #4
    I was thinking of the analogous case of a spin glass or the random-field Ising model: in addition to the probability space of the spins, there is a probability space for the parameters (either the coupling constants J or the magnetic fields H).

    The phase transitions of these models are richer than those of the plain Ising model (which has only the probability space of the spins).
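    To fix notation (my own sketch of the standard setup, not something from the posts above): the random-field Ising model has

    $$ H[\{s\},\{h\}] = -J \sum_{\langle ij\rangle} s_i s_j - \sum_i h_i s_i, \qquad s_i = \pm 1, \quad h_i \sim \mathcal{N}(0,\sigma^2), $$

    and observables are computed by a quenched average: first the thermal average over spin configurations at fixed parameters, then an average over the probability distribution of the parameters,

    $$ \overline{\langle A\rangle} = \int \prod_i dh_i \, p(h_i) \; \frac{\sum_{\{s\}} A\, e^{-\beta H[\{s\},\{h\}]}}{\sum_{\{s\}} e^{-\beta H[\{s\},\{h\}]}}. $$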

    In the same way that Newton's second law is recovered from quantum mechanics at leading order in the expectation values of the Schrödinger equation, couldn't it be that the fundamental laws we have so far are only "first-order averages", obtained by averaging over many (maybe all) equations?
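    To make the "leading order in the expectation values" statement precise (this is the Ehrenfest theorem; notation mine):

    $$ \frac{d\langle x\rangle}{dt} = \frac{\langle p\rangle}{m}, \qquad \frac{d\langle p\rangle}{dt} = -\left\langle V'(x)\right\rangle \approx -V'(\langle x\rangle), $$

    where the last approximation (valid for narrow wave packets) is the "first-order" step that turns exact expectation values into Newton's second law.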

    I know this is highly speculative. But it is the only way out I can find of the question of why there is a law describing the universe at all. It is analogous to the question of why the distribution of heights in a population is close to a normal distribution. One possibility is "magic" (GR and the Standard Model are simply "magic"). Another possibility is that there is a dynamical reason for it (the central limit theorem).
     
  6. May 29, 2018 #5
    Another (speculative) comment: if the parameters of a Lagrangian were not fixed numbers but probability distributions (and the theory summed not only over fields but also over the parameters, weighted by those distributions), maybe that could be an additional way to regularize a theory.

    If the probability distribution were a Dirac delta, we would recover a standard theory. But if the probability distribution were a narrow normal distribution, maybe the smearing would result in a regularized theory?

    In fact, this leads to another question: if the probability distribution is a narrow normal and we sum over "everything", can the resulting "theory" be understood as a standard theory (i.e. a Lagrangian with fixed parameters)?
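    A schematic way to write this down (my notation; a sketch of the idea, not an established construction): let a parameter ##\lambda## of the action be drawn from a distribution ##p(\lambda)## and integrated over alongside the fields,

    $$ Z = \int d\lambda \; p(\lambda) \int \mathcal{D}\phi \; e^{\, i S_\lambda[\phi]}. $$

    With ##p(\lambda) = \delta(\lambda - \lambda_0)## this collapses to the standard theory with fixed parameter ##\lambda_0##; with a narrow Gaussian, the parameter is smeared, which is the regularizing effect speculated about above. Whether the smeared ##Z## can be rewritten as a single effective Lagrangian with fixed parameters is exactly the question in the last paragraph.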
     
  7. May 29, 2018 #6
    I don't know if this is related to what you really have in mind, but your posts reminded me of Dyson's interpretation of random matrix ensembles, outlined here: https://aip.scitation.org/doi/abs/10.1063/1.1703773

    Basically, he considers random matrix ensembles as ensembles of unknown objects related to different Hamiltonians by certain conditions; that is, he has a kind of ensemble of different systems, each of which may be described by a different Hamiltonian, and he then tries to compute things from this interpretation.
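    As a toy numerical illustration of the "ensemble of Hamiltonians" idea (not Dyson's actual argument, just a sketch in Python with arbitrary sizes): sample Hamiltonians from the Gaussian Orthogonal Ensemble and look at statistics of their eigenvalues, which are properties of the whole ensemble rather than of any single Hamiltonian.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_goe(n):
        """One random Hamiltonian from the Gaussian Orthogonal Ensemble."""
        a = rng.normal(size=(n, n))
        return (a + a.T) / np.sqrt(2.0)  # real symmetric matrix

    n, samples = 200, 100
    eigs = np.concatenate([np.linalg.eigvalsh(sample_goe(n)) for _ in range(samples)])

    # Wigner's semicircle law: the eigenvalues of H / sqrt(n) fill out a
    # semicircle of radius about 2 (for this normalization convention).
    scaled = eigs / np.sqrt(n)
    print("support roughly [-2, 2]:", scaled.min(), scaled.max())
    ```

    The pooled eigenvalue distribution (the semicircle) is a statement about the whole ensemble of Hamiltonians, not about any particular member of it.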
     
  8. May 31, 2018 #7
    The Lagrangian parameters are already not fixed in the RG context: the coupling constants are functions of the energy scale. The difficulty seems to arise when moving to a regime where the real world requires terms that we did not include in the Lagrangian. So if your theory starts diverging, you add the terms that would fix that, for example. But if we are moving between theories, we should also consider the possibility that there are models that cannot be described by a Lagrangian at all, regardless of how many terms it has. My point is that any method for moving between theories seems highly dependent on the theories in question.
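    To illustrate the "coupling as a function of energy scale" point in the simplest possible setting, here is a sketch (a generic one-loop running with a positive, arbitrarily chosen coefficient, not any specific realistic theory); the coupling grows with the scale and eventually blows up, which is the kind of divergence that signals missing terms or a breakdown of the description:

    ```python
    import numpy as np

    def run_coupling(g0, b0, t_max, steps=10_000):
        """Integrate the one-loop RG equation dg/dt = b0 * g**3, with t = ln(mu/mu0)."""
        g, dt = g0, t_max / steps
        history = []
        for step in range(steps):
            history.append((step * dt, g))
            g = g + dt * b0 * g**3   # simple Euler step
            if not np.isfinite(g) or g > 1e3:
                break  # Landau-pole-like blow-up: the perturbative description has failed
        return history

    trajectory = run_coupling(g0=0.5, b0=10.0 / (16 * np.pi**2), t_max=400.0)
    t_last, g_last = trajectory[-1]
    print(f"coupling reached g = {g_last:.2f} at t = ln(mu/mu0) = {t_last:.1f}")
    ```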

    This kind of speculative discussion is fun but I honestly don't know if what I just said even makes sense.
     
  9. May 31, 2018 #8

    Stephen Tashi

    Science Advisor

    Good point - does it make sense to discuss phenomena only in terms of the motion of particles?

    Discussion of everything in terms of a Lagrangian or Hamiltonian brings to mind classical mechanics, where all there is to Nature is matter, and all there is to matter is position and mass. When a physical theory attributes some other properties to matter besides position and mass, how does it formulate the behavior of those attributes in terms of a Lagrangian? I mean: what is the general procedure? Incorporate those attributes as variables in a Lagrangian and write an equation involving those attributes and their time derivatives?

    ----

    In pondering some recent threads on the many worlds interpretation of quantum mechanics, it occurs to me that all physics (of various kinds) assumes there is a self-similarity to Nature. For example, the typical "physical law" is an equation with both variables and "constants". The constants are present to adapt the law to different situations. Presuming the law is useful, there must be many different situations in Nature that are at least approximately described by the law using appropriate constants. One can call this "discovering patterns in Nature". However, taking the physical law as the authority, one could also call it "Requiring Nature to have certain self-similarities".
     
  10. Jun 4, 2018 #9

    Fra


    So you are seeking an "explanation" of the lawfulness of the universe? I.e., out of the landscape of all possible crazy theories, what is the attractor mechanism that selects the actual theories that seem to describe our observations?

    That is a good question, and one that I share with you!

    Your idea seems to be to average over all possible theories to see if we can statistically find a unique average?

    IMHO, one complication here is that even to describe a space of possible theories you need ANOTHER, higher theory, and at some point this higher theory becomes an arbitrary random choice that can never be reduced to a rational choice in the reductionist sense, because the higher theory just gets more and more complex. Instead, we first need to find a way to physically regulate this diverging tower of turtles, and to assign physical meaning to the regularisation. My approach is to use the observing system's capacity to store and process information as the natural regulator.

    This way, for a given observer, there are some theories in the landscape that are simply too complex to be encoded and computed, and this naturally regulates the picture. Then, in the usual block-based energy scaling, one can understand the decoupling of high-energy DETAILS at lower energies, from a causal perspective, simply as a throttling of the communication implicit in the "interaction".

    So I think there is no truly observer-independent way to make the summation you envision; that is, IMO, a fallacy. Instead, I think we need to see it from an evolutionary perspective and ask ourselves how two theories interact. If we, as I strongly suggest we must, associate one theory with one observer, then two interacting observers are the same thing as two interacting theories. And you can only describe two interacting theories by means of a third theory.

    So a KEY to making progress in this mess is to try to get a grip on the computational regularisation required here. Theories that are not computable by the host system are, IMO, simply useless, and thus necessarily "wrong".

    One can interpret the implications of this both in string theory and in other theories. So I think these questions are somewhat theory-neutral and fundamental.

    /Fredrik
     
  11. Jun 20, 2018 #10

    Urs Schreiber

    Science Advisor
    Gold Member

    This is one way to give the question a good meaning.

    Maybe I may use the occasion to point out that, while Witten did look into the issue of background independence of open string field theory (OSFT), the question remained open, as he highlighted a few years later in section 3.3 of

    Edward Witten, Overview of K-theory applied to Strings (arXiv:hep-th/0007175)

    Namely, to properly account for brane/antibrane annihilation and nucleation in the background of the OSFT, captured by the K-theory classification of D-branes, a rough plausibility argument suggests that one would need to formulate OSFT on ##N## D-branes for "##N = \infty##". This is open.

    But I wonder whether one should not try to turn this around: instead of trying to make large-##N## OSFT reproduce the K-theory classification of D-branes, maybe their K-theory nature needs to be built in at a more fundamental level, with the description in terms of stacks of ##N## D-branes derived from that in special cases and under special conditions. That's where we are heading in Gauge enhancement of Super M-Branes (arXiv:1806.01115); see the exposition in the introduction.

    In any case, this points to what seems like a blind spot in essentially all of the contemporary discussion of brane engineering of gauge theories: It is known that the popular picture of a stack of ##N## D-branes carrying an ##SU(N)## gauge field is not actually an invariant description of string theoretic reality, but that such data only serves as a kind of coordinatization of K-theory (co-)cocycles.

    I guess the reason for this omission is that for the traditionally trained researcher, K-theory seems like an opaque or even contrived concept compared to traditional Chan-Paton style arguments. But if mathematics is to have a say in the foundations of string theory, then it is likely the other way around.
     
  12. Jun 22, 2018 #11

    MathematicalPhysicist

    Gold Member

    Why would summing over theories (an infinite number of theories?) give you a theory?

    I mean, some theories could contradict each other, so why would it be logical to sum over both of them?!
     
  13. Jun 23, 2018 #12

    Fra


    If one takes the perspective implicit in my post #9 (that you need a third theory to compare other theories, and that you associate a theory with an observer), the rational answer to this question is, IMHO:

    Contradictory theories imply that there is a physical interaction between the theories; the generalisation of "adding contradictions" simply means "allowing them to interact", and the outcome of that negotiation is the "sum". The "sum" is then the "expected theory of interactions", but one that is conditional upon the third observer.

    This is, IMO, analogous to how we "predict" an interaction between two subatomic particles (identified by gauge choices) via the third observer's expectation of gauge equivalence between the first two. The only problem is that the third observer is not at the top of the tower; it is just another internal information-processing agent like the others.

    What I lack is the exact mathematical and algorithmic machinery to describe this process in a way that is self-consistent. To describe this "negotiation process" is the same thing as to describe the evolution of physical law, which is the selection process for the de facto physical laws we observe today in the universe. It's just that this is of course not a direct, naive summation like a path integral! It has to be far more complex, and in particular it has to be seen as part of an evolutionary process - not reductionist thinking.

    /Fredrik
     
  14. Jun 26, 2018 #13
    I will give a non-scientific answer (that of a "sofa philosopher", as Feynman would say):

    Why is there something rather than nothing? I think the answer is the opposite: nothing is just one possibility, and there are many other possibilities. A more "logical" way of thinking is that ALL possibilities happen at the same time.

    Then, if we believe that Lagrangians span the whole set of theories (possibly wrong), and if we believe that summing is the right way to aggregate them (a big if), summing over all Lagrangians could give "something". For spin glasses or the random-field Ising model this way of calculating works (of course, the way of thinking there is completely different).

    Why should there be a single Lagrangian which is the theory of everything? I think it is more "logical" that there is no preferred Lagrangian, but that all Lagrangians are summed over, and the resulting sum is what we see.

    In a sense, it is a generalization of the sum over histories in quantum mechanics: in QM, we sum over the configurations (histories) of a single Lagrangian; here, we would also sum over the Lagrangians themselves.
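    A zero-dimensional toy version of this, with all names and choices purely illustrative: replace the field by a single variable ##x##, label each "Lagrangian" by a coupling ##g## through the action ##S_g(x) = g x^2 + x^4##, and compare the partition function of one fixed theory with the one obtained by also averaging over a distribution of couplings.

    ```python
    import numpy as np

    def z_single(g, x):
        """Zero-dimensional 'partition function' for one fixed action S_g(x) = g*x**2 + x**4."""
        dx = x[1] - x[0]
        return np.sum(np.exp(-(g * x**2 + x**4))) * dx

    x = np.linspace(-5, 5, 4001)

    # One fixed theory: g is a definite number.
    z_fixed = z_single(1.0, x)

    # "Sum over theories": g is drawn from a narrow normal distribution around 1.
    rng = np.random.default_rng(2)
    gs = rng.normal(loc=1.0, scale=0.1, size=2000)
    z_averaged = np.mean([z_single(g, x) for g in gs])

    print(f"fixed g:    Z = {z_fixed:.6f}")
    print(f"averaged g: Z = {z_averaged:.6f}")
    ```

    Even in this toy case the averaged value is close to, but not identical with, the fixed-coupling one, so whether a "sum over Lagrangians" can be rewritten as a single effective Lagrangian is a genuine question already at this level.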
     
  15. Jun 26, 2018 #14

    MathematicalPhysicist

    Gold Member

    Who guarantees that this sum over all Lagrangians even converges? There are many possible, conceivable Lagrangians.
     
  16. Jun 26, 2018 #15
    Nobody; I agree with you. For this reason I stated earlier that my statements were not scientific. For sure, I should work out a specific proposal. Let me put it differently, though: when Feynman first explained his sum over paths to others, I am sure they asked questions similar to yours. Feynman's genius was to work out the details and to give a specific (and correct) answer.

    Maybe the answer must come from mathematics. Are there mathematicians who work with concepts like "the space of all Lagrangians" or something similar?
     
  17. Jun 26, 2018 #16

    Dale

    Staff: Mentor

    @jordi your concept needs to be part of the professional scientific literature before it can be discussed here. The thread is closed for now.
     