Ensembles in quantum field theory

In summary, according to the author, one cannot repeatedly prepare a quantum field extending over all of spacetime. One must instead model the behavior of a particular quantum system, the lab, within a relativistic QFT. This is done by describing the particles within the system and their interaction, and then calculating the resulting scattering cross sections.
  • #71
vanhees71 said:
What should be given at the end of the calculation is the distribution function of the silver atoms at the plate. I don't see this in your pdf.
I had only promised the derivation of #60, where there is no second plate at ##x_3>0## where the silver would be absorbed.

Before modeling the measurement at a second plate, I first want to be sure of your agreement on this part. For if we cannot agree on the physics without the measurement, we'll never agree on anything that depends on it.

vanhees71 said:
I understand Sect. 1, and it looks correct. I have no idea what's done in Sect. 2.
Section 2 is a special case of Section 1, where I apply it to the situation discussed in posts #1-#60. All formulas are standard and have a complete derivation, with only very elementary steps omitted. Thus you should easily be able to verify its correctness.
 
  • #72
As I said, I don't understand Sect. 2. I can't judge whether it's right or wrong.
 
  • #73
vanhees71 said:
As I said, I don't understand Sect. 2. I can't judge whether it's right or wrong.
Where is the first formula that you don't understand? Then I can explain...
 
  • #74
I don't understand anything of Sect. 2. Do you want to work in the 1st or 2nd-quantization formalism?

In the 2nd-quantization formalism the single-particle Hamiltonian looks like (in the Heisenberg picture)
$$\hat{H}=\int_{\mathbb{R}^3} \mathrm{d}^3 x \sum_{\sigma} \hat{\psi}_{\sigma}^{\dagger}(t,\vec{x}) \left [-\frac{\hbar^2}{2m} \Delta + \mu_{\text{B}} g_s \vec{S} \cdot \vec{B}(\vec{x}) \right] \hat{\psi}(t,\vec{x}).$$
The non-zero equal-time (anti-)commutation relations are
$$[\hat{\psi}_{\sigma}(t,\vec{x}),\hat{\psi}_{\sigma'}^{\dagger}(t,\vec{x}')]_{\pm}=\delta^{(3)}(\vec{x}-\vec{x}') \delta_{\sigma \sigma'}.$$
 
  • #75
vanhees71 said:
I don't understand anything of Sect. 2. Do you want to work in the 1st or 2nd-quantization formalism?
I work in second quantization, just with a simplified notation compared to yours.
vanhees71 said:
In the 2nd-quantization formalism the single-particle Hamiltonian looks like (in the Heisenberg picture)
$$\hat{H}=\int_{\mathbb{R}^3} \mathrm{d}^3 x \sum_{\sigma} \hat{\psi}_{\sigma}^{\dagger}(t,\vec{x}) \left [-\frac{\hbar^2}{2m} \Delta + \mu_{\text{B}} g_s \vec{S} \cdot \vec{B}(\vec{x}) \right] \hat{\psi}(t,\vec{x}).$$
This is exactly equation (3) from Section 2,
$$H_0:=\int dx a(x)^*H_1(t,x,\widehat p)a(x),$$
just written with far fewer characters (in the LaTeX source, 43 rather than your 214), and I used the Schrödinger picture. (Mathematicians are lazy and simplify their typography to the essentials.) The integration domain, the dimension, and the spin indices are suppressed (to avoid index errors as in your formula), as are your hats and vecs. The annihilation operators are denoted by ##a## rather than ##\hat\psi##, as in https://en.wikipedia.org/wiki/Creation_and_annihilation_operators . The daggers are replaced by stars,
and the detailed form of the Hamiltonian is not described (but ##\widehat p## is the momentum operator). Does (3) now make sense?
vanhees71 said:
The non-zero equal-time (anti-)commutation relations are
$$[\hat{\psi}_{\sigma}(t,\vec{x}),\hat{\psi}_{\sigma'}^{\dagger}(t,\vec{x}')]_{\pm}=\delta^{(3)}(\vec{x}-\vec{x}') \delta_{\sigma \sigma'}.$$
Since silver is Fermionic, this is exactly my third formula in Section 2,
$$a_j(x)a_k(y)^*+a_k(y)^*a_j(x)=\delta_{jk}\delta(x-y)$$
but with ##j,k## in place of your Greek spinor indices.
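For definiteness, here is (3) with the suppressed integration domain, sums, and spin indices restored (my reading, spelled out only for comparison; ##j,k## are the spin indices):
$$H_0=\int_{\mathbb{R}^3} dx\,\sum_{j,k} a_j(x)^*\left[-\frac{\hbar^2}{2m}\Delta\,\delta_{jk}+\mu_{\text{B}}\, g_s\,\vec S_{jk}\cdot\vec B(x)\right]a_k(x),$$
i.e., your Hamiltonian in the Schrödinger picture, with ##a_j(x)## in place of ##\hat\psi_\sigma(\vec x)##.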

With this dictionary, the remainder is now perhaps understandable, or where do you need further explanations?
 
  • #76
Ok, the only difference is that in the Schrödinger picture the ##\hat{\psi}_{\sigma}(\vec{x})## have no time argument. Otherwise you can condense your notation such that it gets unrecognizable. I know that mathematicians tend to do this, but when it comes to really doing calculations, saving this little effort in notation often leads to a huge effort to disentangle the confusion so introduced ;-)).

The ##\hat{a}##'s usually denote annihilation operators for single-particle energy eigenstates.
 
  • #77
vanhees71 said:
Ok, the only difference is that in the Schrödinger picture the ##\hat{\psi}_{\sigma}(\vec{x})## have no time argument. Otherwise you can condense your notation such that it gets unrecognizable.
So, do you now understand the remainder of Section 2, including (6) and the explanations afterwards? Or what else needs explanation?
vanhees71 said:
The ##\hat{a}##'s usually denote annihilation operators for single-particle energy eigenstates.
Weinberg uses ##a##'s for annihilation operators in QFT.
 
  • #78
I don't understand the remainder of Sect. 2. I've no clue what your aim is. If the notation is so different from the standard notation, it needs more explaining text.

Weinberg uses standard notation. The ##\hat{a}##'s are annihilation operators for energy eigenstates and occur in the corresponding mode decomposition of the free-particle operators in the interaction picture.
 
  • #79
vanhees71 said:
I don't understand the remainder of Sect. 2. I've no clue what your aim is.
My aim is to prove the final sentence of Section 2, which is the statement given in #69 about the form of the Hamiltonian in the interaction picture.

vanhees71 said:
If the notation is so different from the standard notation, it needs more explaining text.
I can add explaining text only if I know what more to explain.

In the sentence containing (3) I say what ##H_1## is and what ##a(x)## is, and I list the commutation rules in the next sentence. Then I give a definition of what I mean by ##a(f)##, which is the same as in the Wikipedia link given in #75. Similarly I explain everything else.
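Explicitly, with spin indices suppressed and in one common convention (the precise conjugation convention is the one fixed in the attached text and the Wikipedia link),
$$a(f):=\int dx\,\overline{f(x)}\,a(x),\qquad a(f)^*=\int dx\,f(x)\,a(x)^*,$$
so that the anticommutation relations above give ##a(f)a(g)^*+a(g)^*a(f)=\int dx\,\overline{f(x)}\,g(x)=\langle f,g\rangle##.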

Thus in my view I explained every bit of notation. It is standard in mathematical physics; see, e.g., Derezinski and Gerard.

So please let me know where more explanation is needed, and I'll add it to make it more understandable.
 
  • #80
I don't have the time right now to learn a completely new notation, and I don't know what you want to achieve. The Hamiltonian for particles moving in an external magnetic field is given in #74. It looks the same in all pictures of time evolution. I don't know which kind of interaction (two-body?) Hamiltonian you want to add. A simple two-body interaction potential contribution reads
$$\hat{H}_{\text{int}}=\frac{1}{2} \sum_{\sigma_1,\sigma_2} \int_{\mathbb{R}^3} \mathrm{d}^3 x_1 \int_{\mathbb{R}^3} \mathrm{d}^3 x_2 \hat{\psi}_{\sigma_1}^{\dagger}(t,\vec{x}_1) \hat{\psi}_{\sigma_2}^{\dagger}(t,\vec{x}_2) V_{\sigma_1 \sigma_2}(\vec{x}_1-\vec{x}_2) \hat{\psi}_{\sigma_2}(t,\vec{x}_2) \hat{\psi}_{\sigma_1}(t,\vec{x}_1).$$
 
  • #81
vanhees71 said:
The Hamiltonian for particles moving in an external magnetic field is given in #74. It looks the same in all pictures of time evolution. I don't know which kind of interaction (two-body) Hamiltonian you want to add.
An interaction representing the presence of a source. We had already discussed it in posts #41 and #52; it is needed since we model only the halfspace ##x_3\ge 0##; the remainder of the lab (in ##x_3<0##) is too complex to be modeled.
 
  • #82
But this isn't described by a two-body interaction. Maybe it's something like ##J(x) \hat{\psi}(x)+\text{h.c.}## with a c-number source ##J##.
 
  • #83
vanhees71 said:
But this isn't described by a two-body interaction. Maybe it's something like ##J(x) \hat{\psi}(x)+\text{h.c.}## with a c-number source ##J##.
But a Hamiltonian cannot have terms linear in an odd operator; we discussed this before. Thus a 2-body term is the simplest possibility!
 
  • #84
vanhees71 said:
But this isn't described by a two-body interaction. Maybe it's something like ##J(x) \hat{\psi}(x)+\text{h.c.}## with a c-number source ##J##.
A. Neumaier said:
But a Hamiltonian cannot have terms linear in an odd operator; we discussed this before. Thus a 2-body term is the simplest possibility!
I found the proper way to do it. Thus the description in #60 should be updated as follows:

A silver source outside the beam, switched on at time ##t=0##, produces silver particles, approximately one particle per second, which quickly (in much less than a second) proceed along a beam initially directed along the ##z## axis. The beam is subject to a weak external magnetic field ##B(t,x)## that may be switched on gradually. No other forces are assumed to be present; in particular no air or obstacle. Since silver is heavy, we consider a nonrelativistic quantum field description.

For this particular preparation procedure, the potential is given in the interaction picture by
$$V_I(t):=\int dx v(t,x)a(x)+h.c.,$$
where the source
$$v(t,x)=\sum_h v_h(t,x)b_h$$
is a linear combination of hole annihilators ##b_h##. It is proved in Section 2 of the attached text that each ##v_h(t,x)## satisfies the single-particle Pauli equation
$$i\hbar \partial_t v_h(t,x)=V_I(t,x) v_h(t,x)$$
with the single-particle interaction
$$V_I(t,x):=-\mu B(t,x)\cdot I$$
used by Potel et al., where ##\mu## is the magnetic moment and ##I## is the spin operator.

The preparation considered also implies that, before the magnetic field is switched on, we may assume that each ##v_h(t,x)## is a narrow complex Gaussian peaked at ##x_3=0## at second ##h##, which moves along the ##x_3## axis and flattens with time (as described in Section 1.2 of your lecture notes).
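As a purely numerical illustration of this flattening (a free, nonrelativistic Gaussian packet in one dimension; the initial width below is an arbitrary illustrative value, not part of the model):

```python
# Sketch: spreading of a free nonrelativistic Gaussian wave packet,
# as a stand-in for the width of one of the v_h(t,x) before the field is switched on.
import numpy as np

hbar = 1.054571817e-34            # J s
m_Ag = 107.87 * 1.66053907e-27    # kg, mass of a silver atom
sigma0 = 1e-6                     # m, assumed initial packet width (illustrative)

def width(t):
    """Standard free-particle result: sigma(t) = sigma0*sqrt(1 + (hbar*t/(2*m*sigma0^2))^2)."""
    return sigma0 * np.sqrt(1.0 + (hbar * t / (2.0 * m_Ag * sigma0**2))**2)

for t in [0.0, 1e-3, 1e-2, 1e-1, 1.0]:    # seconds
    print(f"t = {t:7.3f} s   sigma = {width(t):.3e} m")
```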

Do you now agree?
 

Attachments

  • SternG24.pdf (43.8 KB)
  • #85
Disclaimer: I've only scanned through the many comments here. I'm responding to the initial post in a way that I didn't notice reflected anywhere else. My apologies if it is.

My theoretical approach to that question has been to take any quantum model to be constructed as an algebra of operators that are used as models for statistical trials. The algebra is generated by a finite set of operators ##\hat\phi_1, \hat\phi_2, ..., \hat\phi_n##; then we introduce a state over those (measurement) operators, which allows us to GNS-construct a Hilbert space that contains other states that we can use to describe different experiments. I take it we can only know what we can measure, so the measurement operators we have in hand (or "in algebra") are the limit of what we can say about the state.

For QFT, I see this modified only by taking the algebra to be generated by a finite set of test functions, which give an indication of how each experiment is placed in space and time relative to others. The index set is a finite set of test functions instead of a finite set of integers, so we have ##\hat\phi_{f_1}, \hat\phi_{f_2}, ..., \hat\phi_{f_n}##. The state that we introduce is, by convention and axiom, maximally symmetric, which for a Wightman field requires the "vacuum" state to be manifestly Poincaré invariant. For free fields for which the vacuum state is Gaussian, it's enough to require the 2-operator function ##\langle 0|\hat\phi_f^\dagger\hat\phi_g|0\rangle## to be a positive semi-definite manifestly Poincaré invariant bilinear functional of the test functions (which completely fixes the Gaussian state). [ If we want to construct the usual idealized quantized Klein-Gordon field, with the Fock space as its Hilbert space, we can take the "direct" or "inductive" limit, in which, in very broad brush terms, the set of test functions is increased without limit until it is the Schwartz space of test functions that are defined on Minkowski space and that are smooth and have a smooth Fourier transform. ]
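As a rough numerical sketch of that 2-operator function (a free real scalar field in 1+1 dimensions with Gaussian test functions; the mass and width below are arbitrary illustrative numbers, and this only exhibits the on-shell integral and its positivity, not the construction in my paper):

```python
# Sketch: <0| phi_f^dagger phi_g |0> for a free scalar field of mass m in 1+1 dims,
# evaluated on Gaussian test functions centred at chosen spacetime points.
# The mass-shell measure dk/(2*pi*2*omega_k) is Lorentz invariant, which is what
# makes the resulting form Poincare invariant; the diagonal values are >= 0.
import numpy as np
from scipy.integrate import quad

m = 1.0    # field mass (hbar = c = 1); illustrative
s = 0.5    # width of the Gaussian test functions; illustrative

def f_tilde(E, k, t0=0.0, x0=0.0):
    """Fourier transform of a Gaussian test function centred at (t0, x0), evaluated on shell."""
    return 2*np.pi*s**2 * np.exp(-0.5*s**2*(E**2 + k**2)) * np.exp(1j*(E*t0 - k*x0))

def two_point(t0, x0, t1, x1):
    """<0| phi_f^dagger phi_g |0> with f centred at (t0, x0) and g at (t1, x1)."""
    def integrand(k):
        omega = np.sqrt(k**2 + m**2)
        return np.conj(f_tilde(omega, k, t0, x0)) * f_tilde(omega, k, t1, x1) / (2*omega)
    re, _ = quad(lambda k: integrand(k).real, -20, 20)
    im, _ = quad(lambda k: integrand(k).imag, -20, 20)
    return (re + 1j*im) / (2*np.pi)

print(two_point(0, 0, 0, 0))      # real and positive: the squared norm of phi_f|0>
print(two_point(0, 0, 1.0, 2.0))  # generally complex for separated centres
```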
We have to ask how we know what test functions we should use for a given experiment. I think the answer to that is not easy: insofar as it exists, the rules are given in thick textbooks on quantum optics and AMO and other physics. The ultimate test will always be consistency, which will evolve over decades.
To me, it seems clear that if the mathematics we use as models for "measurements" (which I think I prefer to call "statistical trials") is that of operators and operator algebras, as an algebraic presentation of a specific class of Generalized Probability Theories, then we will always be able to write down some finite index set and the operators will be ##\hat\phi_1, \hat\phi_2, ..., \hat\phi_n##. If we do more experiments, we will introduce more operators and the Hilbert space we construct will be bigger. For the idealized QFT case, we take the index set to be the Schwartz space on Minkowski space, which introduces much more extensive and detailed operational questions.
2¢.
 
  • #86
The only caveat is that in QFT Poincaré invariance in its present form is realized via the microcausality constraint: the field operators themselves, whose equations of motion are defined, in the Heisenberg picture, by local unitary representations of the proper orthochronous Poincaré group, usually do not represent observables; the observables are defined by operator products of these field operators fulfilling the microcausality constraint.

For gauge theories, in addition, these local "observable operators" also have to be gauge-covariant. That's why, e.g., there is no consistent definition of a "photon-number-density operator", why there is no naive particle interpretation for the photon (in addition to the fact that there are no bona fide position observables for photons, so that the localizability of single photons is even less possible than for massive "particles"), and why the true "intensity measure" of the electromagnetic field is the (gauge-invariant Belinfante) energy density rather than a "photon-number density".

This is of course also to be taken with a grain of salt, since a mathematically rigorous formulation of interacting quantum fields in (1+3) dimensions has still not been achieved. The above thus has to be taken as realized in the usual "perturbative way", with all the regularization/renormalization procedures which circumvent all the fundamental quibbles related to, e.g., Haag's theorem (non-existence of the interaction picture).
 
  • #87
vanhees71 said:
This is of course also to be taken with a grain of salt, since a mathematically rigorous formulation of interacting quantum fields in (1+3) dimensions has still not been achieved. The above thus has to be taken as realized in the usual "perturbative way", with all the regularization/renormalization procedures which circumvent all the fundamental quibbles related to, e.g., Haag's theorem (non-existence of the interaction picture).
Going a little off-topic in response to your third paragraph, I venture to say that I've given a mathematically rigorous formulation of a class of interacting quantum fields in (1+n)-dimensions in my "A source fragmentation approach to interacting quantum field theory", https://arxiv.org/abs/2109.04412. I can't show that it's empirically supported, so there's that (though you didn't insist on it:smile:), and I choose to weaken one of the Wightman axioms so that ##\hat\phi_f## is a nonlinear function of the test function ##f(x)##, so there's also that, but I'm pretty sure it's rigorous and interacting, et cetera. The models I exhibit are all toy models, so I'll forgive you if you don't even have a look (though it's only a short paper, so go on).
Concerning Haag's theorem, did you see the preprint that dropped yesterday, "How Haag-tied is QFT, really?", https://arxiv.org/abs/2212.06977?
 
  • #88
Yes, that's a nice article about Haag's theorem. I'm more on the side of the "phenomenologists", i.e., I take the usual perturbative approach with renormalization as the "solution" of the problem with Haag's theorem. It's of course a disappointment that there's not a rigorous formulation along the goals of the axiomatic QFT program yet. On the other hand, all our physical theories are so far "effective theories", and we don't have a theory of everything yet, and it's not guaranteed that we'll ever find one!

I'll have a look at your paper, but I'm not so familiar with the axiomatic approach.
 
  • #89
vanhees71 said:
Yes, that's a nice article about Haag's theorem. I'm more on the side of the "phenomenologists", i.e., I take the usual perturbative approach with renormalization as the "solution" of the problem with Haag's theorem. It's of course a disappointment that there's not a rigorous formulation along the goals of the axiomatic QFT program yet. On the other hand, all our physical theories are so far "effective theories", and we don't have a theory of everything yet, and it's not guaranteed that we'll ever find one!

I'll have a look at your paper, but I'm not so familiar with the axiomatic approach.
Renormalization works. Yes. I think we can make it more rigorous, however, by paying much closer attention to how we use the test functions that describe an experiment. As we have had it, that description of an experiment has been taken to control the regularization and renormalization scales implicitly and with hand-waving, only saying, very crudely, that a given experiment probes the atomic scale or weak scale (or Planck scale), say, but I think there's good reason for mathematical physicists to insist on explicitness and detail. Showing that this is mathematically workable needs the Reeh-Schlieder theorem, I think, but the idea doesn't really need an axiomatic approach.
I'm trying to write a new version, which has been slow going for the last year, but I've started to feel up to improving it, and I have a new figure (attached) that I hope might be helpful.
 
  • #90
Is this somehow related to the Epstein-Glaser approach (as used in Scharf's book "Finite Quantum Electrodynamics")? That's of course also very understandable for a "phenomenologist": measurements are always made with real-world devices, which are always "coarse graining". You never measure the position of a "particle" at a precise point in space and time; rather, you register a particle in a finite (small) region of space (e.g., a "pixel" on a silicon chip) and time. The same holds for momenta. That's all encoded also in the sloppy formalism used by "phenomenologists": even on this sloppy level of the mathematics, a plane wave (momentum eigenstate) is not a true state but a distribution, and real states have to be square integrable; thus you don't have, e.g., a "photon at a precise momentum and energy" but a finite-width spectral function (although this width can be made very narrow, of course).
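Just to illustrate the last point numerically (a one-dimensional Gaussian packet with arbitrary widths; the numbers mean nothing physically):

```python
# Sketch: a finite-width wave packet is square integrable for every width,
# whereas the plane-wave limit is only a distribution, not a normalizable state.
import numpy as np

x = np.linspace(-200.0, 200.0, 400001)
dx = x[1] - x[0]

for sigma in [0.1, 1.0, 10.0]:               # position-space widths (arbitrary units)
    psi = (2*np.pi*sigma**2)**(-0.25) * np.exp(-x**2 / (4*sigma**2))
    norm = np.sum(np.abs(psi)**2) * dx       # -> 1 for every finite width
    print(f"sigma = {sigma:5.1f}   integral of |psi|^2 dx = {norm:.6f}")
# A plane wave exp(i*k*x) has |psi|^2 = 1 everywhere, so the analogous integral diverges.
```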

Another thing is that this hierarchy of resolution is a gift for physicists, given the historical development of science, i.e., they could deal with pretty classical physics first, before they discovered that underlying this "effective model of the world" there's need for a more abstract and less familiar "quantum model of the world", which however leads to the conclusion that the "classical description" is valid only as an "effective theory" for coarse-grained macroscopic observables.

The next level then was "atomic physics". At least as long as you only deal with the lightest atoms, a non-relativistic quantum-theoretical description of electrons and a classical treatment of the electromagnetic field (for the simplest cases just static Coulomb fields) is sufficient and pretty accurate. Then came relativistic corrections, which were nicely treatable within perturbation theory, needed to explain, e.g., the "fine structure" of atomic spectra.

Field quantization, ironically, was rejected by the physics community as "overdoing it". Indeed, the first paper by Born and Jordan, where they put Heisenberg's famous "Helgoland idea" on more solid ground ("matrix mechanics"), contains a chapter about the quantization of the electromagnetic field, but this led to the above-mentioned reaction of many quantum physicists. That's why Dirac had to come to the same conclusion a few years later, when he again quantized the e.m. field (within his operator formalism, of course) to explain the "spontaneous emission" needed to get the black-body Planck spectrum and to understand the "natural line width" of atomic spectral lines, i.e., the instability of atomic excited states due to the coupling to the quantized e.m. field.

Of course, the divergence of higher-order perturbation theory was immediately apparent, and it took some years to discover the idea of renormalization. This was again triggered by the necessity to explain the "Lamb shift" of the hydrogen atom. The first calculation was by Bethe, treating the atom non-relativistically, and then it was also figured out for the full relativistic theory (Feynman, Schwinger) and also for "scalar QED" (Weisskopf, Pauli).

In this sense the fact that Nature doesn't need to be explained by a "theory of everything" but can be explained in steps of "ever finer resolution" (or observing at "ever higher energies") is a gift for model building, and it's pretty certain that all our currently used QFTs (including the Standard Model at the currently most "fundamental" level, but also the effective theories needed to describe hadrons, like (unitarized) chiral perturbation theory) are indeed "effective theories" as well.

If we are unlucky there's really the "great desert", i.e., the next energy scale of new physics may indeed be the Planck scale, and then it will be very difficult to get glimpses of "physics beyond the Standard Model" that lead theory in the right direction.
 
  • #91
vanhees71 said:
Is this somehow related to the Epstein-Glaser approach (as used in Scharf's book "Finite Quantum Electrodynamics")?
I think only rather loosely, insofar as they use an extra test function to modify the dynamics. They still aspire to work with the Wightman axioms unchanged, so that there is only a linear dependence on the test functions that we use to describe the apparatus. Epstein-Glaser is generally not taken to successfully solve or evade "The Renormalization Problem" (if one accepts that there is such a thing).
vanhees71 said:
Another thing is that this hierarchy of resolution is a gift for physicists, given the historical development of science, i.e., they could deal with pretty classical physics first, before they discovered that underlying this "effective model of the world" there's need for a more abstract and less familiar "quantum model of the world", which however leads to the conclusion that the "classical description" is valid only as an "effective theory" for coarse-grained macroscopic observables.
The relationship between classical and quantum mechanics is the subject of my recent post, "The collapse of a quantum state as a joint probability construction" (which links to an article in JPhysA 2022). I can't tell whether you've seen that. One premise: CM has been straw-manned. By steel-manning CM so it has a noncommutative measurement algebra, we can ensure its measurement theory is the same as that of QM. CM+, as I call that extended CM, has the same measurement theory as QM and is as empirically effective, but it nonetheless is different, so we can learn something about QM from CM+.
We can work with either CM+ or QM models, as we like, according to this, but of course this is not (and cannot be) a naïve return to CM. By analogy with the idea of Schrödinger and Heisenberg pictures, we can introduce the idea of a "super-Heisenberg" picture that puts the unitary dynamics and the "collapse" dynamics into the measurement algebra, with the quantum state unchanging. This also makes contact with the idea of Quantum Non-Demolition measurement and Quantum-Mechanics–free subsystems in an article by Mankei Tsang and Carlton Caves in PhysRevX 2012. Another perspective is that Generalized Probability Theory is classically much more natural than has been recognized: a book by George Boole in 1854 (which I discovered only through a paper by Abramsky in 2020, which cites a paper by Pitowsky in BJPS 1994: why is this not widely known?!?) effectively lays out why it is necessary for classical theory to go beyond an uncomplicated probability theory (if probability theory can ever be called uncomplicated).
We can, in other words, think significantly more classically if we are sophisticated enough. Of course many people see this and run away from the crazy person, but not everyone.
vanhees71 said:
Field quantization, ironically, was rejected by the physics community as "overdoing it".
I can totally sympathize with this as an empiricist, because field theories introduce so many degrees of freedom that we could not possibly determine the initial state by experiment. My feeling, however, is that although we can't measure everything everywhere and everywhen, we can imagine measuring to finite but arbitrary accuracy within some limited region of the parameter space, and that seems to me enough to introduce field theories as an ideal limit of arbitrary accuracy. That's effectively what you say in this quote...
vanhees71 said:
In this sense the fact that Nature doesn't need to be explained by a "theory of everything" but can be explained in steps of "ever finer resolution" (or observing at "ever higher energies") is a gift for model building, and it's pretty certain that all our currently used QFTs (including the Standard Model at the currently most "fundamental" level, but also the effective theories needed to describe hadrons, like (unitarized) chiral perturbation theory) are indeed "effective theories" as well.
I'm totally on board with the idea that our current theories are "effective theories". [Have you seen the YouTube channel All Things EFT?] Indeed, I think the construction I offer in 2109.04412 allows the construction of so many manifestly Poincaré invariant theories, so easily, that to me it makes the effectiveness of any given theory significantly more transparent.
 
  • #92
vanhees71 said:
In this sense the fact that Nature doesn't need to be explained by a "theory of everything" but can be explained in steps of "ever finer resolution" (or observing at "ever higher energies") is a gift for model building
As I see it, it's not a "gift" or a "lucky coincidence"; I think it's more that "effective theories" are all an inside agent can construct. An effective theory is likely the "best inference" given a certain perspective or energy scale, information-processing capacity, etc., and any "ultimate", fixed theory of everything that is independent of the context (agent/observer) is not inferable. I.e., an effective theory is the best inference of a predictive model of the admittedly incomplete observable parts of the universe that the context (agent/observer) can handle/encode.

But I think that even if you embrace effective theory in this sense (which I also do), the question that still begs for an answer is how any two such contexts (encoding two different effective theories of the same universe) interact. I.e., what is the physics of the interacting agents? If you keep thinking of the agents as humans, then this is no longer physics but some kind of social interaction theory. But if we see that agents can be physical systems, then the question seems relevant.

This is the key question that I think is not treated satisfactorily in the current standard models and paradigm. And this question is clearly related to deeper questions, such as the unification of all known interactions in a conceptually coherent way.

So even if we are fine with "effective theories", it seems irrational to think that the "theory space" in which all these theories live has no structure, logic, or physics content, as if this were merely a problem of "human science". I can imagine Karl Popper or similar thinkers would suggest that this is not a question for science, but I just can't get myself to accept that. It's just that the answer we seek probably is not a "naive" fixed TOE cast in the same "form" as the effective theories; this is why I think another paradigm for thinking about theory and dynamics is required, but I see this as perfectly consistent with the effective-theory view at any energy scale. The basic criterion of falsification/corroboration is then complemented by higher-order traits such as fitness or flexibility of revision after taking a "falsification hit". A bad theory living in a non-flexible theory space will be falsified and die. A good theory should learn from the hit and generate an improved offspring; if the offspring were randomly generated from the big bang on each occasion, no progress would likely ever take place. So falsification events are not problems; I see them as just natural bumps in evolution.

Some of these questions have also come up in the troubles of string-theory landscapes etc., but, it seems, without much progress, and this question would not end even if string theory does. I think the answer to how two agents encoding different effective theories would interact physically is the flip side of the same coin as having a duality between two seemingly different theories. This seems abstract, and it's a challenge to see what relevance it has to the physical world; this difficulty is, I think, because we are analysing all this within a given, outdated paradigm of "physical theory".

/Fredrik
 
