How about the ergodic hypothesis applied to histories in the space of microstates? An a priori transition probability between two microstates (pure states), disregarding time, would then be expected to be symmetric. But this assumption only becomes necessary because one decouples the Hamiltonian, which contains the interesting physics, from the framework. Then one is forced to make a lot of fundamentally unjustified assumptions to fill in the conceptual gaps.

I do not like that the Hamiltonian is left out. I think in a proper inference framework the state spaces and the space of laws should be unified and defined "in principle" operationally, from the point of view of an inside observer. An axiomatisation of THAT, but in the same spirit as Hardy's papers, is what I would like to see. This should have an emergent, evolving state space and an emergent, evolving law. The decoupling of the flow of time already makes me think axiom 1 is not acceptable.

I like the ambition of these things, but I think we need to incorporate more of the real conceptual issues.

The Hamiltonian comes from symmetry considerations - specifically the rather obvious idea that probabilities of observational outcomes are frame independent. Strictly speaking, though, you are invoking the principle of relativity (POR). See chapter 3 of Ballentine.

It was Wigner who first realised the central importance of symmetry in QM - basically QM is the fundamentals of that paper by Hardy (or the two axioms found in Ballentine) and symmetry.

It is hinted at in a 'correct' treatment of classical mechanics such as found in Landau - Mechanics which I think should be compulsory reading for anyone before seriously studying QM. And Physics From Symmetry wouldn't hurt either: http://physicsfromsymmetry.com/

Yes, we discussed this before, and I do not expect that we agree here. I of course know about the symmetry principles. They are powerful constraints, but limited as well. And that is precisely WHY they are powerful.

Once we "know" the symmetry, we can use it. But what is the empirical justification of the symmetry? Which physical process "informs" the observer about this symmetry? This is what I am worrying about. To think of symmetries as constraints written in the sky is to me too metaphysical; I want an empirical, inferential justification. All we need is to axiomatise the inference system, not the symmetries. The symmetries should follow from interaction with the environment only.

Are you asking how the POR is tested? Well, for one thing, classical mechanics obeys it. How do you test F=ma? It's a definition. There are things in science that cannot be tested directly - it's just a fact. You obviously think that's a worry - no actual physicist (modern ones please, and scientists, not philosophers) I am aware of does.

No, I was asking something deeper in the context of trying to axiomatize physics.

If physicists are expected to have faith in this axiom system - enough to use it at its full power by constraining what a theory of nature must look like - then it is essential that we at least try to approximately attach the key abstractions in the theory to something in reality.

Obviously you do not prove an axiom; you just need to prove consistency with the prior axioms. But this is not what I mean either.

The ideal case is that a clever theory builds on axioms that are easy to accept; this is what Hardy also tries. And maybe most people accept the axioms, but I think there is still too much baggage. Axiom 1 is nontrivial if you really consider the physical limits on the memory capacity and computation needed, from history or actual ensembles, to arrive at an expectation. This is indeed at the root of probability theory itself, not just physics. But probability theory as mathematics is one thing; when you start to apply it to physics, the plausibility requirements on the axioms become stronger IMO.

If you think this is not worth analysing, then let's note that probability is quite central to physics, classical and QM. I think it's been taken too lightly. And let's distinguish between mathematics and physics.

Look up Euclid's axioms. Then look up Hilbert's axioms.

Do you see a difference? If so, describe it. If there is no difference, then explain why - that would mean, for example, explaining how something with length but no width can exist.

One is the kind of axioms physicists use - the other mathematicians. The difference lies at the foundation of the two disciplines.

I am not sure I understand the argument you are drawing from this comparison to the foundations of physics?

The difference between Euclid and Hilbert is around 2000 years, so the late 19th century critique of Euclid's non-rigorous methods - such as proofs based on drawing geometrical figures - while valid critique today, seems a bit unfair to be honest, and Euclid isn't here to defend himself ;)

Hilbert's ideas hold a higher deductive level. But what can we expect even from a bright mind that lived over 2000 years ago? We might instead wonder what a person like Euclid would have done if he had been born in the 20th century.

Also, the critique of Euclid was not based on physics; it was more at the level of the logical system. I also see neither Euclid nor Hilbert as physicists (or natural philosophers). I am not critiquing the foundations of probability theory as a mathematical theory.

In physics, OTOH, the problem is different, so it's hard to compare. The kind of postulates that are assumed to connect model and experiment are fuzzy; we can never analyze them purely in terms of deductive logic. I think of physics as a tool to predict and control our environment in order to survive. Here the development of mathematics indeed goes hand in hand with physics. Thinking as a physicist, I see mathematical theories developed here because they are of utility to us. But this does not justify losing contact with nature and mistaking mathematical possibility for physical possibility.

But I think you mean that there are different levels of stringency in the logical system used. Then I agree.
And in physics there is a different level of "analysis" for how deep into the mud we need to attach our "postulates"?

Maybe we just disagree on how deep into the "physical mud" we need to have our foundation?

Euclid's axioms speak of things that don't really exist as if they actually do. Yet nobody, even the 12-year-olds it is taught to, has any problem drawing diagrams and making deductions from it. Hilbert is entirely abstract; no attempt at all is made to make it an actual model of something.

That's the difference between axioms used in physics and those in math. Those in physics are written in a form that is directly applicable and assume a bit of 'common sense' on the part of the student; those in pure math are simply starting points for deductions.

Probability is different again - both pure and applied mathematicians have the same axioms, the Kolmogorov axioms. Applied mathematicians have zero problems applying them because, as part of a probability course, they gradually build up the idea, via examples, of what an event etc. is - just as in Euclidean geometry, in the form presented by Euclid, the teacher guides you to that understanding. Philosophers argue about what it means, but in practice it's of zero concern.

Or maybe there is a lack of understanding of the difference between pure and applied math. In applied math you have a model that is mapped in some way to intuitive ideas we have of things like events, points, lines, observations etc. Developing that intuition is part of applying it. Most of the time it's so obvious nobody worries about it.

When you studied Euclidean geometry, did you say to your teacher: hey, you said points have position and no size - these things you draw are not like that - how can that be? I think the piece of chalk that would be chucked your way would require considerable reflexes to dodge, as the rest of the class gave you a strange look. Of course actual points have size, but it's obviously irrelevant to proving theorems etc. You are concerned about things that in practice are of no concern. Now in philosophy they likely worry about this sort of stuff - they worry about all sorts of strange things there - but in physics we use a bit of common sense, just as the teacher expected the class to use, and which without doubt you used without complaint when you were taught it.

If such things interest you, that's fine - but it's not science, it's philosophy, and not what we worry about here.

As I see it, these things are of no major concern for merely maintaining the status quo of current established scientific knowledge.

But I think they are of the deepest concern for making progress in the foundations of physics.

Unfortunately, I suspect a causal relation between these two statements of yours, which I also mentioned in the other thread ;-)

Yes, I extracted this point of yours from the other thread, and I certainly respect your perspective. Indeed, I do realise that my perspective is in the minority.

However, no researcher should be seriously discouraged by the failures of others. The optimistic attitude is obviously that all the others failed because they didn't do it the right way. Statistically, most who think like that fail, but if none think like that, progress will stall. What you label philosophy is IMO essential for progress in foundational physics.

But I am sure that I will no more convince you to change your understanding than you will change mine :)

If the words of Weinberg are not enough, and the utter failures he mentions of similar approaches to what you want to do, are not enough, then I think that's where we have to leave it.

This got me to thinking. There is some principle that a system should be able to evolve to maximum entropy. Perhaps this can form the core of an argument that the transition probabilities are symmetric. This might also not pan out at all ...
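The entropy idea above can be sketched numerically. A minimal illustration (my own sketch, not from the thread; the 3-state matrix is made up): a symmetric transition matrix over microstates is automatically doubly stochastic, so repeated application drives any initial distribution toward the uniform, maximum-entropy one.

```python
import math

# Hypothetical symmetric transition matrix over 3 microstates.
# P[i][j] = P(j|i); symmetry means P[i][j] == P[j][i], and each row sums to 1,
# which makes the matrix doubly stochastic.
P = [
    [0.6, 0.3, 0.1],
    [0.3, 0.4, 0.3],
    [0.1, 0.3, 0.6],
]

def step(p, P):
    """One evolution step: p'_j = sum_i p_i P(j|i)."""
    n = len(p)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

def entropy(p):
    """Shannon entropy in nats."""
    return -sum(x * math.log(x) for x in p if x > 0)

p = [0.9, 0.05, 0.05]          # start far from equilibrium
entropies = [entropy(p)]
for _ in range(50):
    p = step(p, P)
    entropies.append(entropy(p))

print(p)                        # approaches the uniform distribution
print(entropies[-1] > entropies[0])  # entropy has increased
```

The uniform distribution is stationary for any doubly stochastic matrix, which is exactly why symmetric transition probabilities fit a maximum-entropy argument; the objection raised in the reply below is that this forces uniformity even away from equilibrium.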

Clever argument. The fly in the ointment is that if P(T|S)=P(S|T), then we will always have P(T)=P(S), even when not at equilibrium. The analysis needs to take place in a somewhat more complex context: P(T|S) manifests itself when a measurement is done for T, in initial state S, and vice versa for P(S|T).
It is sufficient to work with microstates (pure states). Symmetry for mixed states would then follow from linearity of the transition probabilities.
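The objection above can be spelled out with the product rule for the joint probability (a standard manipulation, written out here for concreteness):

```latex
\[
P(S \cap T) \;=\; P(T \mid S)\,P(S) \;=\; P(S \mid T)\,P(T),
\]
```

so if $P(T \mid S) = P(S \mid T) \neq 0$, then $P(T) = P(S)$ for every such pair of microstates, i.e. the distribution is forced to be uniform whether or not the system has equilibrated. This is why the symmetric quantity has to be interpreted as a measurement transition probability rather than as a conditional on a single sample space.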

Yes, if we are introducing another state (some macrostate) which "equilibrium" refers to, then one needs to make that explicit in the formulas; i.e., we are then not just talking about an uncertain microstate, but also an uncertain macrostate. It is also along these paths - when associating the macrostate with an observer as an inferential agent in its environment, and pondering its "expectations on evolution", so that you can connect the arrow of time to observer-dependent directions - that you start to get into the objections I had before. But I will not ramble too much about that, because I am quite sure most aren't going to see my point anyway, and this is the wrong place to explain my whole idea. But I think there is a lot of interesting stuff in these inferential structures! So good luck with your paper, whatever stance you take!

The paper was turned down by the journal, for reasons that I accept -- "this is an interesting mathematics article, but does not contain sufficient philosophical/conceptual insights to be publishable in ...". This is quite reasonable, given the nature of the journal. I raised two such insights in this thread:

Symmetry of probability of state transitions.

The physical states (states that exist in reality) are topologically closed.

However, I did not make a point of underlining these, and other issues, in the paper, as I am not trained in physics and feel it would be presumptuous of me to expound physics to physicists. So I am in something of a quandary ...