Hurkyl did not explain himself in that thread, and the question is off-topic there, hence this one.

So, suppose we have two models of the same thing, and all their predictions coincide. Why does that not make them equivalent?

More specifically, if these are mathematical models, could it be argued that coincidence of predictions is possible only if the underlying math is the same (up to some kind of isomorphism, perhaps)?

Lorentz Ether Theory (LET) and Einstein's Special Theory of Relativity (SR) yield the exact same predictions; in fact they are mathematically equivalent. The two differ in their formulations. The Lorentz transformation is axiomatic in LET but is derived in SR. Moreover, LET assumes the existence of some unobservable absolute measurement frame while SR rejects this concept. The two theories are indistinguishable, yet almost all physicists eschew LET in favor of SR.
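For reference, the transformation the two theories share (postulated outright in LET, derived from the two postulates in SR) is the standard Lorentz boost along x with relative speed v:

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
t' = \gamma\left(t - \frac{v x}{c^2}\right), \qquad
x' = \gamma\,(x - v t), \qquad y' = y, \qquad z' = z
```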

Hurkyl was talking about observational indistinguishability, which is considerably weaker than mathematical equivalence. Note that Hurkyl talked about observations, not predictions. Measurement error is part and parcel of observation. Measurement errors are one reason why observational indistinguishability does not entail equality.

For example, if I gathered just ten measurements of some random process, I would not be able to say much about the underlying process at all. Many quite different models would pass various statistical rejection tests with such a small sample size. Some models would become rejectable if I gathered one hundred samples, more still at 1000 samples. But even then, I might well have multiple models that explain the sample space yet are quite different mathematically.
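A toy sketch of this point (my own illustration, not anything from the thread): the data actually comes from a Laplace distribution, but we test it against a Gaussian model tuned to the same mean and variance. With only ten samples, a goodness-of-fit test typically cannot reject the wrong model; with thousands, it can.

```python
import math
import random

# Hypothetical example: Laplace-distributed data tested against N(0, 1).

def gaussian_cdf(x):
    """CDF of the standard normal N(0, 1)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def laplace_sample(rng, scale=2 ** -0.5):
    """Laplace(0, b) sample via inverse CDF; b = 1/sqrt(2) gives variance 1."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log1p(-2.0 * abs(u)), u)

def ks_statistic(samples, cdf):
    """One-sample Kolmogorov-Smirnov distance between the ECDF and a model CDF."""
    xs = sorted(samples)
    n = len(xs)
    return max(max(abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
               for i, x in enumerate(xs))

rng = random.Random(42)
small = [laplace_sample(rng) for _ in range(10)]
large = [laplace_sample(rng) for _ in range(10_000)]

for data in (small, large):
    n = len(data)
    d = ks_statistic(data, gaussian_cdf)
    # The ~5% critical value for the one-sample KS test is about 1.36/sqrt(n)
    print(f"n={n:6d}  D={d:.3f}  reject N(0,1)? {d > 1.36 / math.sqrt(n)}")
```

The critical value shrinks like 1/sqrt(n) while the true discrepancy between the two distributions stays fixed, which is why the wrong model survives small samples but not large ones.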

True, but in that thread this was said about phenomena known for thousands of years and by now observed by billions of people every single second. In this thread, I assume that we can make any number of observations we desire.

The viewpoint of modern physicists is that the theories are different because they differ in their axioms. The physical world is much more than mathematics. Moreover, even within mathematics, two theories can yield identical results yet differ in their axioms. The deeper theory will typically win the day.

Hurkyl said

to which you replied

Hurkyl talked about observations while you talked about predictions. Predictions are not observations.

That other thread discusses consciousness. In this thread you asked about mathematical equivalence. How can you possibly define consciousness mathematically, and then measure it?

1. Observations are acts of matching predictions to reality, aren't they? So "observational indistinguishability" immediately reduces to "[coincidence of] all their predictions".

2. Mathematics is just another language, simpler and more efficient than our natural language. I am a programmer, and my everyday job is to translate the natural language of customers into a formal machine language. In the same way, a mathematician translates natural language into the formal language of math. In my experience, the translation is always possible once the concept is well defined, so your claim that consciousness cannot be defined mathematically can only mean that you don't yet know what consciousness is. In that thread, Hurkyl (?) was calling for an experiment, which means he had multiple well-defined concepts of consciousness. Not just concepts, but full-blown theories that were supposed to be tested in an experiment. So you should probably ask him about defining consciousness mathematically. However, all of this is not really relevant to the question of this thread.

Absolutely not! Observations are measurements of some real phenomena. They do not need to conform to some preconceived notion of reality.

The Michelson–Morley experiment, the observed precession of Mercury's perihelion, and the braided F ring of Saturn did not match any extant prediction. Each of these observations demanded that some new model of reality be devised, one that did a better job of conforming to what we knew was true based on observation.

Examples of unpredicted results abound in every branch of science. Serendipity still plays a very important role in science.

How do you measure anything without a theory behind it? We don't just invent some new observation tool in a moment of divine revelation, then look through it and say, "Crap, I didn't see this before." The M-M experiment was based on a simple ether theory in which the speed of light was supposed to change with Earth's motion relative to the ether (and, surprisingly, it did not). Mercury's perihelion precession would never have been studied without the prior Newtonian theory that introduced the very concepts of orbit and perihelion (and predicted the absence of precession, so long as bodies other than the Sun and Mercury were left unaccounted for). The braided rings of Saturn, again, went against our expectation of seeing plain, unbraided rings.

You are intentionally missing the point. The M-M experiment was contrived to prove a point, but instead disproved it. The precession of Mercury was a longstanding unexplained phenomenon until GR. Neither of these observations conformed with predictions.

The theory behind a measurement device is often distinct from the application of that device. The solid-state physics and optics behind the cameras on board Voyager, Hubble, and other remote sensing vehicles have little to do with astronomical theories, and yet those cameras are used every day to aid in our understanding of the universe.

While experimentalists often investigate things to confirm or disprove some theoretical prediction, they sometimes investigate some thing just because nobody has investigated that thing yet. (One has to do something novel to earn a PhD, after all.)

Serendipity (seeing something completely unexpected) remains a very important aspect of science. Theoretical astronomers developed the concept of supermassive black holes after experimental astronomers observed the star S2 orbiting Sagittarius A* at incredible speeds, an incredibly serendipitous discovery. Armed with this, experimentalists intentionally looked for signs of supermassive black holes elsewhere. Now that we know what to look for, it appears that most galaxies have a supermassive black hole at their core.

Some other examples: Bednorz and Müller were looking at cuprate-perovskite ceramics for a perfect insulator. Instead, they found that LaBaCuO was exactly the opposite: a superconductor. Penzias and Wilson were looking for ways to reduce radio noise. Instead, they found the Cosmic Microwave Background Radiation.

Hmm. Let us start over. We have theory 1, theory 2, and theory 3 that, as far as we know, yield exactly the same predictions; so, whenever we measure (observe) anything related to their predictions, we cannot choose between the three based on our observations, and hence no experiment exists, so far, that would let us choose between the theories.

In this regard, you are saying that they are still different because their axiom sets are different. Mmm, fine. After all, if I spell "whilst" instead of "while", I use different character sets, and that in itself is enough to claim the words are different, but... are the words really different? (That's rhetorical.)

So... how do you propose to choose a theory under the above conditions?

D H is quite correct. MANY advances in physics came without any prior prediction; in other words, they came about very unexpectedly. In fact, the famous phrase "Who ordered that?", attributed to I. I. Rabi, was his reaction to the discovery of the muon back in the 1930s. Superconductivity, CP violation, the fractional quantum Hall effect, etc. all came without any hint of their existence prior to discovery.

Look at these two in classical mechanics:

1. The Newtonian approach of using "forces" to solve for the equation of motion;

2. The Lagrangian/Hamiltonian approach of "least action principle" where "forces" are irrelevant.

Both arrive at identical results, but if you look carefully, they actually have quite distinct "axioms", if you want to call them that, and approach a problem in a rather different manner.
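A minimal worked example (mine, not from the post above): for a one-dimensional harmonic oscillator, the two routes start from different "axioms" but land on the same equation of motion.

```latex
% Newtonian route: postulate a force law, apply F = ma.
F = -kx \quad\Rightarrow\quad m\ddot{x} = -kx

% Lagrangian route: postulate an action and make it stationary.
L = \tfrac{1}{2}m\dot{x}^2 - \tfrac{1}{2}kx^2, \qquad
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x}
  = m\ddot{x} + kx = 0
```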

Perhaps an even more poignant "classical" reminder is that it was the concept of a "caloric fluid" (a massless, invisible substance that WAS heat) that led to the modelling of the heat flow equations, quite consciously in analogy to ordinary fluid flow. Later it was shown that the same equation could be derived on the basis of a totally different (and more acceptable) view of heat transport, i.e., in terms of kinetic energy transfer between colliding molecules.
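The shared result here is the heat (diffusion) equation: whether heat is pictured as a caloric fluid or as molecular kinetic energy, the macroscopic equation comes out the same.

```latex
% Fourier's law: heat flux proportional to the temperature gradient
\mathbf{q} = -k\,\nabla T
% combined with conservation of energy, this yields the heat equation
\frac{\partial T}{\partial t} = \alpha\,\nabla^2 T,
\qquad \alpha = \frac{k}{\rho c_p}
```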