Insights Superdeterminism and the Mermin Device

  • #31
PeroK said:
It's perhaps not a fair question but what odds would you give on SD being that mechanism? Would you bet even money on this experiment proving QM wrong? Or, 10-1 against or 100-1 against?

I wonder what Hossenfelder would be prepared to bet on it?
Personally I wouldn't bet on any classical deterministic mechanism in the first place. I find both non-locality and superdeterminism equally unappealing. Both mechanisms require fine-tuning. However, if I had to choose, I would probably pick superdeterminism, just because it is less anthropocentric. There are non-local cause-and-effect relations everywhere, but nature conspires in such a way as to prohibit communication? I can't make myself believe that, but that's just my personal opinion.
 
  • Like
Likes physika, gentzen, martinbn and 1 other person
  • #32
Nullstein said:
If DrChinese claimed that it was wrong
As I have already said, he didn't. He never said anything in the Deutsch paper you referenced was wrong. He just said it wasn't relevant to the entanglement swapping experiments he has referenced. He also gave a specific argument for why (in post #11, repeated in post #24).

Your only response to that, so far, has been to assert that the Deutsch paper is relevant to the entanglement swapping experiments @DrChinese referenced, without any supporting argument and without addressing the specific reason @DrChinese gave for saying it was not.

Nullstein said:
Since I quoted that paper
I don't see any quotes from that paper in any of your posts. You referenced it, but as far as I can tell you gave no specifics about how you think that paper addresses the issues @DrChinese is raising.
 
  • #33
Nullstein said:
Personally I wouldn't bet on any classical deterministic mechanism in the first place. I find both non-locality and superdeterminism equally unappealing.
As Sabine points out, if you want a constructive explanation of Bell inequality violations, your options are nonlocality and/or SD. That's why we pivoted to a principle account (from information-theoretic reconstructions of QM). We cheated à la Einstein, as he did for time dilation and length contraction.
 
  • #34
PeterDonis said:
As I have already said, he didn't. He never said anything in the Deutsch paper you referenced was wrong. He just said it wasn't relevant to the entanglement swapping experiments he has referenced. He also gave a specific argument for why (in post #11, repeated in post #24).
He just claimed it didn't apply without pointing out which assumption in the paper would be violated. The paper discussed the most general formulation of quantum teleportation, without any restrictions on spatial or temporal order (or can you point to any such restriction in the paper?), so it applies to the special case of entanglement swapping.
PeterDonis said:
Your only response to that, so far, has been to assert that the Deutsch paper is relevant to the entanglement swapping experiments @DrChinese referenced, without any supporting argument and without addressing the specific reason @DrChinese gave for saying it was not.
That's not true. I addressed his argument multiple times (posts #27 and #29). Entanglement swapping is a special case of quantum teleportation, which is discussed in the paper in full generality. I even said that in the very post you quoted, but you cut it away.
PeterDonis said:
I don't see any quotes from that paper in any of your posts. You referenced it, but as far as I can tell you gave no specifics about how you think that paper addresses the issues @DrChinese is raising.
See post #29, which you just quoted. I pointed you to exactly the section that contains the argument and explained why it applies (again: entanglement swapping is a special case of what is discussed in the paper).
 
  • #35
Nullstein said:
The open question is the mechanism and superdeterminism is one way to address it.
I agree with this, but my issue with superdeterminism is that, from my perspective, it is a kind of "mechanism" used to justify a particular philosophical perspective, yet it is empty of explanatory value and in particular lacks algorithmic insights for an "agent".

/Fredrik
 
  • Like
Likes DrChinese and martinbn
  • #36
martinbn said:
Can you clarify. How can there be no common overlap? Any two past lightcones overlap.
You'd think so! But these photons never existed in a common light cone because their lifespan is short. In fact, in this particular experiment, one photon was detected (and ceased to exist) BEFORE its entangled partner was created.

In this next experiment, the entangled photon pairs are spatially separated (and did coexist for a period of time). However, they were created sufficiently far apart that they never occupied a common light cone.

High-fidelity entanglement swapping with fully independent sources (2009)
https://arxiv.org/abs/0809.3991
"Entanglement swapping allows to establish entanglement between independent particles that never interacted nor share any common past. This feature makes it an integral constituent of quantum repeaters. Here, we demonstrate entanglement swapping with time-synchronized independent sources with a fidelity high enough to violate a Clauser-Horne-Shimony-Holt inequality by more than four standard deviations. The fact that both entangled pairs are created by fully independent, only electronically connected sources ensures that this technique is suitable for future long-distance quantum communication experiments as well as for novel tests on the foundations of quantum physics."

And from a 2009 paper that addresses the theoretical nature of entanglement swapping with particles with no common past, here is a quote that indicates that in fact this entanglement IS problematic for any theory claiming the usual locality (local causality):

"It is natural to expect that correlations between distant particles are the result of causal influences originating in their common past — this is the idea behind Bell’s concept of local causality [1]. Yet, quantum theory predicts that measurements on entangled particles will produce outcome correlations that cannot be reproduced by any theory where each separate outcome is locally determined by variables correlated at the source. This nonlocal nature of entangled states can be revealed by the violation of Bell inequalities.

"However remarkable it is that quantum interactions can establish such nonlocal correlations, it is even more remarkable that particles that never directly interacted can also become nonlocally correlated. This is possible through a process called entanglement swapping [2]. Starting from two independent pairs of entangled particles, one can measure jointly one particle from each pair, so that the two other particles become entangled, even though they have no common past history. The resulting pair is a genuine entangled pair in every aspect, and can in particular violate Bell inequalities.

"Intuitively, it seems that such entanglement swapping experiments exhibit nonlocal effects even stronger than those of usual Bell tests. To make this intuition concrete and to fully grasp the extent of nonlocality in entanglement swapping experiments, it seems appropriate to contrast them with the predictions of local models where systems that are initially uncorrelated are described by uncorrelated local variables. This is the idea that we pursue here."


Despite the comments from Nullstein to the contrary, such swapped pairs are entangled without any qualification - as indicated in the quote above.
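The protocol quoted above is simple enough to check directly. Here is a minimal state-vector sketch of entanglement swapping (my own illustration, not taken from either paper; the qubit labels and the post-selected Bell outcome are choices made for simplicity):

```python
import numpy as np

# Single-qubit basis states
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
phi_plus = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

# Two independent pairs: photons (1,2) and (3,4)
psi = np.kron(phi_plus, phi_plus).reshape(2, 2, 2, 2)  # indices (q1,q2,q3,q4)

# Bell-state measurement on the middle photons 2 and 3:
# contract them with <Phi+|, leaving the joint state of photons 1 and 4
bell23 = phi_plus.reshape(2, 2)
post = np.einsum('abcd,bc->ad', psi, bell23.conj())
post = post / np.linalg.norm(post)  # renormalize after the projection

# Photons 1 and 4 are now in |Phi+>, although they never interacted
print(np.round(post.reshape(4), 3))  # -> [0.707 0.    0.    0.707]
```

Projecting onto any of the other three Bell outcomes works the same way; each leaves photons 1 and 4 in a known Bell state, which is why the swapped pair can violate a CHSH inequality exactly like a directly produced pair.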
 
  • #37
DrChinese said:
You'd think so! But these photons never existed in a common light cone because their lifespan is short. In fact, in this particular experiment, one photon was detected (and ceased to exist) BEFORE its entangled partner was created.
It is probably your phrasing that confuses me, but I still don't understand. For example, the past light cone of the event "production of the second pair of photons" contains the whole life of the first photon, which no longer exists. All the possible past light cones that you can pick intersect.
 
  • #38
Nullstein said:
I find both non-locality and superdeterminism equally unappealing. Both mechanism require fine-tuning.
How does non-locality require fine tuning?

Nullstein said:
However, if I had to choose, I would probably pick superdeterminism, just because it is less anthropocentric.
How is non-locality anthropocentric?

Nullstein said:
There are non-local cause and effect relations everywhere but nature conspires in such a way to prohibit communication? I can't make myself believe that, but that's just my personal opinion.
There is no conspiracy. See https://www.physicsforums.com/threa...unterfactual-definiteness.847628/post-5319182
 
  • #40
RUTA said:
Well, my understanding is that the experiments will check for violations of randomness where QM predicts randomness.
Then the proponents of “superdeterminism” should tackle radioactive decay.
 
  • Like
Likes jbergman and vanhees71
  • #41
martinbn said:
If I understand you correctly you are saying that it is (or might be) possible but it has not been found yet.
In principle, yes.

martinbn said:
Doesn't this contradict the no communication theorem?
No, because in theories such as Bohmian mechanics, the no communication theorem is a FAPP (for all practical purposes) theorem.

Roughly, this is like the law of large numbers. Is it valid for ##N=1000##? In principle, no. In practice, yes.
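To illustrate the FAPP point numerically (a toy sketch of my own, not specific to Bohmian mechanics): the sample mean of fair coin flips is never exactly 1/2 at any finite ##N##, but its fluctuations shrink like ##1/\sqrt{N}##, so the law holds for all practical purposes.

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 1000, 100000):
    # 10000 independent runs of n fair coin flips each
    means = rng.binomial(n, 0.5, size=10000) / n
    # observed spread of the sample mean vs. the 1/(2*sqrt(n)) prediction
    print(n, round(means.std(), 5), round(0.5 / np.sqrt(n), 5))
```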
 
  • #42
"Superdeterminism as a way to resolve the mystery of quantum entanglement is generally not taken seriously in the foundations community, as explained in this video by Sabine Hossenfelder (posted in Dec 2021). In her video, she argues that superdeterminism should be taken seriously, indeed it is what quantum mechanics (QM) is screaming for us to understand about Nature. According to her video per the twin-slit experiment, superdeterminism simply means the particles must have known at the outset of their trip whether to go through the right slit, the left slit, or both slits, based on what measurement was going to be done on them."

Why is "superdeterminism" not taken seriously. As Shimony, Clauser and Horne put it:

"In any scientific experiment in which two or more variables are supposed to be randomly selected, one can always conjecture that some factor in the overlap of the backwards light cones has controlled the presumably random choices. But, we maintain, skepticism of this sort will essentially dismiss all results of scientific experimentation. Unless we proceed under the assumption that hidden conspiracies of this sort do not occur, we have abandoned in advance the whole enterprise of discovering the laws of nature by experimentation."
 
  • #43
DrChinese said:
You'd think so! But these photons never existed in a common light cone because their lifespan is short. In fact, in this particular experiment, one photon was detected (and ceased to exist) BEFORE its entangled partner was created.

In this next experiment, the entangled photon pairs are spatially separated (and did coexist for a period of time). However, they were created sufficiently far apart that they never occupied a common light cone.

High-fidelity entanglement swapping with fully independent sources (2009)
https://arxiv.org/abs/0809.3991
"Entanglement swapping allows to establish entanglement between independent particles that never interacted nor share any common past. This feature makes it an integral constituent of quantum repeaters. Here, we demonstrate entanglement swapping with time-synchronized independent sources with a fidelity high enough to violate a Clauser-Horne-Shimony-Holt inequality by more than four standard deviations. The fact that both entangled pairs are created by fully independent, only electronically connected sources ensures that this technique is suitable for future long-distance quantum communication experiments as well as for novel tests on the foundations of quantum physics."

And from a 2009 paper that addresses the theoretical nature of entanglement swapping with particles with no common past, here is a quote that indicates that in fact this entanglement IS problematic for any theory claiming the usual locality (local causality):

"It is natural to expect that correlations between distant particles are the result of causal influences originating in their common past — this is the idea behind Bell’s concept of local causality [1]. Yet, quantum theory predicts that measurements on entangled particles will produce outcome correlations that cannot be reproduced by any theory where each separate outcome is locally determined by variables correlated at the source. This nonlocal nature of entangled states can be revealed by the violation of Bell inequalities.

"However remarkable it is that quantum interactions can establish such nonlocal correlations, it is even more remarkable that particles that never directly interacted can also become nonlocally correlated. This is possible through a process called entanglement swapping [2]. Starting from two independent pairs of entangled particles, one can measure jointly one particle from each pair, so that the two other particles become entangled, even though they have no common past history. The resulting pair is a genuine entangled pair in every aspect, and can in particular violate Bell inequalities.
None of these articles contradicts what I said. Again, nobody doubts that entanglement swapping produces Bell pairs that violate Bell's inequality. The question is: does entanglement swapping add to the mystery? And the answer is: it does not. The article by Deutsch applies to all these experiments and therefore shows that nothing mysterious beyond the usual mystery is going on.
DrChinese said:
"Intuitively, it seems that such entanglement swapping experiments exhibit nonlocal effects even stronger than those of usual Bell tests.
You should rather have highlighted the word "intuitively": one may come to this conclusion intuitively, but a more complete analysis shows that nothing about entanglement swapping is any stronger or any more mysterious than ordinary Bell pairs.
DrChinese said:
Despite the comments from Nullstein to the contrary, such swapped pairs are entangled without any qualification - as indicated in the quote above.
You misrepresent what I wrote. Of course the entanglement-swapped pairs are entangled; I said that multiple times. It's just that no additional mystery is added by the process of swapping. All the mystery is concentrated in the initial Bell pairs themselves. The swapping process has a local explanation.
 
  • #44
I have never understood why there is such a visceral oppositional response to superdeterminism while the Many Worlds Interpretation and Copenhagen Interpretation enjoy tons of support. In Many Worlds the answer is essentially: "Everything happens, thus every observation is explained, you just happen to be the observer observing this outcome". In Copenhagen it's equally lazy: "Nature is just random, there is no explanation". How are these any less anti-scientific than Superdeterminism?
 
  • Skeptical
  • Like
Likes physika and PeroK
  • #45
Quantumental said:
I have never understood why there is such a visceral oppositional response to superdeterminism while the Many Worlds Interpretation and Copenhagen Interpretation enjoy tons of support. In Many Worlds the answer is essentially: "Everything happens, thus every observation is explained, you just happen to be the observer observing this outcome". In Copenhagen it's equally lazy: "Nature is just random, there is no explanation". How are these any less anti-scientific than Superdeterminism?
I think that all three somewhat break with simple ideas about how science is done and what it tells us but to a different degree.

Copenhagen says that there are limits to what we can know because we need to split the world into the observer and the observed. The MWI does away with unique outcomes. Superdeterminism does away with the idea that we can limit the influence of external degrees of freedom on the outcome of our experiment. These might turn out to be different sides of the same coin but taken at face value, the third seems like the most radical departure from the scientific method to me.

There's also the point that unlike Copenhagen and the MWI, superdeterminism is not an interpretation of an existing theory. It is a property of a more fundamental theory which doesn't make any predictions to test it (NB: Sabine Hossenfelder disagrees) because it doesn't even exist yet.
 
  • Like
Likes Lord Jestocost and DrChinese
  • #46
Lord Jestocost said:
Why is "superdeterminism" not taken seriously. As Shimony, Clauser and Horne put it:

"In any scientific experiment in which two or more variables are supposed to be randomly selected, one can always conjecture that some factor in the overlap of the backwards light cones has controlled the presumably random choices. But, we maintain, skepticism of this sort will essentially dismiss all results of scientific experimentation.
I don't want to defend superdeterminism, but I have trouble seeing the difference from placebo effects in controlled medical studies, where it is randomly determined which patient should receive which treatment. Of course, we don't tell the individual patients which treatment they got. But we would naively not expect that it has an influence whether the experimenter (i.e. the doctor) knows which patient received which treatment. But apparently it has some influence.

I have the impression that the main difference from the Bell test is that for placebo effects in medicine we can do experiments to check whether such an influence is present. But the skepticism itself, that such an influence might be present in the first place, doesn't seem to me very different. (And if the conclusion is that non-actionable skepticism will lead us nowhere, then I can accept this. But that is not how Shimony, Clauser and Horne have put it.)
 
  • #47
gentzen said:
But we would naively not expect that it has an influence whether the experimenter (i.e. the doctor) knows which patient received which treatment. But apparently it has some influence.
Whether this is true or not (that question really belongs in the medical forum for discussion, not here), this is not the placebo effect. The placebo effect is where patients who get the placebo (but don't know it--and in double-blind trials the doctors don't know either) still experience a therapeutic effect.

I don't see how this has an analogue in physics.
 
  • #48
Shimony, Clauser and Horne said:
In any scientific experiment in which two or more variables are supposed to be randomly selected
gentzen said:
but I have trouble seeing the difference from placebo effects in controlled medical studies
PeterDonis said:
I don't see how this has an analogue in physics.
I am not suggesting an analogue in physics; I only have trouble "seeing" where that scenario, which occurs in controlled medical studies, was excluded in what Shimony, Clauser and Horne wrote.

I also tried to guess what they actually meant, namely that if there is nothing you can do to check whether your skepticism was justified, then it just stifles scientific progress.
 
  • #49
gentzen said:
I only have trouble "seeing" where that scenario, which occurs in controlled medical studies, was excluded in what Shimony, Clauser and Horne wrote.
I don't see how the two have anything to do with each other.

gentzen said:
I also tried to guess what they actually meant
Consider the following experimental setup:

I have a source that produces two entangled photons at point A. The two photons go off in opposite directions to points B and C, where their polarizations are measured. Points B and C are each one light-minute away from point A.

At each polarization measurement, B and C, the angle of the polarization measurement is chosen 1 second before the photon arrives, based on random bits of information acquired from incoming light from a quasar roughly a billion light-years away that lies in the opposite direction from the photon source at A.

A rough diagram of the setup is below:

Quasar B -- (1B LY) -- B -- (1 LM) -- A -- (1 LM) -- C -- (1B LY) -- Quasar C

In this setup, any violation of statistical independence between the angles of the polarizations and the results of the individual measurements (not the correlations between the measurements, those will be as predicted by QM, but the statistics of each measurement taken separately) would have to be due to some kind of pre-existing correlation between the photon source at A and the distant quasars at both B and C. This is the sort of thing that superdeterminism has to claim must exist.
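To make the last point concrete, here is a small sketch (my own, using only the standard QM formulas for polarization-entangled photons in the state ##|\Phi^+\rangle##, not data from any particular experiment). Each detector's marginal statistics are 50/50 at every polarizer angle, so the individual measurements show no dependence on the settings, yet the correlations still exceed the CHSH bound:

```python
import numpy as np

def corr(a, b):
    """QM correlation E(a,b) for |Phi+> photons at polarizer angles a, b (radians)."""
    return np.cos(2 * (a - b))

# Each photon taken alone passes its polarizer with probability 1/2 at any angle.
# CHSH combination at the standard optimal photon angles:
a1, a2, b1, b2 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S = corr(a1, b1) - corr(a1, b2) + corr(a2, b1) + corr(a2, b2)
print(S)  # -> 2.828..., above the local-hidden-variable bound of 2
```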
 
  • Like
Likes DrChinese and Lord Jestocost
  • #50
kith said:
I think that all three somewhat break with simple ideas about how science is done and what it tells us but to a different degree.

Copenhagen says that there are limits to what we can know because we need to split the world into the observer and the observed. The MWI does away with unique outcomes. Superdeterminism does away with the idea that we can limit the influence of external degrees of freedom on the outcome of our experiment. These might turn out to be different sides of the same coin but taken at face value, the third seems like the most radical departure from the scientific method to me.

There's also the point that unlike Copenhagen and the MWI, superdeterminism is not an interpretation of an existing theory. It is a property of a more fundamental theory which doesn't make any predictions to test it (NB: Sabine Hossenfelder disagrees) because it doesn't even exist yet.

I disagree. Copenhagen's "there are limits to what we can know about reality, quantum theory is the limit we can probe" is no different from "reality is made up of more than quantum theory" which SD implies. It's semantics. As for MWI, yes, but by doing away with unique outcomes (at least in the most popular readings of Everettian QM) you literally state: "The theory only makes sense if you happen to be part of the wavefunction where the theory makes sense, but you are also part of maverick branches where QM is invalidated, thus nothing can really be validated as it was pre-QM". I would argue that this stance is equally radical in its departure from what we consider the scientific method to be.
 
  • #51
PeterDonis said:
Consider the following experimental setup:

I have a source that produces two entangled photons at point A. The two photons go off in opposite directions to points B and C, where their polarizations are measured. Points B and C are each one light-minute away from point A.

At each polarization measurement, B and C, the angle of the polarization measurement is chosen 1 second before the photon arrives, based on random bits of information acquired from incoming light from a quasar roughly a billion light-years away that lies in the opposite direction from the photon source at A.

A rough diagram of the setup is below:

Quasar B -- (1B LY) -- B -- (1 LM) -- A -- (1 LM) -- C -- (1B LY) -- Quasar C

In this setup, any violation of statistical independence between the angles of the polarizations and the results of the individual measurements (not the correlations between the measurements, those will be as predicted by QM, but the statistics of each measurement taken separately) would have to be due to some kind of pre-existing correlation between the photon source at A and the distant quasars at both B and C. This is the sort of thing that superdeterminism has to claim must exist.
Exactly. On p. 3 of Superdeterminism: A Guide for the Perplexed, Sabine writes:
What does it mean to violate Statistical Independence? It means that fundamentally everything in the universe is connected with everything else, if subtly so.
 
  • #52
PeterDonis said:
In this setup, any violation of statistical independence between the angles of the polarizations and the results of the individual measurements (...) would have to be due to some kind of pre-existing correlation between the photon source at A and the distant quasars at both B and C.
Can you prove this? Is this not similar to claiming that in the scenario occurring in controlled medical studies, any correlations would be due to correlations in the sources of randomness and the selected patient? You are basically postulating a mechanism for how the violation of statistical independence could have happened, and then pointing out that this mechanism would be implausible. Of course it is implausible, but the task would have been to prove that such an implausible mechanism is the only way a violation of statistical independence can arise.
 
  • #53
Lord Jestocost said:
Then the proponents of “superdeterminism” should tackle the radioactive decay.
I don't study SD, so I don't know what they're looking for exactly. Maybe they can't control sample preparation well enough for radioactive decay? Just guessing, because that seems like an obvious place to look.
 
  • #54
gentzen said:
Can you prove this?
Isn't it obvious? We are talking about a lack of statistical independence between a photon source at A and light sources in two quasars, each a billion light-years from A in opposite directions. How else could that possibly be except by some kind of pre-existing correlation?

gentzen said:
Is this not similar to claiming that in the scenario occurring in controlled medical studies, any correlations would be due to correlations in the sources of randomness and the selected patient?
Sort of. Correlations between the "sources of randomness" and the patient would be similar, yes. (And if we wanted to be extra, extra sure to eliminate such correlations, we could use light from quasars a billion light-years away as the source of randomness for medical experiments, just as was done in the scenario I described.) But that has nothing to do with the placebo effect, and is not what I understood you to be talking about before.
 
  • #55
Quantumental said:
I disagree. Copenhagen's "there are limits to what we can know about reality, quantum theory is the limit we can probe" is no different from "reality is made up of more than quantum theory" which SD implies. It's semantics.
Maybe. But with this notion of semantics the question is: what topic regarding interpretations and foundations of established physics isn't semantics?

Quantumental said:
As for MWI, yes, but by doing away with unique outcomes (at least in the most popular readings of Everettian QM) you literally state: "The theory only makes sense if you happen to be part of the wavefunction where the theory makes sense, but you are also part of maverick branches where QM is invalidated, thus nothing can really be validated as it was pre-QM". I would argue that this stance is equally radical in its departure from what we consider the scientific method to be.
I find it difficult to discuss this because there are problems with the role of probabilities in the MWI which may or may not be solved. For now, I would say that I don't see the difference between my experience of the past right now belonging to a maverick branch vs. a single-world view where, by pure chance, I and everybody else got an enormous amount of untypical data which led us to infer wrong laws of physics.
 
  • #56
Nullstein said:
"Quantum nonlocality" is just another word for Bell-violating, which is of course accepted, because it has been demonstrated experimentally and nobody has denied that. The open question is the mechanism and superdeterminism is one way to address it.
Right, as Sabine points out in her video:
Hidden Variables + Locality + Statistical Independence = Obeys Bell inequality
Once you've committed to explaining entanglement with a mechanism (HV), you're stuck with violating at least one of Locality or Statistical Independence.
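For concreteness, this is the standard CHSH form of that statement (a sketch of the textbook argument, not anything specific to the video). With hidden variables ##\lambda## determining local outcomes ##A(a,\lambda), B(b,\lambda) = \pm 1## and a distribution ##\rho(\lambda)## that does not depend on the settings (Statistical Independence),
$$S = \int d\lambda\, \rho(\lambda) \left[ A(a,\lambda)B(b,\lambda) - A(a,\lambda)B(b',\lambda) + A(a',\lambda)B(b,\lambda) + A(a',\lambda)B(b',\lambda) \right].$$
The bracket equals ##\pm 2## for every ##\lambda##, so ##|S| \le 2##, while QM predicts values up to ##2\sqrt{2}##; hence at least one of the three assumptions must fail.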
 
  • Like
Likes Lord Jestocost and DrChinese
  • #57
gentzen said:
I also tried to guess what they actually meant, namely that if there is nothing you can do to check whether your skepticism was justified, then it just stifles scientific progress.
Indeed, I would say it violates the principles of sound inference, which are presumably the foundation of the scientific process. Without evidence, the natural and sound assumption is to not assume dependence. To assume, and invest in representing, the existence of an unknown causal mechanism seems deeply irrational to me. It is similar to the fallacy one commits when postulating an ad hoc microstructure with an ergodic hypothesis. Extrinsic ad hoc elements corrupt the natural inference process, and thus explanatory value. Superdeterminism is IMO such an extrinsic ad hoc element. I wouldn't say it's "wrong"; I just find it an irrational thought from the perspective of learning, and scientific progress is about learning - but in a "rational way".

/Fredrik
 
  • #58
Demystifier said:
How does non-locality require fine tuning?
In order to reproduce QM using non-locality, you don't just need non-locality. You need to tune the equations in such a way that they obey the no communication property, i.e. it must be impossible to use the non-local effects to transmit information. That's very much akin to the no-free-will requirement in superdeterminism. As can be seen in this paper, such a non-local explanation of entanglement requires even more fine-tuning than superdeterminism.
Demystifier said:
How is non-locality anthropocentric?
Because communication is an anthropocentric notion. If the theory needs to be fine-tuned to make communication impossible, then this is a very anthropocentric requirement.
 
  • Skeptical
  • Like
Likes PeroK and gentzen
  • #59
RUTA said:
Once you've committed to explaining entanglement with a mechanism (HV), you're stuck with violating at least one of Locality or Statistical Independence.
Only if you insist that by "mechanism" we mean an explanation in terms of hidden variables. I rather expect that we need to refine our notion of what constitutes a mechanism. Before Einstein, people already knew the Lorentz transformations, but they were stuck believing in an ether explanation. Even today, there are a few holdouts left who can't wrap their heads around the modern interpretation. I think with quantum theory, we are in quite a similar situation. We know the correct formulas, but we are stuck in classical thinking. I guess we need another Einstein to sort this out.
 
  • Like
Likes Fra and martinbn
  • #60
Nullstein said:
You need to tune the equations in such a way that they obey the no communication property, i.e. it must be impossible to use the non-local effects to transmit information.
For me, it is like a 19th-century critique of the atomic explanation of thermodynamics, based on the idea that it requires fine-tuning of the rules of atomic motion to fulfill the 2nd law (that the entropy of the full system cannot decrease) and the 3rd law (that you cannot reach a state of zero entropy). And yet, as Boltzmann understood back then and many physics students understand today, those laws are FAPP laws that do not require fine-tuning at all.

Indeed, in Bohmian mechanics it is well understood why nonlocality does not allow (in the FAPP sense) communication faster than light. It is a consequence of quantum equilibrium; for more details see e.g. https://arxiv.org/abs/quant-ph/0308039
 
