
The role of false information in the Copenhagen Interpretation

  1. Mar 23, 2009 #1
    I don't like CI, but let's play by its rules for a while.

    So, in the mainstream CI the wavefunction is non-physical: it is just our subjective 'knowledge' of the system.

    So, I use an experimental device: I put a cat inside a box and press a button. The device inside the box works based on the entanglement of a photon, which I measure on my side.

    On my side I see the red lamp light up, and I say: the cat is dead. That is how the device was supposed to work. The wavefunction of the cat collapsed to the dead state.

    Then I open the box and, to my surprise, I see a live cat! It turns out the guy who helped me build the device lied to me, or made a mistake: he connected the red wire to the green lamp and vice versa.

    Of course you would say: well, nothing strange, the information was incorrect. So when the probability given by the wavefunction after collapse is confirmed by the experiment, it confirms CI. When it contradicts the experiment, then the initial assumptions (about the device) were incorrect.

    Do you see the problem here?

    Do proponents of CI have an answer for the case where the wavefunction collapses based on false information?
  3. Mar 23, 2009 #2



    I like this story.
  4. Mar 23, 2009 #3



    Good example. I'll try to get back to this later; I have something to say on it.

    I think I'm close to CI, although my view is more radical and implies more than an interpretation only.

  5. Mar 23, 2009 #4



    As I see it, the meaning of the wavefunction "not being physical" is just "not real" in the classical realist sense. But then again, classical realism means nothing to me.

    I agree it is a sort of "subjective information", BUT there is a physical basis for this information. So it's still real, just not "real" in the realist sense. This subjective information is also the basis for the observer's actions.

    The observer acts on the information it has, because it has no other choice. There is no communication with an external judge that could measure the correctness of the information. The evaluation of the information has to be done by the observer itself, from the inside. There is no other way.

    So an observer responds and acts upon ALL information, even what you call false information, by the same consistent logic.

    That's how I see it, and I have no conceptual problems with it.

    I'd say you illustrate the case where the communication channel cannot be trusted. And this is a very real situation.

    As I see it, to ask whether information in itself is true or false makes no sense. It must be put in the proper context. What is the purpose of information, and what is the utility for an observer of being informed? I'd say he has a better chance of survival if he can choose actions that benefit him.

    An observer gets information about its environment. This information is the basis for how the observer acts: the observer responds according to the information it has about its own environment. The key here, as I see it, is that there is no meaning in talking about right or wrong information. Information in itself is neutral, and the observer responds just as well to "false information" as it does to "correct information".

    Compare with game theory. A player simply has no choice but to evaluate and respond to whatever information it has. It has no access to an "external judge".

    Now to the point.

    If he bases his actions on "wrong" information, this usually means that the observer's actions are somewhat in contradiction with, or "fighting", the environment. The feedback the observer gets from that will surely cause a backreaction on his own information, performing a self-correction or destroying the inconsistent parts, like a kind of "mutation".

    The proper context for the view I outline is an evolutionary one. There exists no universal, observer-independent measure of whether information is true or false. But if you accept the reasoning I briefly tried to communicate, this is not a problem. A relative evolution does not stand or fall with absolute measures of true or false. The fallacy to start with is the illusion that it ever makes sense to talk about information as true or false. It can only be in favour of, or against, other information. And when there is a conflict, a contradiction, then there is a mutual selective pressure that will produce a negotiation and a new consensus.

    Not sure if that made sense, but it's my very personal view. And I consider it a conservative extension of the CI reasoning as advocated by Bohr. It aims to remove the fictitious dependence on a classical reference, and instead introduces the abstraction of an evolving observer.

  6. Mar 23, 2009 #5



    So what I am suggesting is that it makes no sense to analyze this constructively unless you view it in the right context: the observer acts based upon his information, and these actions produce backreactions (feedback) from the environment, which ultimately drive the evolution of the observer's information, including information compression (clearly a fixed-mass observer cannot retain all historical information), so there is a decision problem here.

    The problem exists only when you do not take the evolutionary perspective and ask whether information is right or wrong. My point is that the correctness of the information has consequences for the observer's evolution.

    One might argue that this is more than standard CI, and it sure is. But I see it as a natural extension of CI. No twisted multiverse scenario is needed, and neither is hidden-variable reasoning.

    If we stick to interpretations only, I think there is no way around the fact that QM is weird. To reduce this weirdness I think we have to understand QM in a bigger context, and for that a strict reinterpretation alone doesn't help.

  7. Mar 23, 2009 #6



    Consider a game-theoretic setting: stock markets, strategy games, anything. In reality, plain disinformation, or simply uncertain communication channels, are real, but they do not invalidate the game itself. Rather, they are part of the game. But such errors will be self-correcting, in the sense that an inconsistent, contradictory view will never survive except transiently.

    Last edited: Mar 23, 2009
  8. Mar 24, 2009 #7



    Another point IMHO:

    It's not possible to conclude with certainty that you've been tricked by the previous measurement device.

    Your eyes in this example are also a kind of measurement device.

    So all you can conclude for sure is that the latest information from your eyes is in contradiction with your expectation. If you insist on the right/wrong polarization, one could only say that either the first measurement device tricked you, or your eyes tricked you, or your implicit assumption that the cat in the box is a closed system was wrong.

    So either the premise is wrong, or the logic by which you infer the expectation is wrong.

    I don't think it makes sense to say which is right and which is wrong, all the inside observer can see, is an inconsistency.

    My idea is that this inconsistency can be measured if you consider how to "count evidence", which implies reconstructing the mathematical abstraction of information. Similarly, the clashing pieces of information can be weighted, and there should be a statistical resolution, where the different weights can be thought of as the inertia of the parts.
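    As a toy illustration of that weighting idea (entirely my own construction, not anything from the CI literature; the `resolve` helper and the numbers are invented for the example):

```python
# Sketch of "weighting clashing information": two contradictory readings
# are combined by confidence, so the weights act like the "inertia" of
# each piece of information. All names and numbers here are illustrative.

def resolve(readings):
    """Confidence-weighted average of conflicting readings."""
    total_weight = sum(w for _, w in readings)
    return sum(x * w for x, w in readings) / total_weight

# The lamp says "dead" (0.0), our eyes say "alive" (1.0).
# The eyes get far more weight: direct observation is harder to fake.
consensus = resolve([(0.0, 1.0), (1.0, 9.0)])
print(consensus)  # pulled strongly toward the heavily weighted reading
```

    The heavier piece of information moves less under conflict, which is the "inertia" picture: the resolution lands near the reading with the most weight behind it.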

    I think what remains to be understood is exactly what this information evolution looks like. Somehow, being wrong, in the sense of making decisions and basing actions on uncertain information, corresponds to variation. The smallness of this variation is given by the relative confidences. It's simply unlikely that the information is so uncertain as to self-destruct, because if that happened, another structure would probably emerge instead.

    So the observers and measurement devices that are actually seen have been chosen by evolution; therefore it's highly unlikely to see massive observers that are inconsistent. They would not survive.

  9. Mar 24, 2009 #8
    But no information is 100% reliable.

    I don't know why that fact is disregarded by CI proponents. I have never read anything about false information in the context of CI.

    With false information we need to invent another thing: the UNcollapse.

    Let's say we can draw a conclusion from the fact that we received a photon (or did not receive it). So, we did not receive it, and the wavefunction collapsed based on that measurement. But oops, the detector was offline! So the previous information was false. At the moment we realize it, the wavefunction uncollapses to the previous, unmeasured state.
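    One toy way to picture that uncollapse (entirely my own sketch, not standard CI machinery; the `collapse` helper and the likelihood numbers are invented) is to treat the "knowledge" as a probability and retract the Bayesian update once the detector report turns out to be worthless:

```python
# Toy model: CI "state of knowledge" as P(photon arrived), and the
# "uncollapse" as redoing the update once the detector report is found
# to carry no information. All names and numbers here are illustrative.

def collapse(prior, likelihood):
    """Bayesian update of P(photon arrived) given a "no photon" report.

    likelihood = (P(report | photon arrived), P(report | no photon))
    """
    p_yes = prior * likelihood[0]
    p_no = (1 - prior) * likelihood[1]
    return p_yes / (p_yes + p_no)

prior = 0.5  # unmeasured state: 50/50

# The detector reports "no photon"; trusting the detector, this is
# strong evidence, so our knowledge "collapses".
posterior = collapse(prior, likelihood=(0.01, 0.99))

# Then we learn the detector was offline: its report is equally likely
# either way, so redoing the update from the same prior "uncollapses"
# the state of knowledge.
uncollapsed = collapse(prior, likelihood=(0.5, 0.5))

print(round(posterior, 2))  # knowledge collapsed toward "no photon"
print(uncollapsed)          # back to the unmeasured 50/50
```

    On this reading nothing physical un-happens: a flat likelihood simply returns the prior, so the "uncollapse" is just un-learning a worthless report.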

    Do you agree?
  10. Mar 24, 2009 #9



    I am not a proponent of CI. Nevertheless, I think I know how a clever proponent of CI might respond. Here it is:
    In your case the information was not wrong. Instead, it was your SUBJECTIVE INTERPRETATION of the correct information that was wrong. CI is not a theory of consciousness so it cannot explain how wrong subjective interpretation by a human may happen. Instead, CI is a theory of information exchanged between unconscious objects (whatever they are) that are not able to create their own interpretations of the information they receive.
    Does it make sense to you?
  11. Mar 24, 2009 #10



    True. This is IMHO the basis for the fundamental variation in the evolving scheme. The smallness of the uncertainty is also a result of the evolving scheme. There is a self-stabilisation here.

    I think plain CI is too simple. However, the information emphasis is, I think, the good part. I like CI, but of course, to sit and come up with interpretations that make no difference is not what I have in mind. Given standard QM, CI is what is closest to me.

    But it seems clear to me that the problems with QM are far more serious than can be solved by reinterpretations that keep the formalism. I somehow defend CI, since I think of it as the minimalist interpretation.

    I haven't read much on this either that is very explicit. What I wrote is mainly based on my personal views.

    Also the basic positions of CI, can live on even in modified formalism.

    If there is a defendable mechanism for that deduction, then I agree.

    In the way I see things, a superposition can be emergent.

    I am still working on some ideas of my own here that aren't yet mature, but the main idea is that we have two communication channels, feeding us with information about two different microstructures; when their information "merges" at the observer, the result is a superposition. The superposition can be seen as a new microstructure that constitutes the optimum representation, or maximal retention, of useful information, given the constraint of the observer's limited memory capacity.

    IMO, the trick to seeing what a superposition is, is to treat it as the solution to a maximisation problem, where the task is to invent a representation that allows the observer to encode as much USEFUL information about its environment as possible, given the obvious constraint that the full time-history datastreams of the communication channels cannot possibly be stored. The observer has to discard information, and the superposition is a steady-state optimum representing maximum stability.

    One problem I'm still struggling with is how to mathematically describe the evolution of these microstructures, which represent "compression algorithms" used to compress the time history so that an optimal truncation can take place with a minimum of information loss, just as an MP3 algorithm is a very good compression giving a very high sound quality / bandwidth ratio. I am optimistic that quantum logic will ultimately be understood as the unique solution to a specific optimisation problem. Then the ad hoc postulation of the weird quantum logic would not be necessary.

    But I am very confident that the key spirit of CI will survive: the key concept is a state of information, rather than the state of some realist world. The concept of information is to me pretty much also the essence of the "inside view".

    The principle of locality is implemented so that a local observer responds only to local information. This is a neutral statement, and the action doesn't depend on the correctness of the information, because in fact there is no established measure of correctness. Instead, the correct statements are the ones that are stable, and thus consistent with the local environment.

  12. Mar 24, 2009 #11



    I think this is also a good way of phrasing it. I can agree on this.

    I would however add that the entire notion of which interpretation is correct and which is not lacks a basis. I think it's even the point that ALL subjective interpretations (consistent with local constraints of information capacity) are a priori allowed, and then which views end up populated in a given environment is a matter of evolution.

    There is only one way for the observer to "test information", and that is to play the game!
    Act accordingly, let the feedback decide, and again act according to that. And this is exactly how I picture physical interactions proceeding. The information is evolving. Information is only judged relative to the prior, without absolute scales.

  13. Mar 24, 2009 #12
    I will review all your answers later, for now just few notes.

    Doesn't it all mean that CI is recursive not only on the macro/micro level (the behavior of particles is described based on the 'knowledge' of huge systems called observers, while observers are themselves collections of QM particles),

    but also recursive on the information level: we analyze everything based on the wavefunction, which is in fact our knowledge, but our knowledge comes from our OTHER observations (like whether the detector is really connected)?
  14. Mar 24, 2009 #13



    I don't declare myself a CI advocate in the sense of the original ones, Bohr etc. My responses are based on my own view, which sees CI as a decent interpretation, given that we are talking about interpretations only. But I argue that if you take it seriously, you get suggestions for revisions of the formalism, such as evolving Hilbert spaces. The inconsistency in standard CI is that it doesn't acknowledge the information implicit in the choice of the Hilbert space. I think this can be cured while still keeping the best part of CI (the information perspective). If you acknowledge this background information, Hilbert spaces and classical observers, then the consequence is that they themselves evolve.

    Recursive is a fine descriptor for me. The apparently circular definition of information, which I am sure bugs some of us, is instead the basis for the evolution. The apparent difficulty of arriving at a certain static consensus is exactly the reason for evolution.

  15. Mar 24, 2009 #14
    Recursive is a bad thing; it does not allow an axiomatisation. What can you tell about pure QM systems without observers? Check my example below.

    In fact it is a modified form of the Wigner's friend experiment, which I call "the newfound planet".

    So, there was a planet outside our cosmological horizon, so we were not able to observe it - in principle. But after a few billion years it appeared inside our light cone, so we were able to communicate with the civilization on that planet, and one proponent of CI even traveled there.

    Upon his arrival the aliens tell him the history of the planet, from Queen 0x00000001 up to Emperor 0xFFFFFFFF The Great. But he replies: "Your history, guys, is a total fake: until the moment we noticed you in our telescopes and thus collapsed your wavefunctions, you did not have anything - no kings, no queens, just a pure mess of wavefunctions."

    The angry aliens hit him hard for such words, and he collapses on the floor :)

    My point here is that if CI claims the importance of the subjective view ("knowledge"), and if you accept the recursive view of CI (so it covers both levels, microscopic and macroscopic), it should give consistent predictions not only on the QM level but on the macroscopic level too. Claims about subjective experiences must be consistent (if they are used by CI), which is not the case in my example.
  16. Mar 24, 2009 #15



    I might add more later, but here is a quick response.

    Axiomatisation is not a proper strategy for learning, IMHO. It is the consequence of seeing knowledge as an accumulating body of facts - i.e. a scientist is never wrong, he is just incomplete. That view has problems.

    What reason do you have for expecting an axiomatisation in the first place?

    I think I see what you mean. Your view is, I think, _probably_ more common than mine. But from my point of view, the expectation you have of consistency is just that: an expectation.

    Fundamentally, the inconsistency is even the key to evolution. I may not have explained it well, but there are degrees of inconsistency. And consistency is not achieved in finite processes, any more than an inside observer can with certainty infer the symmetry transformations that recover consistency with other observers.

    I think you insist on a kind of bird's view. Several others do the same, and most probably I'm in the minority. The bird's view is exactly the key to my objection.

    The consistency you seek is emergent in my view. So there are processes where this consistency is achieved for all practical purposes. But it's not fundamental. Therein lies our difference.

  17. Mar 24, 2009 #16
    I see the slippery words we are using not as a potential for evolution, but rather as a weakness of modern physics.

    Remember the history of chemistry? It began with words: take liquid mercury, put in it a dead frog found in a cemetery, add the blood of a virgin... blah blah. Depending on which frog was used, alchemists obtained different, inconsistent results.

    When chemistry became mature, all such recipes were replaced with unambiguous formulas all people agree upon.

    When physics becomes mature, all that 'wave-particle dualism' stuff will be replaced with pure formulas all physicists agree on, so words will be used just for commentary, not to explain the actual work.

    All that 'interpretation' stuff, like 'our knowledge about the system' et cetera, is nothing more than the magic spells alchemists used, believing that those spells actually helped their chemical reactions.

    Yes, I insist on the bird's view: a few TOE equations which can explain everything from first principles. Upwards - from the TOE to the Standard Model, atoms, observers - and downwards only as an anthropic principle.

    Could you explain why you don't expect that we will ever have a bird's-view theory?
  18. Mar 24, 2009 #17



    Of course any sensible person demands hard arguments for how this is going to make a difference. The slippery words are just indicative of the slippery road we are all unavoidably on.

    I've only briefly indicated some of my reasoning, but the real argument would be to have real progress in physics on the table - to actually solve some of the open problems in physics in a way that can be experimentally tested. We don't have that yet. Until then, we're all free to place our bets.

    Then your questions make sense. I at least see a consistency in your questions, even though I may differ.

    I somehow hoped that some of the arguments would follow from the previous responses, and there are several ways to comment on this. But more slippery words might not help. The hard arguments are not here yet.

    I am probably not as alien to a kind of bird's view as you might think. The main difference is that I see it as emergent and evolving. Thus I do not believe in a "hard" bird's view; you might say that I think all a frog can ever see is the shadow of some bird's view, but the frog cannot hope to describe this projection exactly. Rather, a group of interacting frogs may effectively agree upon a bird's view that restores consistency in their interactions. But then there can always be a larger group! And if the size of a frog is constant, there appears a kind of horizon beyond which the frog cannot SEE the larger structure, and the frog's best shot is simply indistinguishable from THE bird's view. There is a limit to how large a structure is MEASURABLE from the point of view of the inside.

    But I do not think that this bird's view is necessarily the same if you compare two remote groups of frogs, unless they have spent an eternity equilibrating.

    But let me ask this.

    How do you intend to verify that any candidate bird's view is accurate? How do you distinguish it from just your best guess?

    If you can't do that, we are in quite close agreement. To me, this subtle difference does make a difference for my own reasoning and methodology. I have no illusions of classical realism. A lot of people, including several famous ones, seem to want to replace the indeterminism that QM introduces with a sort of bird's view of symmetries that supposedly takes the place of classical realism. 't Hooft seems keen on this. I reflected on it in this thread: https://www.physicsforums.com/showthread.php?t=237264&page=3 - let me know what you think of 't Hooft's ideas.

    I think you asked for some arguments behind the information view; I tried to give them. But I'm the first to admit their slippery nature.

  19. Mar 24, 2009 #18
    What makes you think that if you increase the number of frogs, the bird's view will become more and more complex?

    If the TOE equations are simple, then adding more frogs does not give anything new, just as adding more moving bodies does not change GR. It looks like you don't believe you could print a t-shirt with the TOE equations on it?

    There are 2 options:
    1. You believe in an infinite 'ladder' of theories, like Newtonian physics -> QM -> Standard Model -> minimal supersymmetric theory -> ? -> superstrings -> ?, so theory N is a low-energy limit of theory N+1.
    2. You are talking about non-fundamental physical laws - laws which cannot be derived from the fundamental ones (statements about the behavior of complicated systems, or statements not deducible from the fundamental laws).

    To give you an example, in formal arithmetic the Peano axioms are the fundamental laws. Still, there is an undecidable Gödel statement G: G cannot be deduced from the axioms, and yet (assuming consistency) it is true. This is an example of a 'non-fundamental law' in the world of numbers.
  20. Mar 24, 2009 #19



    It's not so much that I actually expect the worst as that I am "prepared for the worst".

    For one thing, for combinatorial reasons the complexity increases. This does not imply that the laws are complex; it just inflates the complexity boundary.

    Smolin asked a question in one of his talks on time and evolving law that is quite relevant here: what exactly is the meaning of an eternal law, if the age of the universe is only 14 billion years?

    The logic is the same as saying that because during 14 billion years you have only seen white swans, you will never in the future see anything else. The expectation is well founded, but not the certainty.

    This is a possibility for sure. But not the only possibility, which is close to my point.

    Not quite. Possibly, the closest thing would then be the induction step capturing if possible the logic of a self-modifying algorithm :) That might fit on a t-shirt. The problem is that the print would probably be hard to distinguish from the t-shirt itself. It would have to be an evolving t-shirt, in which case no print would even be necessary - the t-shirt would sort of speak for itself ;)

    I'm not sure what you mean here. What I am talking about are observable laws: laws that an observer can decode from experiments or interactions. The point of laws is to be able to predict the future. So the behaviour of an observer, IMHO, indirectly reflects its "opinion" of the law.

    A group of observers that have been interacting for a long while are expected to equilibrate to the point where there IS a bird's view. But this bird's view is then de facto confirmed only within the local group.

    It doesn't even make sense to compare the physical laws between two non-interacting groups of observers. I take Rovelli's stance here that comparison of observations only takes place by physical communication.

  21. Mar 24, 2009 #20



    You commented that in the slippery words, "evolution" refers to our understanding of physics rather than to time. IMO, the two can be treated on a similar footing. That's why I use similar expressions when I talk about time evolution in a physical system and about humanity's evolution of science. The two processes have similarities IMHO.

    A group of observers could be earth-based scientists, and no doubt the outcome of science is a consensus of knowledge, which in principle all scientists can confirm. There are processes in science by which this consensus is achieved and maintained.

    A group of observers could also be neighbouring atoms in a gas. Certainly, at this stage of the evolution of the universe, the laws of physics governing one atom are the same as the laws of physics governing another atom on the other side of the earth. There has been plenty of time for equilibration.

    But if in theory, we picture two parts of the universe which didn't have sufficient time to interact, then differences in the physical laws could persist.

    In most direct scenarios these issues seem insignificant. But in the general case, where the complexity of the observer is small relative to the large-scale isotropy of conditions, understanding the logic of the game taking place at that fundamental level - such as the Planck domain, for example - I think requires a brutally honest description of what we know. Because in a crazy game, the difference between what you think is true, to the best of your Planck-scale brain, and what really is true could, I think, be dramatic.

    The idea is that the inside view, from the point of view of, say, a Planck-scale system or thereabouts interacting with its environment, is not too dissimilar to an earth-based lab harbouring cosmological theories. It's related to the relative complexity of observer and observed. Here the comparison is clear if you compare the complexity of an earth-based measurement device relative to the entire universe, vs. a subatomic particle relative to its environment.

    The same reason why the logic that on one hand is appropriate for studying particle physics is inappropriate for doing statistics on the cosmological level is possibly why we don't see the logic of the standard model and how it is to be understood and merged with GR principles.

    I think the main utility of this perspective is that it may give a new way of analysing the problems at hand, by strongly emphasising the inside view. Because the expectations, and therefore actions, are based on whatever is known from the inside view.

    As you scale down an observer, the laws and actions are bound to become simpler and simpler. This is why the interactions we know of must fit in a hierarchy as well. This is normally called the high-energy limit, but I prefer to call it the low-complexity limit, since the observers participating in these interactions are simple and encode simple information. Thus the laws of physics, seen from the inside, must become almost trivial at some point. The question is then to understand how it scales back up, as the small observers combine to form more complex observers and thereby also distinguish more complex laws.

    In all this, I think the "inside/subjective INFORMATION" and how it evolves is the key perspective.

    Without the "inside perspective" you are typically led to the fine-tuning problems and initial-value problems of the universe. The evolution idea here supposedly solves those problems.

  22. Mar 25, 2009 #21

    I agree with you: yes, in the low-energy limit, which is at the same time a high-complexity limit, the number of laws is infinite: chemistry, medicine, economics, computer science.

    There are several reasons why such laws cannot be derived from first principles using the fundamental laws:

    A. (most common) Properties of complex systems cannot be described in the terms of the simpler underlying theory. For that reason, statements about these properties cannot be derived from the fundamental laws.

    Example: Fast food makes people fat.
    We cannot define a predicate IsFastfood(QM state of particles), so this statement cannot be verified, in principle, using the fundamental laws.

    B. Undecidable statements (in Gödel's sense). Let's skip them for now.

    So no theory can cover all aspects of what frogs can see. So a TOE (where E stands for EVERYTHING) is not possible. But let's talk strictly about the MFT (Most Fundamental Theory) - the theory of the high-energy limit.
  23. Mar 25, 2009 #22



    My opinion is that, strictly speaking, there is no CLEAR LINE between what you call fundamental laws and non-fundamental ones, in the sense that all frogs would agree upon where the line is drawn. This does not, however, bother me - but in your reasoning the existence of such a line seems to be implicit?

    In my abstraction they are treated on the same footing. The difference lies mainly in that the "fundamental laws", as you call them, are more stable - and often effectively fixed. But effectively fixed and fundamentally fixed are still different things. My quest is to describe the relation here, and the evolutionary relation between laws.

    But if we are talking about the fundamental laws in the sense of "as fundamental as possible", then on the human level I think we can make a lot of progress that will be "sufficiently fundamental" to be important. But I think the way of thinking, and the kind of model you expect, is important for efficient progress.

    About the high-energy limit (low-complexity limit): I think that a key to understanding WHY the laws of microphysics are what they are, why the parameters are so tuned, why there are 3 space dimensions, etc., needs an evolutionary perspective seen from the inside. Instead of asking how we see particle collisions in a detector, ask the hypothetical question of how one of the participants sees it. How does a particle being fired at a target inside an accelerator see the situation? Could it even be that, if we could glimpse that view, the logic of the laws of microphysics and QM would be clearer? I think so. But I also think that such an extreme inside view requires the more radical position that I advocate. What I would be after is what we might call the "naked interactions", as witnessed from the inside, rather than through accelerator data.

    So I still think the laws of the high-energy limit do depend on the scale of observation. And I don't like the renormalization stuff. Instead I think the better way is to scale the process by which the laws are emergent at the right scale. Going between two scales of observation is nowhere near as simple as renormalisation.

    I think renormalisation as we usually do it should be better integrated with the laws; the scaling process is much more involved than renormalisation admits. The proper perspective would IMHO involve the physical process by which an observer gains or loses mass.

    All of these ideas fit nicely into the evolutionary perspective. So I have several reasons to have confidence in it. It's not a random opinion of mine.
