Using AI in search of QG

  1. Apr 24, 2013 #1
    I'm somewhat awed by this paper: The emergence of complex behaviors through causal entropic forces.

    Now, I'm not a scientist or anything, so bear with me here:

    Shouldn't it be possible to use this approach to find solutions to Newton's equations, such as solving the three-body problem? And if that is possible, then why not Einstein's equations? Or even equilibrium solutions between the major physics theories, thus enlisting AI in the search for a QG theory? Is it impractical, or just downright impossible?
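
    For concreteness (assuming I've written it down right), by the three-body problem I mean finding trajectories for three gravitating bodies obeying

    $$ m_i \ddot{\vec{r}}_i = \sum_{j \neq i} \frac{G m_i m_j (\vec{r}_j - \vec{r}_i)}{|\vec{r}_j - \vec{r}_i|^3}, \qquad i = 1, 2, 3 $$

    which famously has no general closed-form solution.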
     
  3. Apr 24, 2013 #2
    It is a curious paper, but I don't see what it has to do with solving differential equations or searching for new physics. The fundamental physics is programmed into their simulations and is unalterable; the complex behaviours seem to emerge because they give some element of the simulation "autonomy", along with instructions to behave in whatever way maximises the expected future entropy of the rest of the system, or some such thing. But I only read the news article, not their original paper, so maybe there is more to it :).
     
  4. Apr 24, 2013 #3
    I may very well be on a wild goose chase here, but on an intuitive level I feel there is something to be had. I know intuition is a bad ally for doing science, but bear with me once again:

    If you watch the video that accompanies the paper from the phys.org article I mentioned, their approach can play Pong very well. And what is Pong other than an application of Newton's equations? "Pong" is a simplification of table tennis, and what equations do you use to model a game of table tennis?

    http://tabletennis.about.com/od/beginnersguide/a/physics_mathsTT.htm

    My point is that if their approach to AI can solve such a task optimally, why not solve other, similar tasks with the same approach?
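
    To make that point concrete: the entire "physics engine" of Pong is a couple of lines of Newtonian kinematics, just free motion at constant velocity plus elastic reflection at the walls. A toy sketch (my own illustration in Python, not code from the paper):

[code]
# Toy Pong ball: constant-velocity motion plus elastic wall bounces.
# Field size and units are arbitrary; the point is how little physics
# the game actually contains.

def step(pos, vel, dt=0.02, width=1.0, height=1.0):
    """Advance the ball one time step, reflecting velocity at the walls."""
    x, y = pos[0] + vel[0] * dt, pos[1] + vel[1] * dt
    vx, vy = vel
    if x < 0.0 or x > width:   # left/right wall: elastic bounce
        vx, x = -vx, min(max(x, 0.0), width)
    if y < 0.0 or y > height:  # top/bottom wall: elastic bounce
        vy, y = -vy, min(max(y, 0.0), height)
    return (x, y), (vx, vy)

pos, vel = (0.5, 0.5), (0.3, 0.7)
for _ in range(200):
    pos, vel = step(pos, vel)
[/code]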
     
  5. Apr 24, 2013 #4
    Well, sure, the Pong thing is a curious kind of constrained system with "magic" forces applied by the entropy-maximising agent. But the idea seems to be that the agents themselves simulate what effect their actions will have on the future evolution of the system, then choose the appropriate action to maximise the number of states that are accessible to them. This is wrapped up in their language about entropic forces and such, but as far as I can tell this is what is happening. So it isn't a method for discovering solutions to (in this case) Newton's laws -- indeed, the full space of solutions needs to be known already in order to compute this "entropic force", i.e. to pick the next action of the agent (it involves a path integral over the configuration space of the system).
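
    If I remember the paper right (so treat this as my paraphrase), the central quantity is the "causal path entropy" of a macrostate, and the "entropic force" is its gradient:

    $$ S_c(\mathbf{X}, \tau) = -k_B \int \Pr\big(x(t)\mid x(0)\big) \ln \Pr\big(x(t)\mid x(0)\big)\, \mathcal{D}x(t), \qquad \mathbf{F}(\mathbf{X}_0, \tau) = T_c\, \nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau)\Big|_{\mathbf{X}_0} $$

    In practice that gets estimated by Monte Carlo sampling of future paths. Here is a crude caricature of the idea (my own sketch in Python, not the authors' code): for each candidate action, sample noisy rollouts under known dynamics and pick the action whose futures are most spread out. Note that the dynamics function must be handed to the agent up front, which is exactly why this cannot discover new laws:

[code]
import random

# Caricature of a "causal entropic" agent: for each candidate action,
# sample noisy future rollouts under KNOWN dynamics and pick the action
# whose final states are most spread out (variance as a crude stand-in
# for path entropy).  Nothing here discovers the laws of motion.

def rollout(state, action, dynamics, horizon=50, noise=0.1):
    """One random future path; returns the final state."""
    s = dynamics(state, action)
    for _ in range(horizon - 1):
        s = dynamics(s, random.uniform(-noise, noise))
    return s

def spread(samples):
    """Variance of the sampled final states."""
    mean = sum(samples) / len(samples)
    return sum((x - mean) ** 2 for x in samples) / len(samples)

def entropic_action(state, actions, dynamics, n_samples=200):
    """Pick the action that maximises the diversity of sampled futures."""
    return max(actions,
               key=lambda a: spread([rollout(state, a, dynamics)
                                     for _ in range(n_samples)]))

# Toy 1D world: a particle nudged by the action, clamped by walls at +/-1.
def dynamics(x, a):
    return min(max(x + a, -1.0), 1.0)

print(entropic_action(0.9, actions=[-0.2, 0.0, 0.2], dynamics=dynamics))
# Near the right wall the agent "prefers" -0.2: moving away from the wall
# keeps more future states accessible.
[/code]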

    So it seems like a general method for generating interesting behaviours in all kinds of systems, but not for discovering the laws which govern those systems in the first place. Those need to be known to start with. If you want some of these agents to play Pong around a black hole, it seems like this general framework could handle it.

    They make some throwaway comments about how all this could somehow be useful for entropic gravity (à la Verlinde), but I have no idea what they mean by that. I suspect it may just be hype.
     
    Last edited: Apr 24, 2013
  6. Apr 24, 2013 #5
    Darn. The idea just looked so beautiful. I hope at least the paper will have some implications for AI in the long run then. Thanks for shooting my hopes down. ;)
     
  7. Apr 24, 2013 #6
    I expected my "brilliant" idea to work because on the intarwebs we are all equal.
     
  8. May 27, 2013 #7
    sbrothy,

    I am sort of awed by the paper too, but maybe I am reading too much into it, like you did.

    I do think it would have some applicability to AI, but I was looking at it more as a physical explanation of life and biological intelligence. Or perhaps even as an explanation of the fine-tuning of the universe.

    If the universe is "maximizing the overall diversity of accessible future paths of the world", then we would at least have the beginning of an overriding principle for why life, and why our intelligence, came about. DNA might itself be the foundation, capturing information and "maximizing future histories" on long time scales, while what we generally think of as intelligence (planning, rational behavior, etc.) "maximizes" on real-time scales. Culture and collective knowledge extend the real-time intelligence over generations. We would have a connecting path from thermodynamics to life and intelligence.
     
  9. May 27, 2013 #8

    mfb

    If I understand the paper correctly, you would have to propose that the universe is driven by some external agent to get this.
    I don't think this is a good idea.
     
  10. May 27, 2013 #9
    I think this might be implying that intelligence is not what it seems. In other words, intelligence does not require an external conscious agent, and the conscious agent (us) might not actually be as conscious as it believes.

    For example, look at this:

    http://www.nature.com/news/how-brainless-slime-molds-redefine-intelligence-1.11811

    Isn't this the same sort of "intelligent" behavior as in the article?
     
  11. May 27, 2013 #10
    Just to add one thing. In the study, of course, the logic of the behavior is supplied by an external agent; it is a computer program/simulation. However, that does not mean there must be an external agent. All it is saying is that strategies that "maximize the overall diversity of accessible future paths of the world" show behavior that appears to be intelligent.

    You may be reading too much into it yourself if you think an external agent is required.

    Of course, how or why this maximizing strategy might occur is another question, which is why I say this might be a beginning on explaining why life and our intelligence came about.
     
  12. May 27, 2013 #11

    MTd2


  13. May 28, 2013 #12
    MTd2

    Thanks. I had already read the article. Some of the comments on the criticism are interesting.
     
  14. May 28, 2013 #13

    mfb

    I get the impression that Gary Marcus (at newyorker.com) gets two important points wrong. The paper does not claim that:
    - all inanimate objects would try to maximize the options for future change, or
    - all intelligent behavior maximizes the options for future change in every subsystem (like keeping the choice between two fruits).
     