
Energy perspective on deterministic intelligence

  1. Jul 19, 2017 #1
    This question is kind of out there, so I'm not sure where to put it. But it is physics, somewhat, given that information itself carries energy.

    So, is it physically possible for a computer program running off of logical statements alone to be as intelligent as a human? Because this takes energy, perhaps a lot of it.

    I ask this question because current artificial intelligence, as primitive as it is, has not been designed solely from logical if/else statements. AIs work around this problem by using data from the real world (billions to trillions of bits) just to train their programs, using probabilistic models, to do a conceptually simple task such as recognizing faces.

    This is in some sense less deterministic than just code running on if statements, at least subjectively. If it could have been run as a bunch of if statements, it could take so much code (i.e. if statements for every situation or combination of situations, an effectively infinite set of possibilities that need to be accounted for) that it would be impossible to cram it down to the size of the human brain without creating a black hole. Could this be the case? If so, then deterministic intelligence is impossible.

    AI could require too much data to handle if it were to do anything meaningful in terms of intelligence. So if it requires so much data just to qualify as sentient and intelligent that the information density would collapse into a black hole before reaching that point, then this puts a restriction not only on deterministic intelligence, but on our ability to create it with current science.

    Has this issue ever been addressed seriously in physics?
    Last edited: Jul 19, 2017
  3. Jul 19, 2017 #2


    Gold Member

    Do the silicon chips (hardware) work on some principle other than AND, NOR, OR, NOT decision making?
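For what it's worth, those gates are functionally complete: anything a chip computes can be reduced to them. A minimal sketch (the function names are just illustrative wrappers, not a real hardware API) building a one-bit half adder out of nothing but AND/OR/NOT:

```python
# Half adder built only from the gate primitives named above.
# (Function names are illustrative wrappers, not a real hardware API.)

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # XOR from AND/OR/NOT: (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Return (sum_bit, carry_bit) for one-bit inputs a, b."""
    return XOR(a, b), AND(a, b)
```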
  4. Jul 19, 2017 #3
    The human brain consumes about 20 watts. Physically there's nothing particularly interesting about the brain, other than that it has a myriad of connections we don't fully understand yet.
    In terms of emulation, modern speech recognition actually exceeds human performance on some benchmarks. It's hard to argue for a "special sauce" in the brain if that is already possible.
  5. Jul 19, 2017 #4
    If we tried to write a completely deterministic chess program on an idealized computer, using only logical statements for every possibility, it would turn into a black hole, since there are more possible games than there are quarks in the universe; see the Shannon number. The number of bits is simply too large to store.

    Human brains are even more complex. Far more.

    The amount of "code" it would take to make us run the way we do is staggering to store. There needs to be some resolution to this, stemming from physics.
    Last edited: Jul 19, 2017
  6. Jul 19, 2017 #5
    Stunning that those things only cost $20 at Amazon!

    I'm checking out of this discussion, sorry. This has nothing to do with physics or science, this is Ray Kurzweil-esque talk of mental singularities and black holes.
  7. Jul 19, 2017 #6
    Hmm, well, how many bits can fit in a 1 ft^3 volume of space? I'm sure that if we keep dumping information into that region of space, there is a point where it becomes a black hole, since information costs energy. This is standard physics and is well known. The question boils down to whether you can fit at least a Shannon number (about 10^120) of bits in a region that small without causing serious gravitational effects.

    I may have tried to extrapolate/speculate a bit too much, and I apologize for that. The basic idea: too much information results in black holes. How is that not science?
    Last edited: Jul 19, 2017
  8. Jul 19, 2017 #7
    My snide Amazon comment was actually a hint at your error of thinking. You are confusing programs with the state space they cover. Sure, the state space of chess is enormous and would create a black hole if described in a confined space. But neither chess programs nor brains describe the state space. They describe a way to *operate* on the state space. Huge difference, orders of magnitude, and also the solution to your conundrum. Brains essentially contain programs, not state spaces.
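The program/state-space distinction can be made concrete with a toy sketch (the "policy" below is deliberately trivial and invented for illustration): a few lines of code define an output for every one of the 2^256 (~10^77) possible states, without ever storing the state space.

```python
# A few lines of code "covering" an astronomically large state space.
# The space is never enumerated or stored; the program operates on any state.

def respond(state: int) -> int:
    """A deterministic toy policy over a 256-bit state space (2**256 states)."""
    assert 0 <= state < 2**256
    # The rule here (parity of set bits) is trivial on purpose: the point is
    # the size mismatch between ~5 lines of program and ~10**77 states.
    return bin(state).count("1") % 2
```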
  9. Jul 19, 2017 #8
    I know. Current algorithms use some variants of Markov chains (in conjunction with other mathematical techniques such as Monte Carlo and/or optimization) to traverse the state space without much problem, since they train on data obtained from the outside world, which is why a separate conditional isn't needed to account for every single combination.

    But can the brain's current iteration through the state space house all the possibilities, though? Say the brain runs on rewritable code, such that the code from a second ago is erased (freed from memory) and replaced by a new current one. Even though this would save energy, within that current program, at a particular instant in time, I would imagine the code still needs to be extremely long and complex just to handle the nearly infinite possibilities delivered from the outside world in the next instant.
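A minimal sketch of the Markov-chain idea mentioned above (the toy corpus and names are invented): a first-order chain learns its transitions from observed data, so no conditional has to be hand-written for every combination.

```python
import random
from collections import defaultdict

# First-order Markov chain "trained" on observed data: transitions are
# learned, not enumerated by hand-written conditionals.

def train(tokens):
    """Count observed transitions token -> next token."""
    model = defaultdict(list)
    for cur, nxt in zip(tokens, tokens[1:]):
        model[cur].append(nxt)
    return model

def step(model, token):
    """Sample a successor from the observed transitions."""
    return random.choice(model[token])

corpus = "the cat sat on the mat the cat ran".split()
model = train(corpus)
```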
  10. Jul 19, 2017 #9
    The question you probably need to ask yourself is, why do you weirdly bracket out the brain as this "super-physical" machine? The human brain is severely limited in its abilities, but somehow you elevate it into a different tier than speech recognition's neural nets, or Deep Blue's chess programs. That's belief, not science.
  11. Jul 19, 2017 #10
    Well, in order to get an AI program to recognize a face, you need huge expensive computers, large nets, and trillions of bits of training data. That is just to recognize the face, not even taking into account the fact that current AIs cannot get a meaningful sense of the face they saw. An AI can link the name to the face, but humans link names, emotional characteristics, opinions, and probably a bunch of subconscious processes that aren't even known yet. So it makes sense that humans are on a different tier.

    Currently, AIs are still less complex than an earthworm.

    The human brain is limited in its ability to compute things extremely fast, hence it cannot hope to beat a computer at, say, arithmetic. But try getting a computer to feel emotions. I would say that such code would be very complex, much more so than something as conceptually simple as arithmetic.
  12. Jul 19, 2017 #11
    No, but it additionally uses probabilistic models so that it doesn't have to exhaust all the possibilities. I'm saying that some probability needs to be involved on top of logical conditionals. You cannot make a good AI from logical conditionals alone without accounting for the probabilistic nature of the data it needs to train on.
  13. Jul 20, 2017 #12


    Science Advisor

    Brains are specialized hardware, while computers are general purpose hardware, which can emulate specialized hardware like brains. That emulation limits performance. Building more specialized hardware is a question of engineering and economics, but not prohibited by physics in general.
  14. Jul 20, 2017 #13
    One of the interesting effects that happens when people compare human to machine performance is that, probably because people feel threatened, the goalposts keep moving. There is no doubt that in 1770, when von Kempelen showed his Mechanical Turk, had someone passed around an Amazon Echo, the public would have concluded there was intelligence inside the device.

    But, because the introduction of these technologies has been so gradual, people have learned how to spot the subtle errors made, and that then becomes the new goalpost to overcome. "It didn't understand me in the loud bar" (even though many humans don't manage either), "I can hear the intonation is slightly off" (but apparently a thick human accent is fine).
    Also, because we built these machines, we know how they work, and that demystifies them. If one were to now equate that to the brain, it would mean humans are just biological machines, and a lot of people have a hard time with that thought. It's probably the 21st-century equivalent of the introduction of the heliocentric model, where something you previously held very dear no longer occupies the top spot.
  15. Jul 20, 2017 #14



    Staff: Mentor

    This just plain isn't true. Humans do not play chess that way; they do not calculate all possible moves when playing. Not even close. What is key for humans (and being worked on with computers) is eliminating avenues of investigation that don't lead anywhere useful. Humans are much better at that than computers, but since chess has a relatively low level of complexity, you can skip that with a computer and just beat the human mostly on brute force.
    Last edited: Jul 20, 2017
  16. Jul 20, 2017 #15


    2017 Award

    Staff: Mentor

    About (1 ft/(Planck length))^2, or about 10^68.
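For anyone who wants to reproduce that estimate, a rough back-of-envelope version (constants rounded; the holographic bound scales with the region's surface area in Planck units):

```python
# Back-of-envelope version of the estimate above: maximum information in a
# region scales with its area measured in Planck lengths squared.

PLANCK_LENGTH = 1.616e-35   # metres
FOOT = 0.3048               # metres

bits_bound = (FOOT / PLANCK_LENGTH) ** 2
# on the order of 10**68
```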

    To play chess optimally, you need an insane amount of computation time, but you just need a few kilobytes of memory. But no one does that.
    To play chess better than every human, a home computer is sufficient. Even a Raspberry Pi can play at Grandmaster strength, and requires less power than a human brain.

    In addition, computers make rapid progress. In 1997, it took a dedicated computer cluster to beat the world champion. In 2006 a simple home computer beat the world champion. Today humans are trivial to defeat for programs running on home computers.

    With Go, progress was even faster. In March 2016, AlphaGo, using a computer cluster, beat a 9-dan professional Go player for the first time, 4:1. Just a year later, the program beat the world champion 3:0 - while running on a single computer. It also beat a team of 5 professional Go players playing together, and played 60:0 against a variety of other professional players.
    Computers went from "with a lot of computing power the program has a chance to win against professional players" to "with moderate computing power it will win every single game against every human" in a single year.
  17. Jul 20, 2017 #16
    As a side note, I also *hate* the statement "computers can't do emotions". As a big Star Trek fan, it drove me nuts whenever they used that as a plot line.
    Emotions are actually pretty damn simple to emulate.
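To illustrate "simple to emulate" in the crudest possible way (this is a deliberately trivial toy, not a model of real affect; every name here is invented): a mood variable that shifts with events and decays toward neutral.

```python
# Deliberately crude toy "emotion" emulation: a single mood scalar that
# responds to events and decays toward neutral. Not a model of real affect.

class Agent:
    def __init__(self):
        self.mood = 0.0          # -1 (distressed) .. +1 (happy)

    def experience(self, event_valence):
        # Decay previous mood slightly, then shift toward the event.
        self.mood = max(-1.0, min(1.0, 0.8 * self.mood + 0.5 * event_valence))

    def expression(self):
        if self.mood > 0.2:
            return "smile"
        if self.mood < -0.2:
            return "frown"
        return "neutral"
```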
  18. Jul 20, 2017 #17
    I know that computers eliminate avenues they haven't investigated. But, to my knowledge, this works on prior information or on some information about the current form of the data.

    Deterministic algorithms like Dijkstra's or Prim's have to traverse each node to pick the best path, which is the surefire way to know that a path is best, on fixed structures that are unchanging. These are exhaustive. Search algorithms can eliminate paths so that the data structure does not need to be completely investigated, but that requires the data structure to be organized in a way that makes this possible (e.g. binary search only works when the vector is sorted initially, or when we have investigated the data and observed it is ascending). But when the search paths are constantly changing in time, say during chess when the best move depends on the opponent's move, the paths become far too astronomical in number.
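The binary-search precondition mentioned above can be sketched like this (using Python's standard `bisect`; the data set is made up): the sort is exactly what licenses throwing away half the candidates at each step.

```python
import bisect

# Binary search: only valid because the input is sorted, which is what
# lets each comparison eliminate half of the remaining candidates.

def binary_search(sorted_vals, target):
    """Return the index of target in sorted_vals, or -1 if absent."""
    i = bisect.bisect_left(sorted_vals, target)
    if i < len(sorted_vals) and sorted_vals[i] == target:
        return i
    return -1

data = sorted([9, 1, 7, 3, 5])   # the sort is the precondition
```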

    You can get around this issue by programming the AI to run on probabilistic and/or optimization algorithms using data from the real world. But this leads to two problems with regarding the AI as deterministic. One, data from the world is random, and as such the AI cannot be deterministic, since it depends too much on the outside world. Also, from a mathematical perspective, if optimization is used, the likelihood function stemming from the multidimensional data can be maximized, giving the path of highest likelihood, using something like Newton's method; but we all know that Newton's method depends highly on initial conditions and may never converge if the guess is bad. So this is another random element that makes the program not so deterministic. The likelihood function's shape also varies from one data set to the next, making it infeasible to have a mathematical catch-all method that works for every situation within the same problem. That is not even considering what happens when the problem itself is changed, say from chess to cooking.
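The Newton's-method sensitivity mentioned above is easy to demonstrate (the polynomial below is a standard textbook example, not from this thread): starting from -2 the iteration converges to the real root near -1.769, while starting from 0 it cycles between 0 and 1 forever.

```python
# Newton's method on f(x) = x**3 - 2*x + 2: convergence depends on the
# starting guess. From -2 it finds the real root; from 0 it cycles 0 -> 1 -> 0.

def newton(x, steps=50):
    f  = lambda x: x**3 - 2*x + 2
    df = lambda x: 3*x**2 - 2
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x
```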

    Basically, an AI is not deterministic because it actually relies on stochastic data from the outside world so that it can make choices with a good probability of success based on its experience, i.e. data.

    Human brains might or might not work this way. If they do, then they are definitely not deterministic. Also, the complexity of the human brain's program for decision making, even if it is guided by data from the external world, should be extremely high. Things are not binary for people. Say at a particular point in time a person wants to eat. It's not a simple command such as if (hungry){ eat food }. If someone is hungry, they will eat, but what they eat depends on so many factors: the location they are at, their mood up to that point, whether they have a meeting with a friend to eat, etc., which is basically past data.
    Since we are considering time fixed at the present, the person's decision in the future would be based on data from the past and observations of the present. But this boundary between the present and the future is huge, because the person needs to be able to make a choice in any of the vast number of possibilities that occur in the present. They can make a choice even in completely unexpected occurrences, because it's not as if people shut down and report "error".
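The "not a single conditional" point above can be sketched as a decision that is a function of several contextual inputs (all feature names below are invented for illustration):

```python
# Decision as a function of context, not one if-statement.
# Inputs and options are invented placeholders for illustration.

def choose_meal(hungry, location, mood, meeting_with_friend):
    if not hungry:
        return None
    if meeting_with_friend:
        return "restaurant"
    if location == "office":
        return "cafeteria" if mood >= 0 else "snack at desk"
    return "cook at home"
```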
  19. Jul 20, 2017 #18
    It's not that computers can't do emotions. They can, if they run the right program. Humans have emotions, and we are basically gears running off of the laws of science. However, I don't think emotions are simple. A computer program can be taught 1+1=2, because math is low-level logic. If an emotion were converted to its mathematical form, you would find it extremely complex.

    Graphical characteristics such as color, shape, and volume are more complex than arithmetic and take richer logic to describe in programming from first principles than simply adding numbers. Emotions are even higher than this in terms of mathematical complexity.
  20. Jul 20, 2017 #19


    2017 Award

    Staff: Mentor

    Alpha-beta pruning leads to the optimal result without traversing the full search tree, for example.
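    For reference, a compact sketch of alpha-beta pruning on a hypothetical toy tree (leaves are static evaluations, inner nodes are lists of moves): it returns the same value as exhaustive minimax while skipping subtrees that cannot affect the result.

```python
import math

# Alpha-beta pruning: identical result to full minimax, but whole subtrees
# are cut once they can no longer influence the value at the root.

def alphabeta(node, maximizing=True, alpha=-math.inf, beta=math.inf):
    if not isinstance(node, list):        # leaf: static evaluation
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:             # prune: MIN will never allow this
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:             # prune
                break
        return value

tree = [[3, 5], [2, 9], [0, 7]]           # depth-2 toy game tree
```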
    That doesn't make sense. I'm not sure you understand what "deterministic" means. It means if you run the AI program again, you'll get the same result. For an AI that is supposed to give the best result that is certainly what you want.
    Then don't use Newton if that is an issue.
    "You can find a bad algorithm" doesn't rule out the existence of good algorithms.
    Every part of it follows deterministic laws of physics. Fundamentally it is deterministic.
    So what? That is true for every relevant decision, both in animals and computers.
  21. Jul 20, 2017 #20
    I guess what I mean is that since AI uses data from the real world to learn, the character of the intelligence depends on that data. Maybe I was using the term deterministic too loosely. Basically, I'm saying that you can't build an AI to have the exact mind you intend it to have, because that mind depends on outside data, even if it produces good results. The external data has elements of randomness, which the AI uses to solve the given task, which means that the AI matches that probabilistic structure, unknown as it may be. That also means that to solve problems, the AI would have to evolve based on external input, meaning unexpected and unintended characteristics could arise in the search for solutions.

    And to make an AI achieve results without any negative consequences requires many more specifications, as all those negative results would need to be laid out. If we don't lay them out, because we humans haven't imagined them, then negative consequences that we didn't even know were negative could happen. Unless there is some sort of intuition built into the AI, which I imagine is extremely difficult to do.

    I do realize that it does follow the deterministic laws of physics.

    Yes, but the complexity goes way up for animals compared to current AI.
    Last edited: Jul 20, 2017
  22. Jul 20, 2017 #21


    2017 Award

    Staff: Mentor

    Sure. That applies to humans as well.