
A Limits of Machine Learning?

  1. Jul 27, 2017 #1
    From what I understand, machine learning is incredibly good at making predictions from data in a very automated/algorithmic way.

    But any inference that deals with ideas of causality is primarily a subject-matter concern, which relies mostly on judgment calls and intuition.

    So basically, would a machine learning algorithm need human-level intelligence and intuition to be able to do proper causal analysis?

    Here's an example where there might be issues.

    Say an ML algorithm finds that low socioeconomic status (SES) is associated with diabetes with a significant p-value. We know that diabetes is a biological phenomenon, so any possible (and this is a big if) causal connection between a non-biological variable such as low SES and diabetes must logically have intermediate steps between the two variables in the causal chain. It is these unknown intermediate steps that should probably be investigated in follow-up studies. We know (or intuit from prior knowledge plus domain knowledge) that low SES could lead to higher stress or an unhealthy diet, which are biological. So a significant p-value for SES suggests that maybe we should collect data on those missing variables and then redo the analysis with them in the model.
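    To make the mediation idea concrete, here is a minimal sketch with synthetic data. Everything here is invented for illustration: the SES → stress → diabetes structure, the effect sizes, and the variable names are assumptions, not real epidemiology. The point is just that the SES coefficient looks strong when the mediator is missing from the model and collapses toward zero once it is measured and included.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000

    # Invented causal chain for illustration: low SES -> high stress -> diabetes risk
    ses = rng.normal(size=n)                   # socioeconomic status (standardized)
    stress = -0.8 * ses + rng.normal(size=n)   # mediator driven by SES
    risk = 0.7 * stress + rng.normal(size=n)   # outcome driven only by the mediator

    def ols_slope(y, *cols):
        """Least-squares fit with an intercept; returns the slope on the first predictor."""
        X = np.column_stack([np.ones(len(y)), *cols])
        return np.linalg.lstsq(X, y, rcond=None)[0][1]

    b_ses_alone = ols_slope(risk, ses)             # strongly negative: looks "significant"
    b_ses_adjusted = ols_slope(risk, ses, stress)  # near zero once stress is in the model
    print(b_ses_alone, b_ses_adjusted)
    ```

    The catch, of course, is that the algorithm can only pull this trick once a human decided that stress was worth measuring in the first place.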

    But there's no way a learning algorithm can make any of those connections, because those deductions rest mostly on intuition and logic, which are not statistical. Not to mention: how would ML deal with confounders?
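    On the confounder question, the standard statistical answer is adjustment, but that only works for confounders someone thought to measure. A hypothetical sketch (the structure and coefficients below are invented): a shared cause Z makes X and Y look associated even though X has no effect on Y, and the spurious coefficient disappears only when Z is in the model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000

    # Invented confounding structure: Z causes both X and Y; X does not cause Y.
    z = rng.normal(size=n)
    x = 0.9 * z + rng.normal(size=n)
    y = 0.9 * z + rng.normal(size=n)

    def ols_slope(y_, *cols):
        """Least-squares fit with an intercept; returns the slope on the first predictor."""
        X = np.column_stack([np.ones(len(y_)), *cols])
        return np.linalg.lstsq(X, y_, rcond=None)[0][1]

    naive = ols_slope(y, x)        # spurious association induced by the shared cause Z
    adjusted = ols_slope(y, x, z)  # controlling for Z removes it
    print(naive, adjusted)
    ```

    If Z was never recorded, no amount of model fitting on the available columns reveals the problem, which is exactly the worry in the post.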
     
    Last edited: Jul 28, 2017
  3. Jul 28, 2017 #2

    Stephen Tashi

    User Avatar
    Science Advisor

    Why do you think human-level intelligence and intuition are capable of doing a proper causal analysis?

    Human-level intelligence hasn't reached a consensus about the definition of causality yet. If a "proper causal analysis" is a concept known only to a particular person's intuition, then I agree that it takes a human being to know such a thing.
     
  4. Jul 28, 2017 #3
    Humans can't do causal analysis perfectly, that's true. But we do have a better idea of what causality is, even if it's not perfectly defined. Humans also narrow things down much better through deductive reasoning. In the example I gave, the algorithm wouldn't be able to narrow down what those latent variables are, simply because they might not have been considered in the first place and hence are not in the data set. A human analyst would think, "Aha! Since SES is associated with diabetes, maybe low SES causes something (e.g. stress) that leads to diabetes, so in hindsight maybe we should collect data on that." So the results lead to new insights and avenues of investigation that were never thought of before. Essentially, it takes detective work to do causal inference.

    But if there were already data on every possible thing about diabetics (DNA, all biochemicals, etc.), and advanced learning algorithms that could stably fit models on millions of variables, then it is conceivable that an ML algorithm could get the answer blindly (or at least with subhuman intelligence) in one go, without logical deduction. I'm not sure whether this is mathematically possible, but if it is, then machines would beat humans at causal analysis.
     
  5. Jul 28, 2017 #4

    Dale

    Staff: Mentor

    So far, I am not convinced that machines are particularly good at learning. For example, an average human teenager can learn to drive a car with about 60 hours of experience. The machines that are learning the same task have millions of hours and are far from average. Similarly, even a below average human toddler can learn to speak any language with far less data than is available to computers attempting the same task.
     
  6. Jul 28, 2017 #5
    True. One of the biggest hindrances to AI is pattern recognition, which machines can do only in very well-controlled settings. The fact that they can't switch tasks well implies that a machine's intuition about things is basically nonexistent. However, they are phenomenal at rapid calculation, which means they can do conceptually easy but extensive tasks.
     
  7. Jul 29, 2017 #6

    Merlin3189

    User Avatar
    Gold Member

    I'm not sure whether your question/statement relates to the AI we have now, or to what can eventually be achieved. I agree that what we have now is very limited, but I believe that someone will eventually build AI that matches the best human brains. I suspect that AI will be able to exceed HI, simply because it can already beat us at some tasks, so just add those to the HI skills once it acquires them. (Though that is rather like us using computers, so maybe it still counts as just our equal.)

    My reason is simply that I am surrounded by machines doing all the things that AI can't do. For me the main goal of AI is not to replace these HI machines, but to understand how they work.

    As you say, the sort of thinking you esteem - intuition, logic(?), judgement, deduction, experience, guesswork, prejudice (I'm extending your list a bit!), etc. - may be outside the reach of current AI. So how are these machines (the humans) doing it? What is it that they can do, in concrete definable terms, that we haven't yet put into AI? Either we say that it is unknowable and psychologists are wasting their time, or our understanding of psychology will grow and we will incorporate it into AI.

    If one believes in some magical ether in the human brain - gods, human spirit, animus, life, ... ? - then obviously only machines endowed with this stuff can do these ill-defined things. Otherwise, what is the reason, other than we don't know what they are, that we can't incorporate these skills into AI machines?

    This is a psychological perspective, and I think most people in AI are more in the engineering camp. So I expect AI to continue to get better at specialised tasks, using algorithms not particularly related to HI. Progress in understanding HI may (?) usefully help get us over some of the bumps, but will we be that keen on AI systems when they start to display the same faults as HI systems? If driverless cars did get as good as human-driven ones, we'd still accept human error as, well, human, but computer error is another matter. How much better than HI will AI need to become?
     
  8. Jul 29, 2017 #7
    Whether humans will be able to create this type of thinking will likely depend on the actual complexity of those tasks compared with tasks currently executable by AI. For example, feeling an emotion might seem easier to a human than computing a complicated integral, but it's just the opposite. Computing an integral is just the adding up of many smaller parts; few concepts are needed. But an "emotion" or gut-feel intuition could involve much richer and more complex mathematical algorithms, with many interrelated concepts that we have not even thought of yet. It's possible that such ideas are so mathematically complex that even the smartest AI scientist/mathematician would never deduce the patterns, even though the patterns are happening in physical spacetime inside a biological machine. If all this is true, then I don't know whether humans will ever figure it out, because the upper limit of human brain capacity is evolutionarily limited by the size of the birth canal, and we probably need a mind far greater than Einstein's to really understand consciousness.
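    The "adding up of many smaller parts" can be shown literally. A midpoint Riemann sum needs one loop and essentially one concept (the function name and step count below are just illustrative choices):

    ```python
    import math

    def riemann_integral(f, a, b, n=100_000):
        """Approximate the integral of f over [a, b] as a sum of n small rectangles."""
        h = (b - a) / n
        # Evaluate f at the midpoint of each small subinterval and sum the areas.
        return h * sum(f(a + (i + 0.5) * h) for i in range(n))

    # Integral of sin(x) on [0, pi]; the exact value is 2.
    approx = riemann_integral(math.sin, 0.0, math.pi)
    print(approx)
    ```

    Conceptually trivial, computationally extensive - exactly the kind of task where machines already dominate.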

    For simple repetitive tasks, or tasks requiring simple low-level concepts, AI will likely surpass humans at all of them, given enough training data.
     
  9. Jul 29, 2017 #8

    Merlin3189

    User Avatar
    Gold Member

    Yes, that is a worry. It may be like turbulence: we'll get some ideas about it, extract some general principles, but maybe never get on top of the detail.
    My own feeling about the brain is that its basic elements are really quite simple, but, like the molecules of a fluid, when you get enough of them involved, even simple deterministic properties can lead to fundamentally unpredictable behaviour.
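    A standard toy illustration of that point is the logistic map: one line of deterministic arithmetic, yet two starting values differing by one part in a million end up on completely different trajectories within a few dozen steps.

    ```python
    def logistic_orbit(x0, r=4.0, steps=50):
        """Iterate the logistic map x -> r*x*(1-x), a simple deterministic rule."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    a = logistic_orbit(0.300000)
    b = logistic_orbit(0.300001)   # initial difference of only 1e-6

    # By the later steps the two orbits have diverged to macroscopic distances.
    late_gap = max(abs(x - y) for x, y in zip(a[30:], b[30:]))
    print(late_gap)
    ```

    Nothing here is random; the unpredictability comes purely from the repeated application of a simple rule, which is the fluid-molecule analogy in miniature.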
     
  10. Jul 29, 2017 #9

    atyy

    User Avatar
    Science Advisor

    I agree that machines have a long way to go before reaching human level performance. But is it true that they have access to the same data as humans? For example, in addition to the 60 hours of experience a teenager needs to learn to drive, that teen already spent 16 years acquiring other sorts of data while growing up. Similarly, the toddler is able to crawl about and interact in the real world, which is a means of data acquisition the computers don't have.
     
  11. Jul 29, 2017 #10

    Dale

    Staff: Mentor

    That is a good point, but I think that shows even more how amazing the human brain is at learning. It can take that general knowledge from walking and running and playing and use it to inform the ability to drive. I don't think data from walking would help a machine learn to drive.
     