What are the Limitations of Machine Learning in Causal Analysis?

In summary, machine learning can make accurate predictions from data in an automated, algorithmic way, but it cannot by itself make causal inferences, since causal analysis relies heavily on human judgment and intuition. Human-level intelligence and intuition are needed for proper causal analysis: humans have a better sense of what causality is and can use deductive reasoning to narrow down potential causal factors. While machines may eventually surpass human intelligence, they currently struggle with tasks that require intuition and flexible pattern recognition. The ultimate goal of AI is not to replace human intelligence, but to better understand how humans think and to incorporate this understanding into machines.
  • #1
FallenApple
From what I understand, machine learning is incredibly good at making predictions from data in a very automated/algorithmic way.

But any inference that deals with ideas of causality is primarily a subject-matter concern, relying mostly on judgment calls and intuition.

So basically, a machine learning algorithm would need human level intelligence and intuition to be able to do proper causal analysis?

Here's an example where there might be issues.

Say an ML algorithm finds that low socioeconomic status is associated with diabetes with a significant p-value. We clearly know that diabetes is a biological phenomenon, and that any possible (this is a big if) causal connection between a non-biological variable such as low SES and diabetes must logically have intermediate steps between the two variables within the causal chain. It is these unknown intermediate steps that should probably be investigated in follow-up studies. We logically know (or intuit from prior knowledge plus domain knowledge) that low SES could lead to higher stress or an unhealthy diet, which are biological. So a significant p-value for SES indicates that maybe we should collect data on those missing variables, and then redo the analysis with them in the model.

But there's no way a learning algorithm can make any of those connections because those deductions are mostly intuition and logic, which are not statistical. Not to mention, how would ML look at confounders?
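To make the example concrete, here is a minimal simulation sketch (the variable names, effect sizes, and assumed mediation structure are all hypothetical illustrations, not real epidemiology). SES affects diabetes risk only through an unmeasured mediator (stress); a model that sees SES alone reports a strong association, and including the mediator makes the direct SES effect vanish, which is exactly the follow-up a human analyst would think to run:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical causal chain: low SES -> stress -> diabetes risk.
# SES has no direct biological effect in this toy model.
ses = rng.normal(size=n)                  # higher value = higher status
stress = -0.8 * ses + rng.normal(size=n)  # low SES raises stress
risk = 1.0 * stress + rng.normal(size=n)  # stress raises diabetes risk

# Naive model: regress risk on SES alone (what the pipeline "sees").
slope_naive = np.polyfit(ses, risk, 1)[0]

# Follow-up model: include the mediator a human thought to measure.
X = np.column_stack([np.ones(n), ses, stress])
beta = np.linalg.lstsq(X, risk, rcond=None)[0]

print(f"SES coefficient, mediator omitted:  {slope_naive:+.3f}")  # ~ -0.8
print(f"SES coefficient, mediator included: {beta[1]:+.3f}")      # ~  0.0
```

The algorithm happily reports the first coefficient; it takes domain knowledge to suspect that a mediator exists and to go and collect data on it.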
 
Last edited:
  • Like
Likes Demystifier
  • #2
FallenApple said:
So basically, a machine learning algorithm would need human level intelligence and intuition to be able to do proper causal analysis?

Why do you think human level intelligence and intuition is capable of doing a proper causal analysis?

Human-level intelligence hasn't reached a consensus about the definition of causality yet. If a "proper causal analysis" is a concept known only to a particular person's intuition, then I agree that it takes a human being to know such a thing.
 
  • Like
Likes timeuntotime, Dr. Courtney, Merlin3189 and 1 other person
  • #3
Stephen Tashi said:
Why do you think human level intelligence and intuition is capable of doing a proper causal analysis?

Human-level intelligence hasn't reached a consensus about the definition of causality yet. If a "proper causal analysis" is a concept known only to a particular person's intuition, then I agree that it takes a human being to know such a thing.

Humans can't do causal analysis perfectly, that's true. But we do have a better idea of what causality is, even if it's not perfectly defined. Also, humans narrow things down much better through deductive reasoning. In the example I gave, the algorithm wouldn't be able to narrow down what those latent variables are, simply because they might not have been considered in the first place and hence are not in the data set. A human analyst would think, "Aha! Since SES is associated with diabetes, maybe low SES causes something (e.g. stress) that leads to diabetes, so in hindsight maybe we should collect data on that." So the results lead to new insights and avenues of investigation that were never thought of before. Essentially, it takes detective work to do causal inference.

But if there is already data on every possible thing about diabetics (DNA, all biochemicals, etc.), and advanced learning algorithms that stably run models on millions of variables, then it is conceivable that an ML algorithm could get the answer blindly (or at least with subhuman intelligence) in one go, without logical deduction. I'm not sure whether this is mathematically possible, but if it is, then machines would beat humans at causal analysis.
 
  • Like
Likes Demystifier and atyy
  • #4
So far, I am not convinced that machines are particularly good at learning. For example, an average human teenager can learn to drive a car with about 60 hours of experience. The machines that are learning the same task have millions of hours and are far from average. Similarly, even a below average human toddler can learn to speak any language with far less data than is available to computers attempting the same task.
 
  • Like
Likes Klystron, Auto-Didact, Demystifier and 1 other person
  • #5
Dale said:
So far, I am not convinced that machines are particularly good at learning. For example, an average human teenager can learn to drive a car with about 60 hours of experience. The machines that are learning the same task have millions of hours and are far from average. Similarly, even a below average human toddler can learn to speak any language with far less data than is available to computers attempting the same task.

True. One of the biggest hindrances to AI is pattern recognition, which machines can manage only in very well controlled settings. The fact that they can't switch tasks well implies that machines' intuition about things is basically nonexistent. However, they are phenomenal at rapid calculation, which means they can handle conceptually easy but extensive tasks.
 
  • #6
I'm not sure whether your question/statement relates to the AI we have now, or to what can eventually be achieved. I agree that what we have now is very limited, but I believe that someone will eventually build AI that matches the best human brains. I suspect that AI will be able to exceed HI, simply because it can already beat us at some tasks, so just add those to the HI skills when it acquires them. (Though that is rather like us using computers, so maybe it still counts as just our equal.)

My reason is simply that I am surrounded by machines doing all the things that AI can't do. For me the main goal of AI is not to replace these HI machines, but to understand how they work.

As you say, the sort of thinking you esteem - intuition, logic(?), judgement, deduction, experience, guesswork, prejudice, (I'm extending your list a bit!) , etc - may be outside the reach of current AI. So how are these machines (the humans) doing it? What is it that they can do, in concrete definable terms, that we haven't yet put into AI? Either we say, that is unknowable and psychologists are wasting their time, or our understanding of psychology will grow and we will incorporate it into AI.

If one believes in some magical ether in the human brain - gods, human spirit, animus, life, ... ? - then obviously only machines endowed with this stuff can do these ill-defined things. Otherwise, what is the reason, other than we don't know what they are, that we can't incorporate these skills into AI machines?

This is a psychological perspective, and I think most people in AI are more in the engineering camp. So I expect AI to continue to get better at specialised tasks, using algorithms not particularly related to HI. Progress in HI may (?) usefully help get us over some of the bumps, but will we be that keen on AI systems when they start to display the same faults as HI systems? If driverless cars did get as good as human-driven ones, we'd still accept human error as, well, human, but computer error is another matter. How much better than HI will AI need to become?
 
  • Like
Likes jackwhirl
  • #7
Merlin3189 said:
As you say, the sort of thinking you esteem - intuition, logic(?), judgement, deduction, experience, guesswork, prejudice, (I'm extending your list a bit!) , etc - may be outside the reach of current AI. So how are these machines (the humans) doing it? What is it that they can do, in concrete definable terms, that we haven't yet put into AI? Either we say, that is unknowable and psychologists are wasting their time, or our understanding of psychology will grow and we will incorporate it into AI.

Whether humans will be able to create these types of thinking will likely depend on the actual complexity of those tasks compared to the tasks currently executable by AI. For example, feeling an emotion might seem easier to a human than computing a complicated integral, but it's just the opposite. Computing an integral is just the adding up of many smaller parts; few concepts are needed. But an "emotion" or gut-feel intuition could rest on much richer and more complex mathematical algorithms, with many interrelated concepts that we have not even thought of yet. It's possible that such ideas are so mathematically complex that even the smartest AI scientist/mathematician would never deduce the patterns, even though the patterns are happening in physical spacetime inside a biological machine. If all this is true, then I don't know whether humans will ever figure it out, because the upper limit of human brain capacity is evolutionarily limited by the size of the birth canal, and we probably need a mind far greater than Einstein's to really understand consciousness.

For simple repetitive tasks or tasks requiring simple low level concepts, AI will likely surpass humans at all of these, given enough training data.
 
  • Like
Likes Demystifier and Auto-Didact
  • #8
FallenApple said:
It's possible that such ideas are so mathematically complex that even the smartest AI scientist/mathematician would never deduce the patterns,
Yes, that is a worry. It may be like turbulence: we'll get some ideas about it, extract some general principles, but maybe never get on top of the detail.
My own feeling about the brain is that its basic elements are really quite simple, but, like the molecules of a fluid, when you get enough of them involved, even simple deterministic properties can lead to fundamentally unpredictable behaviour.
 
  • #9
Dale said:
So far, I am not convinced that machines are particularly good at learning. For example, an average human teenager can learn to drive a car with about 60 hours of experience. The machines that are learning the same task have millions of hours and are far from average. Similarly, even a below average human toddler can learn to speak any language with far less data than is available to computers attempting the same task.

I agree that machines have a long way to go before reaching human level performance. But is it true that they have access to the same data as humans? For example, in addition to the 60 hours of experience a teenager needs to learn to drive, that teen already spent 16 years acquiring other sorts of data while growing up. Similarly, the toddler is able to crawl about and interact in the real world, which is a means of data acquisition the computers don't have.
 
  • #10
atyy said:
For example, in addition to the 60 hours of experience a teenager needs to learn to drive, that teen already spent 16 years acquiring other sorts of data while growing up
That is a good point, but I think that shows even more how amazing the human brain is at learning. It can take that general knowledge from walking and running and playing and use it to inform the ability to drive. I don't think data from walking would help a machine learn to drive.
 
  • #11
Dale said:
The machines that are learning the same task have millions of hours and are far from average.
I am probably missing something trivial, but millions of hours means hundreds of years, and we haven't had such machines for that long. So how do they learn?
 
  • #12
atyy said:
I agree that machines have a long way to go before reaching human level performance. But is it true that they have access to the same data as humans? For example, in addition to the 60 hours of experience a teenager needs to learn to drive, that teen already spent 16 years acquiring other sorts of data while growing up. Similarly, the toddler is able to crawl about and interact in the real world, which is a means of data acquisition the computers don't have.
So perhaps we need an AI kindergarten, as proposed by my brother:
https://www.linkedin.com/pulse/ai-kindergarten-what-does-take-build-truly-machine-danko-nikolic
 
  • Like
Likes Auto-Didact and stoomart
  • #13
I don't know much about this topic, but this is partly related and also somewhat amusing (this is quite a recent video):

[embedded video]

The relevant part starts around 5 minutes or so. Though I think people mostly tend to think of programs versus humans only in the context of strategy games (talking about video games).

For arcade games, for example, there are already easy TAS (tool-assisted) runs for lots of games. But they are hardly any fun to watch (except to see the limits) compared to human replays/videos, because the fun part is in the experience of hand-eye coordination, visual cognition, mechanical perfection, etc. Judgement is just one part of playing.

Something similar applies to FPS and many other more action-related genres.

In strategy games judgement seems to play a bigger part (compared to other factors), so it is more amusing to see a program playing very well. Fog of war (in RTS or derivative genres) also tends to add a large element of imperfect information (and it is fun to see how a program handles that).

Demystifier said:
I am probably missing something trivial, but millions of hours means hundreds of years. But we do not have such machines for that long. So how do they learn?
I think (just from a layman's perspective) that's probably because, with raw computational power, they can replay the same scenarios over and over in a very short period of time.
 
  • #14
FallenApple said:
From what I understand, machine learning is incredibly good at making predictions from data in a very automated/algorithmic way.
But there's no way a learning algorithm can make any of those connections because those deductions are mostly intuition and logic, which are not statistical. Not to mention, how would ML look at confounders?

I think you're missing the point of how machine learning is increasingly being done today. Many (perhaps most) machine learning tools today are not algorithmic in nature. They use neural networks configured in ways similar to the human brain, and then train these networks with learning sets, just as a human is trained to recognize patterns. Even the (human) designer of the neural network doesn't know how the machine will respond to a given situation. Given this, I don't see why these artificial neural networks cannot match or eventually exceed human capability. Indeed, I think Google's facial recognition software already exceeds human capability. Granted, this is in a controlled environment, but given time and the increasing complexity of the networks (and increasing input from the environment), I think you will see these machines able to do anything a human mind can do.
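As a toy illustration of the "trained, not programmed" point (the task, learning rate, and epoch count are arbitrary choices made for this sketch), a single perceptron can learn logical OR purely from labeled examples:

```python
import random

random.seed(0)

# Training set for logical OR: the behaviour is never programmed
# explicitly; it is learned from labeled examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [random.uniform(-1, 1) for _ in range(2)]  # random initial weights
b = random.uniform(-1, 1)                      # random initial bias

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):          # classic perceptron learning rule
    for x, y in data:
        err = y - predict(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        b += 0.1 * err

print([(x, predict(x)) for x, _ in data])  # OR, learned rather than coded
```

Even in this toy case the final weights are nothing the designer wrote down; scaled up to millions of weights, the point about the designer not knowing how the network will respond to a given situation becomes clear.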
 
  • Like
Likes BWV and FactChecker
  • #16
Hi there,
Here is Danko. Hello everyone. Writing an article is maybe a bit too much for me right now, but I would be glad to answer questions. Here are two comments on what has been said before:
- Does AI have access to the same data as humans? In my opinion, at one important level the answer is NO. This is the knowledge we have stored in our genes. We should think of genes as a small but very extensively trained (over millions if not billions of years) machine-learning component that assists every toddler's learning. Without having in our genes the knowledge of what to learn and how to learn it (we usually refer to these as instincts), a toddler could not do any of its intelligence magic. And this is the key problem: how do we provide an AI with the millions-of-years-of-experience wisdom that we are born with? How do we provide an AI with the data that our ancestors used throughout evolution to get us the genes that we have? (Not to mention the computational power needed to work an AI's way through these data.)

- Can today's artificial neural networks eventually match or exceed human capabilities? I have written an article explicitly dealing with that problem. According to my calculation, the answer is NO. The good news is that I also propose an organisation of AI that possibly could do it. One can download the paper here:

http://www.ijac.net/EN/article/downloadArticleFile.do?attachType=PDF&id=1958

I know that the paper is technical and scientific and that people would prefer a digest. Maybe you can just take a look at the abstract.

I hope that this is useful.

Danko
 
  • Like
Likes timeuntotime, stoomart, StoneTemplePython and 2 others
  • #17
Auto-Didact said:
Can you perhaps get him to come here and write an Insight article or something? :)
As you can see above, I just did it. :smile:
 
  • Like
Likes Auto-Didact
  • #18
Demystifier said:
As you can see above, I just did it. :smile:

Today is like Christmas! This is almost as exciting as was meeting Roger Penrose in person last year :D

Danko Nikolic said:
Hi there,
Here is Danko. Hello everyone. Writing an article is maybe a bit too much for me right now, but I would be glad to answer questions. Here are two comments on what has been said before:
- Does AI have access to the same data as humans? In my opinion, at one important level the answer is NO. This is the knowledge we have stored in our genes. We should think of genes as a small but very extensively trained (over millions if not billions of years) machine-learning component that assists every toddler's learning. Without having in our genes the knowledge of what to learn and how to learn it (we usually refer to these as instincts), a toddler could not do any of its intelligence magic. And this is the key problem: how do we provide an AI with the millions-of-years-of-experience wisdom that we are born with? How do we provide an AI with the data that our ancestors used throughout evolution to get us the genes that we have? (Not to mention the computational power needed to work an AI's way through these data.)

- Can today's artificial neural networks eventually match or exceed human capabilities? I have written an article explicitly dealing with that problem. According to my calculation, the answer is NO. The good news is that I also propose an organisation of AI that possibly could do it. One can download the paper here:

http://www.ijac.net/EN/article/downloadArticleFile.do?attachType=PDF&id=1958

I know that the paper is technical and scientific and that people would prefer a digest. Maybe you can just take a look at the abstract.

I hope that this is useful.

Danko

Honoured to make your acquaintance. I'm at work currently so I cannot spend too much time reading the paper, but I will do so asap.

In the meantime, I was hoping that you could elaborate on the dynamical-systems description of practopoiesis, specifically the idea that thinking is akin to changing the parameters of such a system and that new thoughts occur during phase transitions, i.e. during bifurcations of this system.

Did you happen to have some specific equations and parameters in mind, and how would these be changed physically? And for humans/animals, should we be thinking of these as simple attractors detectable through analysis, or more like high-dimensional attractors, perhaps akin to some Kuramoto-type network model?
 
  • #19
Auto-Didact said:
In the meantime, I was hoping that you could elaborate on the dynamical-systems description of practopoiesis, specifically the idea that thinking is akin to changing the parameters of such a system and that new thoughts occur during phase transitions, i.e. during bifurcations of this system.

Did you happen to have some specific equations and parameters in mind, and how would these be changed physically? And for humans/animals, should we be thinking of these as simple attractors detectable through analysis, or more like high-dimensional attractors, perhaps akin to some Kuramoto-type network model?

I haven't made an interpretation based on dynamical systems. One could, but I was never sure that this would be particularly insightful. Maybe it would, but one would have to try first. Instead, I focused on a cybernetic/control-theory interpretation.

A dynamical system would need to be described by stochastic differential equations.

Still, intuitively, an interpretation of the practopoietic hierarchy (traverses) based on dynamical systems would be quite simple to understand, I think. There is nothing especially complicated about it, at least not in principle. You just need to imagine two dynamical systems, one that operates fast (F, say, updated every second) and one that operates slowly (S, say, updated every five hours). Now we need the following conditions:
- The value of at least one parameter of F is decided/adjusted by S, but S cannot affect the dynamics of F in any other way.
- In contrast, F cannot affect the parameters of S, but the accumulated results of the dynamics of F become part of the dynamics of S. Thus the dynamics of F affect the dynamics of S.

This is all.

The two are asymmetrically coupled: in one direction they interact through the parameters of a dynamical system (S -> F); in the other direction they interact through dynamics (F -> S). This results in the practopoietic loop of causation: http://www.danko-nikolic.com/practopoietic-cycle-loop-of-causation/

It may be difficult to make the mental click needed to understand what I am talking about, but once the click occurs, it is very easy to think about these systems. There is none of the immense complexity that often occurs with dynamical systems, precisely because the two operate at different speeds. So, whenever you think about the fast one, you can neglect the operations of the slow one, and when you think about the slow one, you can approximate the operations of the fast one with some simple function (mean + noise).
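A minimal numerical sketch of such a pair, with all update rules, rates, and constants invented purely for illustration (nothing here is taken from the paper):

```python
import random

random.seed(1)

# Fast system F: state x with one parameter a, which only S may change.
# Slow system S: adjusts a based on F's accumulated output, nothing else.
a = 0.2        # parameter of F, owned and adjusted by S
x = 0.0        # state of F
target = 2.0   # hypothetical "goal": S steers F's long-run mean toward this

for slow_step in range(20):            # one S update per outer iteration
    accumulated = 0.0
    for _ in range(1000):              # F runs 1000 fast steps in between
        # F's dynamics: an AR(1) process whose stationary mean is
        # 1 / (1 - a), plus environmental noise.
        x = a * x + 1.0 + random.gauss(0.0, 0.5)
        accumulated += x
    mean_x = accumulated / 1000.0      # S sees only F's accumulated dynamics
    a += 0.05 * (target - mean_x)      # S -> F: adjust F's parameter
    a = max(0.0, min(0.9, a))          # clamp to keep F stable
    print(f"slow step {slow_step:2d}: a = {a:.3f}  mean(x) = {mean_x:.3f}")
```

Note how, when reasoning about the fast loop, a is effectively a constant, while from the slow loop's point of view F is just its accumulated mean plus noise, which is exactly the simplification described above.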

Kuramoto-type networks are, in my understanding, not particularly relevant here.

Their interaction is such that S induces bifurcations of F, but not vice versa. In contrast, the dynamics of F, accumulated over time, form part of the dynamics of S.

What is critical is that S has knowledge of when and in which direction to change the parameters of F. To discuss that further, we have to define certain "goals" or "target values" that S and F are trying to achieve. And this leads us to attractors. We can say that S has an attractor state, much like any regulator.

As for particular equations, you can use any equations you want. This is completely unconstrained, as long as they satisfy the conditions mentioned above.

I hope this is understandable.
Danko
 
  • Like
Likes Auto-Didact and Demystifier
  • #20
Is the distinction of algorithm and function of "some relevance" in this topic (which I don't know anything about) in general?

A quite simple way to describe it: suppose we have a function that takes an array as input and outputs a sorted array. The function is unique, but we distinguish between various "algorithms"/"methods" for computing it.
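As a minimal illustration of the distinction: the two routines below compute the same function (array in, sorted array out) but are entirely different algorithms, with different internal steps and costs:

```python
def insertion_sort(xs):
    """O(n^2): builds the result by repeated insertion."""
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    """O(n log n): recursively splits the input and merges the halves."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

data = [5, 2, 9, 1, 5]
# Same function, two different algorithms.
assert insertion_sort(data) == merge_sort(data) == sorted(data)
```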

I am not sure that this distinction can be made fully mathematically rigorous (possibly in same way as "efficient computation" or "natural examples" etc. "surely" can't be made mathematically rigorous but perhaps could be defined in practically useful ways).

Speaking quite generically, I am thinking along the lines that, while stimulus and response are important parts of interaction with the environment, the internal representation of information is possibly also of some importance (and this seems to be related to the function/algorithm distinction).
 
  • #21
Danko Nikolic said:
A dynamical system would need to be described by stochastic differential equations.
Why stochastic? Why not deterministic?
 
  • #22
Demystifier said:
Why stochastic? Why not deterministic?

Because the interaction is between an organism and its environment. A real environment is unpredictable; you never get into an identical situation twice, and the environment never responds twice in the same way to your actions. Therefore, from the perspective of differential equations the interaction has a considerable stochastic component.
 
  • Like
Likes Auto-Didact and Demystifier
  • #23
Danko Nikolic said:
Because the interaction is between an organism and its environment. A real environment is unpredictable; you never get into an identical situation twice, and the environment never responds twice in the same way to your actions. Therefore, from the perspective of differential equations the interaction has a considerable stochastic component.
OK, but from a dynamical-systems perspective, unpredictable behavior of the environment can be a result of deterministic chaos. In the end, there may not be much apparent difference between stochastic and chaotic modeling of the environment; the former may be simpler to implement in a computer simulation, but the latter seems more realistic from a fundamental physical point of view.
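As a standard textbook illustration of this point (the logistic map; nothing specific to the systems discussed in this thread): a fully deterministic rule whose output looks like noise, with two trajectories started a hair apart diverging completely:

```python
def logistic(x, r=4.0):
    # Deterministic update rule; no randomness anywhere.
    return r * x * (1.0 - x)

a, b = 0.300000, 0.300001  # two almost identical initial conditions
for step in range(1, 26):
    a, b = logistic(a), logistic(b)
    if step % 5 == 0:
        print(f"step {step:2d}: a = {a:.6f}  b = {b:.6f}  |a-b| = {abs(a - b):.6f}")
```

After a couple of dozen steps the two trajectories bear no resemblance to each other, so to an observer the "environment" looks stochastic even though it is strictly deterministic.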

Anyway, this all looks like a red herring, as I agree with you that the dynamical-systems perspective is not very useful here.
 
Last edited:
  • #24
I just read the paper you linked earlier. The part about how slowly learned genetic policies enable networks themselves to gain knowledge about fast adaptation policies, so that the operation of the adaptation policies directly provides stimuli with their best interpretation (in other words, an actual explanation of what 'understanding' may entail), simply blew me away.

This actually answers many long-standing philosophical questions in the philosophy of mind, including what qualia may be.
Danko Nikolic said:
I haven't made an interpretation based on dynamical systems. One could but I was never sure that this would be particularly insightful. Maybe it would, but one would have to try first. Instead, I focused on cybernetic/control-theory interpretation.

I am sure that I read somewhere a dynamical-systems description/metaphor of practopoiesis; it was in fact what caused the click with my own (nowhere near as developed) ideas about cognitive states being represented as points in some state space.
Danko Nikolic said:
Their interaction is such that S induces bifurcations of F, but not vice versa. In contrast, the dynamics of F accumulated over time, makes a part of the dynamics of S.

Critical is that S has knowledge on when and in which direction to change the parameters of F. And to discuss that further, we have to define certain "goals" or "target values" that S and F are trying to achieve. And this leads us to attractors. We can say that S has an attractor state, much like any regulator.

The existence of such attractors is precisely why I opt for a dynamical-systems description. If, as in a regular cognitive setting, many different aspects of some perceived phenomenon are evaluated on the same time scales, i.e. different network policies are executed in parallel, this implies that these multiple outputs together form some attractor, and that similar behavior may be evoked by activating all or many of these network policies as if just a few, or even one, of them had been activated.
Demystifier said:
OK, but from dynamical-systems perspective, unpredictable behavior of the environment can be a result of deterministic chaos. In the end, there may be no much apparent difference between stochastic and chaotic modeling of the environment, and the former may be simpler to implement in a computer simulation, but the latter seems more realistic from the fundamental physical point of view.

Anyway, this all looks like a red herring, as I agree with you that dynamical-systems perspective is not very useful here.

I disagree; the usefulness of the dynamical-systems perspective depends entirely on what a theory is aiming to explain and at what level. The perspective enables the rapid creation of experimentally checkable hypotheses which may otherwise not be apparent at all to those thinking directly about some naturally occurring system, or to those using statistics to do their hypothesis testing for them. This can happen completely outside the context of the original theory, in this case AI.

Here are some examples: "the dynamics in the rewiring of networks into those most conducive to abductive reasoning either will or will not exhibit small-world characteristics". Or, "the equi-level synchronised activation of different network policies implies that synchronised chaos may exist across many cognitive states". Or even, "a sudden discontinuous increase in cognitive capacities is to be expected when comparing species which have evolved genetic policies capable of creating small-world neural networks with species without such policies". Such "insights" are far more easily generated than if one were to rely on logical deduction alone, and once envisioned they naturally raise tonnes more questions, all of which definitely seem checkable in some way.

Moreover, evidence can be, and often already has been, gained from other researchers, both top-down and bottom-up, who were not looking for such patterns, meaning we can rapidly falsify models in this way. We can even use the perspective to tie together many different sciences in novel ways, e.g. https://crl.ucsd.edu/~elman/Papers/dynamics/dynamics.html, also leading directly to new results in completely orthogonal directions, such as toward the subject we are actually discussing here.

As you yourself say, many spontaneous behaviors in the environment, and behaviors induced by the environment on some systems, need not be strictly stochastic, given deterministic chaos. The nice thing is that extremely complicated but typical behavior will tend to fall on an attractor. If the goal is identifying and characterizing such possibly immensely complicated attractors, I don't see how one would do that without phase-space reconstruction and/or other tools inherent to a dynamical-systems perspective.
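For readers unfamiliar with the term: phase-space reconstruction (Takens delay embedding) rebuilds an attractor from a single observed time series by plotting the signal against delayed copies of itself. A bare-bones sketch, reusing the logistic map from earlier in the thread as a stand-in "environment" (the embedding dimension and delay are hypothetical choices one would normally tune):

```python
import numpy as np

# Observe a single scalar time series from an unknown deterministic system
# (here: the chaotic logistic map standing in for the "environment").
x = np.empty(5000)
x[0] = 0.3
for t in range(4999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

def delay_embed(series, m=2, tau=1):
    """Takens delay embedding: map the series into R^m via delayed copies."""
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(m)])

points = delay_embed(x, m=2, tau=1)

# For the logistic map the reconstructed attractor is the parabola
# x[t+1] = 4 x[t] (1 - x[t]); check that the embedded points fall on it.
residual = np.max(np.abs(points[:, 1] - 4.0 * points[:, 0] * (1.0 - points[:, 0])))
print(f"{len(points)} embedded points, max distance from parabola: {residual:.2e}")
```

A genuinely stochastic series would fill the plane instead of collapsing onto a one-dimensional curve; that contrast is what makes reconstruction useful for telling chaos from noise.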

Lastly, on a more abstract level, it seems all complexity-science subjects, such as cybernetics, chaos, (nonlinear) dynamical systems, network theory and so on, share an underlying mathematical backbone which, as a field of mathematics, is still a work in progress, and perhaps one extremely relevant to physics. Many great mathematicians and physicists, both historical and contemporary (e.g. Benoit Mandelbrot, Floris Takens, John Baez, Steven Strogatz), have made this point, and I tend to agree with them.
 
  • #25
[embedded video]
 
  • #26
@Danko Nikolic:
I just reread your 2015 practopoiesis paper:
Regarding prediction #3, did you ever find a physiological mechanism underlying ideatheca? If not, Craddock et al. 2012 gives a specific physiological mechanism in the form of LTP-activated enzymes encoding information directly onto the neuronal cytoskeleton, i.e. CaMKII encoding information on microtubules (MTs).

Seeing that, after formation, neuronal MTs remain stable, i.e. they don't depolymerize like non-neuronal MTs, information encoded on them would remain stable throughout adulthood, providing a means of stable long-term memory formation that can last years or even a lifetime. Moreover, in the last few years it has become known that loss of neuronal cytoskeletal structure is associated with memory loss in Alzheimer's disease, even leading to experiments with MT-stabilizing agents (taxanes, originally chemotherapeutic agents) in both Alzheimer mouse models and patients. For more information, see this recent review on the subject.
 
  • #27
Dale said:
So far, I am not convinced that machines are particularly good at learning. For example, an average human teenager can learn to drive a car with about 60 hours of experience. The machines that are learning the same task have millions of hours and are far from average.

The teen with 60 hours of experience is also far from average.
 
  • #28
The limit of machine learning is that it is still too restricted to certain kinds of problems. For example, we know how to solve optimization problems and we know how to solve classification and clustering problems. But humans classify, cluster, optimize and utilize far more advanced tricks than any algorithm is capable of performing, and they do it all day every day over decades.

In terms of machine learning, the brain is analogous to a complex system of deep spiking neural networks with recurrences and convolutions. These networks form functional modules but also communicate with other modules, a phenomenon that probably gives rise to the flexibility of our cognition and lets us "think outside the box", playing around with symbols and ideas in ways that would not otherwise be possible.

It is the ultimate goal of the machine learning program to develop such a flexible learning algorithm, but I doubt it can ever be done without a complex-systems approach. Marvin Minsky warned of the deceptive idea of peeking inside the brain to find a "mind" responsible for intelligence, when every component of the brain is itself unintelligent and the mind is just a holistic property of the system.
 
  • Like
Likes jerromyjon
  • #29
Krunchyman said:
It is the ultimate goal of the machine learning program to develop such a flexible algorithm for learning, but I doubt it can ever be done without a complex system approach
Krunchyman said:
But humans classify, cluster, optimize and utilize far more advanced tricks than any algorithm is capable of performing, and they do it all day every day over decades.
And that with a minimal portion of the genetic code, which does far more than just "execute commands": it also has a portion encoding the machinery needed to construct and initiate it... I wonder what that adds up to in bits of DNA compared to source code and data, not that it is comparing apples to apples.
 
  • #30
A couple of computer technologists that I know are skeptical about AI largely because machine learning often requires huge amounts of data. That said, they do believe that most tasks currently done by humans will someday be done by computerized machines. This will lead to a crisis of employment when human labor becomes obsolete. This does not mean that machine thinking will be like human thinking. But it does mean that individual tasks will be mechanized.

More broadly, one might ask what sort of machine the human brain is. And, even more broadly, what sorts of models of thinking are there? A nerve cell is just an on/off switch with a threshold trigger and is easily modeled on a computer. Also, simple nervous systems, e.g. the nervous systems of some species of clams, have been completely modeled by finite state machines. This sort of consideration suggests that the human mind is an extremely complex finite state machine. Some have suggested that the brain may also use quantum computing. Whether or not this is true, quantum computing seems to be another possible model.
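To illustrate how simple the basic element is, here is a classic McCulloch-Pitts threshold unit (the weights and threshold below are hypothetical, chosen so that the unit computes logical AND):

```python
def neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of the inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With weights (1, 1) and threshold 2, the unit computes logical AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron((a, b), (1, 1), 2))
```

Networks of such threshold units are formally equivalent to finite state machines, which is the sense in which the mind could be viewed as an extremely complex finite state machine.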
 
Last edited:
  • #31
FallenApple said:
But there's no way a learning algorithm can make any of those connections because those deductions are mostly intuition and logic, which are not statistical. Not to mention, how would ML look at confounders?
I think that you are seriously underestimating the variety and seriousness of the research being done. There are already symbolic logic manipulators and theorem provers in practical applications and in general use. There are other research efforts that manipulate relationships, looking for fundamental theorems. It is a misconception to think that the state of the art of machine learning is limited to data analysis.
 
  • #32
FallenApple said:
From what I understand, machine learning is incredibly good at making predictions from data in a very automated/algorithmic way.
This is a meaningless generalization. If the data is undersampled, inaccurately labeled (which is most of the time), or complex (e.g. one sample is hundreds of gigabytes in size), or requires high accuracy, machine learning is an atrocious approach. The majority of problems have these downsides.

Also, the effectiveness depends not only on these generalizations but also on the method and the specific problem. Clustering is hugely inaccurate and heavily dependent on human intervention ("What is a cluster? What is similarity?"), making it very vulnerable to the "high accuracy" weakness, since constructing the similarity model requires either a vast amount of data you don't have or very good human intuition. Classification can be much easier, since it is not usually constrained in the same way.

Finally, machine learning does not make predictions from data in an automated/algorithmic way; it makes models, which require some form of assumptions, in an automated/algorithmic way, and these models make the predictions. This is more than a trite observation. Consider clustering: for typical methods (e.g. k-means), you are deciding what function determines similarity (in this case, d-dimensional Euclidean distance). The assumption that d-dimensional Euclidean distance captures similarity is usually nonsense, and in my experience is usually not checked in any meaningful way.
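A bare-bones k-means sketch, just to make the buried assumption visible: the entire notion of "similarity" lives in one swappable distance function (the toy data and parameters below are invented for illustration):

```python
import random

def euclidean(p, q):
    # The modeling assumption under discussion: similarity = Euclidean distance.
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def kmeans(points, k, dist=euclidean, iters=50, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center under `dist`.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist(p, centers[c]))
            clusters[i].append(p)
        # Update step: each center moves to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers, clusters

points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),  # toy cluster near the origin
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]  # toy cluster near (5, 5)
centers, clusters = kmeans(points, k=2)
print(centers)
```

Swapping `dist` for something non-Euclidean also quietly breaks the mean-update step, since the mean minimizes only squared Euclidean distance; that is exactly the kind of unexamined assumption being criticized here.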

The only advantage of ML is not prediction accuracy; it is automation. Thus, if I run a company or an experiment that generates large amounts of data, ML is a useful way to write programs that build models using this data, sometimes updating automatically. However, you still have to figure out how to model the data. In principle you can build every part of the model directly from data; for instance, a neural network can be fitted to compute similarity instead of a Euclidean norm, and a different clustering algorithm can be used. The difficulty is that you will essentially never have the data or the computational resources to do this; it's like trying to simulate an integrated circuit using density functional theory to model all of the electronics from the atoms up (i.e. stupid). You have to truncate and make modeling assumptions somewhere; they appear even in how you label the data and train the NN.
 
  • #33
FallenApple said:
So basically, a machine learning algorithm would need human level intelligence and intuition to be able to do proper causal analysis?
Machine learning cannot "pull a rabbit out of a hat"; there is no magic. A neural net has to be fine-tuned to be able to make any reasonable associations from large training data sets. I don't know for sure, but I don't think there are any large data sets of causal analyses for it to "learn" from, and even if there are, machines aren't like humans, who can take a data set and expand upon it to make sense of unfamiliar connections.
 
  • #34
This thread is diverging from AI and going into too much personal opinion, so I am moving it to General Discussion. Why? Because there are some good posts here mixed with less useful opinion, and we do not need to throttle people for lack of scientific poise if the thread lives in GD.

Thread moved.
 
  • #35
lavinia said:
A couple of computer technologists that I know are skeptical about AI largely because machine learning often requires huge amounts of data.
There's a new version of AlphaGo which seems to use minimal input data. To quote from https://deepmind.com/blog/alphago-zero-learning-scratch/:
Deepmind said:
Previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go. AlphaGo Zero skips this step and learns to play simply by playing games against itself, starting from completely random play.
 
  • Like
Likes jerromyjon

1. What is the definition of "causal analysis" in the context of machine learning?

Causal analysis in the context of machine learning refers to the process of determining cause-and-effect relationships between variables in a dataset. It involves identifying which variables have a direct impact on the outcome and which are simply correlated.

2. What are the main limitations of using machine learning for causal analysis?

One of the main limitations of using machine learning for causal analysis is the potential for biased or inaccurate results. This can occur due to biased data, flawed assumptions, or inadequate model selection. Additionally, machine learning algorithms are not able to determine causality on their own and require human interpretation and understanding of the data.

3. How can we address the limitations of machine learning in causal analysis?

To address the limitations of machine learning in causal analysis, it is important to carefully consider the data used and ensure it is representative and unbiased. Additionally, using a combination of different machine learning algorithms and techniques can help to reduce the risk of inaccurate results. It is also important to have a thorough understanding of the data and the problem at hand before applying machine learning techniques.

4. Are there any specific types of data that are more challenging for machine learning algorithms to analyze causally?

Yes, there are certain types of data that can be more challenging for machine learning algorithms to analyze causally. For example, data with a high degree of noise or missing values can make it difficult to determine causality. Time-series data can also be challenging as it may involve complex relationships between variables that are constantly changing.

5. Can machine learning algorithms be used to establish causality or only to identify correlations?

Machine learning algorithms can only identify correlations and cannot establish causality on their own. This is because they are based on statistical methods and cannot determine causality without human interpretation and understanding of the data. However, machine learning can be a useful tool in identifying potential causal relationships and providing insights for further investigation.
