What are the Limitations of Machine Learning in Causal Analysis?

Summary
Machine learning excels at making predictions but struggles with causal analysis, which requires human intuition and judgment. For instance, while an ML algorithm may identify a correlation between low socioeconomic status and diabetes, it cannot deduce the underlying causal mechanisms without human insight. The discussion highlights that human analysts can use deductive reasoning to explore potential intermediate variables, a capability that current AI lacks. Although AI may eventually surpass humans in specific tasks, it remains limited in understanding complex causal relationships. Overall, the consensus is that while AI can handle large datasets, it cannot replicate the nuanced reasoning required for thorough causal analysis.
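For concreteness, here is a minimal Python sketch (not from the thread) of the summary's point: a purely predictive fit happily picks up the correlation between socioeconomic status and diabetes risk, but only conditioning on a hypothesized intermediate variable, a step supplied by the human analyst rather than the algorithm, shows that the association runs through it. The variable names, coefficients, and data-generating process are illustrative assumptions, not results from any study.

```python
# Illustrative sketch (assumed setup, not from the thread): a correlation the model
# exploits for prediction is driven entirely by an intermediate variable.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data-generating process: socioeconomic status (ses) affects an
# intermediate variable (access_to_care), which in turn affects diabetes risk.
ses = rng.normal(size=n)                               # standardized SES
access_to_care = 0.8 * ses + rng.normal(scale=0.6, size=n)
diabetes_risk = -0.7 * access_to_care + rng.normal(scale=0.7, size=n)

# The prediction-only view: ses correlates strongly with diabetes_risk ...
print("corr(ses, diabetes_risk) =",
      round(np.corrcoef(ses, diabetes_risk)[0, 1], 3))

# ... but once the hypothesized intermediate variable is conditioned on, the direct
# association vanishes. Proposing that variable is the human, deductive step.
X = np.column_stack([np.ones(n), access_to_care, ses])
coef, *_ = np.linalg.lstsq(X, diabetes_risk, rcond=None)
print("coefficient on ses given access_to_care =", round(coef[2], 3))
```

Nothing in the fitted model itself distinguishes these two readings of the same correlation; that distinction is exactly the causal reasoning the discussion below argues machines currently lack.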
  • #61
I agree with the concept that machines have no limit, but comparing them to human intelligence seems arbitrary and anthropocentric. We'd also have to make clear that we're talking about very long periods of time. 100 years from now? Yeah, there are a lot of limits, and I don't see a Bender Rodriguez strutting down the street in that amount of time, but they'll certainly be driving our cars and flying our planes. 1000 years? Mmm... maybe? 10,000? Absolutely.

Of course, that's assuming we humans don't destroy ourselves first, which in theory we don't have to.

It shouldn't really even be debatable whether or not an AI could ever match a human. The human brain is a chemical machine, and we have programmable maths that describes that chemistry. Given enough computing power, you could build a brain from the atoms up inside a computer. It's unlikely we'd ever need to do anything like that, and I don't see humanity having that much computing power any time soon, but there's nothing really preventing it in theory. The only real limit is energy and matter.

Natural selection gave us our thought process in about 600 million years; I'd think intelligent design could beat that by several orders of magnitude.

I'm wary of AI in the long term. I don't think anyone alive today has to worry about it, but I see it as one of the potential "great filters" for life in the Fermi paradox. I see no reason to fear any individual AI, but the fact that they are immortal means that the number of them will grow very rapidly. I think they'll be very individualized and be a result of their "upbringing." They'll be as diverse as humans, and while I believe that most humans are good... Hitler existed.
 
  • #62
A. Neumaier said:
Many classification problems solved by computers better than by humans are also not well-defined problems.
I don't doubt that at all, but I wouldn't lump all non-well-defined problems in the same category. There are several different degrees of being badly defined, some of which are still perfectly solvable - sometimes even with trivial ease - by some humans, despite all their vagueness.
 
  • #63
Auto-Didact said:
I don't doubt that at all, but I wouldn't lump all non-well-defined problems in the same category. There are several different degrees of being badly defined, some of which are still perfectly solvable - sometimes even with trivial ease - by some humans, despite all their vagueness.
Please give examples.
 
  • #64
A. Neumaier said:
Please give examples.
I'm currently publishing two papers on this topic; will link when they are done.
 
