Causal inference developed by Pearl

Thread starter: Demystifier (Science Advisor, Insights Author)
I have some questions about causal inference as developed by Pearl. For background see e.g. the thread https://www.physicsforums.com/threads/the-causal-revolution-and-why-you-should-study-it.987205/
and the Wikipedia page https://en.wikipedia.org/wiki/Causal_model

Scientists think in causal terms intuitively, even without knowing the formal theory of causal inference. Is there an example of an actual scientific problem that scientists couldn't solve with intuitive causal thinking, but did solve with the use of formal causal inference?
 
Demystifier said:
I have some questions about causal inference as developed by Pearl. For background see e.g. the thread https://www.physicsforums.com/threads/the-causal-revolution-and-why-you-should-study-it.987205/
and the Wikipedia page https://en.wikipedia.org/wiki/Causal_model

Scientists think in causal terms intuitively, even without knowing the formal theory of causal inference. Is there an example of an actual scientific problem that scientists couldn't solve with intuitive causal thinking, but did solve with the use of formal causal inference?
I think that the real problem is that scientists and everyone else use intuitive causal thinking when there is no valid reason to. Formal causal methods may be safer to use to avoid false conclusions about causality.

Example: There are innumerable examples where someone will say (or imply) something like "better sleep gives you better health" based on statistical evidence. But they do not say how they adjusted for the possibility that people in poor health may not sleep well. So the causality may be the opposite of what they are implying. The same thing occurs with exercise, social interaction, etc. "causing" better health or longer life.
I see examples like that all the time.
 
FactChecker said:
But they do not say how they adjusted for the possibility that people in poor health may not sleep well. So the causality may be the opposite of what they are implying.
Yes, but the point is that you recognized this possibility without using formal causal inference. You, as well as scientists in general, are aware of this intuitively. So how exactly does formal causal inference help in the "real life" of a scientist?
 
What about the cases in math where a pattern is found and a conjecture is made that a counterexample later disproves?
 
jedishrfu said:
What about the cases in math where a pattern is found and a conjecture is made that a counterexample later disproves?
Mathematicians resolved those cases without using Pearl's method of causal inference.
 
Demystifier said:
Yes, but the point is that you recognized this possibility without using formal causal inference. You, as well as scientists in general, are aware of this intuitively. So how exactly does formal causal inference help in the "real life" of a scientist?
Good point. I am not an expert on this so I don't know if there are examples of practical cases where formal causal inference really paid off.
I am not as confident as you seem to be that scientists intuitively recognize the error of assuming that a particular correlation that they have established implies causation. I think they often jump to an unproven conclusion.
 
Demystifier said:
I have some questions about causal inference as developed by Pearl. For background see e.g. the thread https://www.physicsforums.com/threads/the-causal-revolution-and-why-you-should-study-it.987205/
and the Wikipedia page https://en.wikipedia.org/wiki/Causal_model

Scientists think in causal terms intuitively, even without knowing the formal theory of causal inference. Is there an example of an actual scientific problem that scientists couldn't solve with intuitive causal thinking, but did solve with the use of formal causal inference?
Interesting. I have seen the math in this called structural equation modeling. I never heard of it in terms of causality. So I am aware of the method, but not the causal interpretation.
 
FactChecker said:
I am not as confident as you seem to be that scientists intuitively recognize the error of assuming that a particular correlation that they have established implies causation. I think they often jump to an unproven conclusion.
Often they do jump, of course. But later other scientists often correct them.
 
Demystifier said:
Often they do jump, of course. But later other scientists often correct them.
I am probably not being fair. I do not have the time or background to read and understand the scientific publications. It is the general public news reports that I am complaining about.
 
  • #10
Demystifier said:
Scientists think in causal terms intuitively, even without knowing about formal theory of causal inference.
Do you agree that P(A | B) is a causal relationship?
That is to say, is the probability of A given B a mathematical model of dependence, with a before/after status and causality?
 
  • #11
AngleWyrm said:
Do you agree that P(A | B) is a causal relationship?
No, I do not agree. If P(A|B) > P(A), then it is often also true that P(B|A) > P(B). So which is causing the other? Or did neither cause the other, and something else caused them both? Probability relationships alone should not be mistaken for proof of a causal relationship.

Suppose A = "a person has long arms" and B = "a person has long legs". You cannot say that the long arms caused the long legs, or that the long legs caused the long arms. Something else caused them both.
In fact, I see this mistake all the time: "If you sleep well then you will be healthier." But people who are sick and in pain also cannot sleep as well as people who are healthy. So which is causing the other?
Also: "If you exercise more, then you will live longer." But people who are sick have trouble exercising, so which causes the other?
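The common-cause point is easy to demonstrate numerically. Here is a minimal sketch (a toy model of my own, not anyone's data): a latent "height" variable drives both arm length and leg length, so the two come out strongly correlated even though neither has any causal influence on the other.

```python
import random

random.seed(0)
# Hypothetical common-cause model: latent height H drives both
# arm length A and leg length B; A and B never influence each other.
n = 10_000
arms, legs = [], []
for _ in range(n):
    h = random.gauss(0, 1)                  # common cause
    arms.append(h + random.gauss(0, 0.3))   # A = H + independent noise
    legs.append(h + random.gauss(0, 0.3))   # B = H + independent noise

def corr(xs, ys):
    # Pearson correlation coefficient, computed from scratch.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Strong correlation despite zero direct causal influence either way.
print(corr(arms, legs))
```

With this noise level the correlation comes out around 0.9, purely from the shared cause H.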
 
  • #12
FactChecker said:
No, I do not agree. If P(A|B) > P(A), then it is often also true that P(B|A) > P(B). So which is causing the other?
Then it should be possible to say that one of these two conditions holds:
  1. P(A|B) > P(A) AND P(B|A) > P(B), the above stated scenario
  2. the relationship measured in #1 isn't the case
Since the statement in #1 is a logical AND operation, there are four possible outcomes. So it can be represented as a Venn diagram:



Example case
Two variables A and B are claimed to have P(A)=0.90 and P(B)=0.20. In actuality, they merely start out that way: it's a simulation of drawing cards from a deck, so the results alternately accumulate a causal effect on each other.

500 observations:
              B     not B
A           104       340
not A         5        51

Expected under independence:
              B (0.20)                    not B (0.80)
A (0.90)      0.9 × 0.2 = 0.18 (90/500)   0.9 × 0.8 = 0.72 (360/500)
not A (0.10)  0.1 × 0.2 = 0.02 (10/500)   0.1 × 0.8 = 0.08 (40/500)

Chi-squared test for independence: 8.8139 > 3.8415 (the 95% critical value), so the observations are inconsistent with independence at the 95% confidence level.
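The chi-squared statistic quoted above can be reproduced directly from the two tables; this sketch takes the claimed marginals P(A)=0.90 and P(B)=0.20 as the independence model, as the post does:

```python
# Observed counts from the table above (500 draws).
observed = {("A", "B"): 104, ("A", "notB"): 340,
            ("notA", "B"): 5, ("notA", "notB"): 51}

# Expected counts under independence with the claimed P(A)=0.90, P(B)=0.20.
p = {"A": 0.90, "notA": 0.10, "B": 0.20, "notB": 0.80}
n = sum(observed.values())
expected = {(a, b): n * p[a] * p[b]
            for a in ("A", "notA") for b in ("B", "notB")}

# Pearson chi-squared statistic: sum of (O - E)^2 / E over the four cells.
chi2 = sum((observed[c] - expected[c]) ** 2 / expected[c] for c in observed)
print(round(chi2, 4))  # 8.8139, vs the 3.8415 critical value (1 df, 95%)
```

Note this establishes dependence only, which is the crux of the dispute that follows.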
 
  • #13
AngleWyrm said:
Do you agree that P(A | B) is a causal relationship?
No. It is a quantity, not even a relationship, let alone a causal one.
 
  • #14
AngleWyrm said:
Then it should be possible to say that one of these two conditions holds:
  1. P(A|B) > P(A) AND P(B|A) > P(B), the above stated scenario
  2. the relationship measured in #1 isn't the case
Yes, either #1 is true or it is not.
AngleWyrm said:
Since the statement in #1 is a logical AND operation, there are four possible outcomes. So it can be represented as a Venn diagram:


Example case
Two variables A and B are claimed to have P(A)=0.90 and P(B)=0.20. In actuality, they merely start out that way: it's a simulation of drawing cards from a deck, so the results alternately accumulate a causal effect on each other.
What does "causal effect" mean?
AngleWyrm said:
500 observations:
              B     not B
A           104       340
not A         5        51

Expected under independence:
              B (0.20)                    not B (0.80)
A (0.90)      0.9 × 0.2 = 0.18 (90/500)   0.9 × 0.8 = 0.72 (360/500)
not A (0.10)  0.1 × 0.2 = 0.02 (10/500)   0.1 × 0.8 = 0.08 (40/500)

Chi-squared test for independence: 8.8139 > 3.8415 (the 95% critical value), so the observations are inconsistent with independence at the 95% confidence level.
That doesn't say anything about causality. Statistical or probabilistic dependence does not, alone, imply anything about causality. There are simple counterexamples to any conclusion you try to state.
If a man has a short right arm it tends to imply that he has a short left arm, but it does not cause it. If the right arm is chopped off, the left arm does not fall off.
 
  • #15
I guess we also have to filter out the issue of lurking variables before we get to any potential causality.
An increase in consumption of ice cream is followed by an increase in skin cancer rates, or in the number of shark attacks. But we may have perfect correlation without causation: between length in feet and length in meters.
 
  • #16
WWGD said:
I guess we also have to filter out the issue of lurking variables before we get to any potential causality.
Yes, and I think that is what they try to do in a "causal model". They try to hold all other variables in set K constant.
##P(Y|X, K=k) \gt P(Y|K=k)##
But there is a lot to do to make that work correctly. I think it might just be a theoretical exercise, but I am not familiar with this subject.
 
  • #17
FactChecker said:
What does "causal effect" mean?
Given P(cancer | smoker) > P(cancer) -- cancer is more prevalent in smokers -- does that declare smoking causes cancer? Not quite yet, because it's also possible P(smoker | cancer) > P(smoker) -- smokers are more common in cancer victims. Simpson's Paradox.

This leads to two classifications:
  1. P(cancer|smoker)>P(cancer) AND P(smoker|cancer)>P(smoker)
  2. #1 is false
Thus P(cancer|smoker)>P(cancer) XOR P(smoker|cancer)>P(smoker) asserts causality and selects culprit
 
  • #18
AngleWyrm said:
Thus P(cancer|smoker)>P(cancer) XOR P(smoker|cancer)>P(smoker) asserts causality and selects culprit
Do you have a professional scientific reference that supports this claim?
 
  • #19
Dale said:
Do you have a professional scientific reference that supports this claim?
I believe at this stage, causation is more a philosophical matter than a scientific one.
 
  • #20
AngleWyrm said:
Thus P(cancer|smoker)>P(cancer) XOR P(smoker|cancer)>P(smoker) asserts causality and selects culprit
No. You have to give up the idea that simple probabilities alone can give a conclusion about causality. They are not the same. You CAN say that one is a good predictor of the other, but that is not the same as physically causing the other.
 
  • #21
FactChecker said:
No. You have to give up the idea that simple probabilities alone can give a conclusion about causality. They are not the same. You CAN say that one is a good predictor of the other, but that is not the same as physically causing the other.
I'd like something more than a verbal assertion of falsehood on my claim that causality is a demonstrable subset of two dependent variables.
 
  • #22
AngleWyrm said:
Thus P(cancer|smoker)>P(cancer) XOR P(smoker|cancer)>P(smoker) asserts causality and selects culprit
This XOR condition can never be true except in the degenerate case where one probability (P(smoker) or P(cancer)) is zero. Suppose that ##P(C|S)\gt P(C)## and that neither ##P(C)## nor ##P(S)## is zero.
By Bayes' theorem, ##P(S|C)=\frac{P(C|S)P(S)}{P(C)} \gt P(S)##.
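This argument can also be checked mechanically: for any joint distribution with nonzero marginals, each inequality is equivalent to P(C ∩ S) > P(C)P(S), so the two always hold or fail together and the XOR is never true. A sketch with made-up numbers (illustrative assumptions, not smoking data):

```python
# Hypothetical joint distribution over binary C (cancer) and S (smoker);
# the numbers are assumptions chosen only so that P(C|S) > P(C).
p = {(1, 1): 0.08, (1, 0): 0.07, (0, 1): 0.22, (0, 0): 0.63}

p_c = p[(1, 1)] + p[(1, 0)]    # P(C) = 0.15
p_s = p[(1, 1)] + p[(0, 1)]    # P(S) = 0.30
p_c_given_s = p[(1, 1)] / p_s  # P(C|S) ≈ 0.267
p_s_given_c = p[(1, 1)] / p_c  # P(S|C) ≈ 0.533

# The two inequalities stand or fall together, so the XOR is always false.
assert (p_c_given_s > p_c) == (p_s_given_c > p_s)
```

Swapping in any other joint distribution with nonzero marginals leaves the final assertion true, which is exactly the point being made.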
 
  • #23
AngleWyrm said:
I'd like something more than a verbal assertion of falsehood on my claim that causality is a demonstrable subset of two dependent variables.
I have already given some examples of the falsehood of your claim. That should be enough, but you seem to be missing my point.
 
  • #24
FactChecker said:
Suppose that ##P(C|S)\gt P(C)## and that neither ##P(C)## nor ##P(S)## is zero.
By Bayes' theorem, ##P(S|C)=\frac{P(C|S)P(S)}{P(C)} \gt P(S)##.
I think that this may illustrate the difference between S causing C versus S increasing the probability of C.
In probability, if S increases the probability of C (i.e. ##P(C|S)\gt P(C)##), then it follows that C increases the probability of S (i.e. ##P(S|C)\gt P(S)##). That is not like causation: you would not say that S causing C means that C causes S. So there is a real, fundamental difference between probabilistic inference and causation.
 
  • #25
WWGD said:
I believe at this stage, causation is more a philosophical matter than a scientific one.
So then it isn’t a valid topic of discussion
 
  • #26
AngleWyrm said:
I'd like something more than a verbal assertion of falsehood on my claim that causality is a demonstrable subset of two dependent variables.
You already received that. However, it is important to understand that this is not how this forum is supposed to work. When you make a claim the onus is on you. You are the one who needs to show that your claim is correct, not the other way around.

Please PM me with a reference that supports your claim. I don’t think that one exists.
 
  • #27
Suppose A causes B. You would not say that B causes A.
But the probabilities do something like that. Because ##P(A|B)P(B) = P(A\cap B) = P(B|A)P(A)##, we have that ##\frac{P(A|B)}{P(A)} = \frac{P(B|A)}{P(B)}##. So if B increases the probability of A then also A increases the probability of B by the same factor. Clearly, using an increase in probability as the sole evidence of causation is wrong. In fact, there is no way to use these conditional probabilities to rule out some other thing being the physical cause of both A and B.
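The ratio identity above is easy to verify numerically; the joint distribution below is an arbitrary assumption, and the check goes through for any choice with nonzero marginals:

```python
# Arbitrary joint distribution over binary A and B (illustrative numbers).
p = {(1, 1): 0.2, (1, 0): 0.3, (0, 1): 0.1, (0, 0): 0.4}
p_a = p[(1, 1)] + p[(1, 0)]   # P(A) = 0.5
p_b = p[(1, 1)] + p[(0, 1)]   # P(B) = 0.3
p_ab = p[(1, 1)]              # P(A and B)

lhs = (p_ab / p_b) / p_a      # P(A|B) / P(A)
rhs = (p_ab / p_a) / p_b      # P(B|A) / P(B)
assert abs(lhs - rhs) < 1e-12  # both equal P(A∩B) / (P(A) P(B))
```

Both ratios reduce to the same quantity, P(A∩B)/(P(A)P(B)), which is why the probabilistic "boost" is perfectly symmetric while causation is not.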
 
  • #28
Demystifier said:
I have some questions about causal inference as developed by Pearl. For background see e.g. the thread https://www.physicsforums.com/threads/the-causal-revolution-and-why-you-should-study-it.987205/
and the Wikipedia page https://en.wikipedia.org/wiki/Causal_model

Scientists think in causal terms intuitively, even without knowing the formal theory of causal inference. Is there an example of an actual scientific problem that scientists couldn't solve with intuitive causal thinking, but did solve with the use of formal causal inference?
I don't know of specific papers off the top of my head, but it is used in fields like epidemiology and behavioral sciences as far as I am aware.

Basically, if your model is not set up properly then you may not "measure" what you want to. So you can encode your model as a causal graph and then check it. These are usually probabilistic models, not the deterministic models we rely on for much of physics, so it doesn't really apply in that domain. There is a good discussion of why on Sean Carroll's podcast with Pearl, although Carroll himself seems to be working on something using this framework.

https://www.preposterousuniverse.com/podcast/2022/05/09/196-judea-pearl-on-cause-and-effect/

I am no expert, though, so take the above with a grain of salt.
 
  • #29
WWGD said:
I believe at this stage, causation is more a philosophical matter than a scientific one.
When we say that smoking causes cancer, is it philosophy or science?
 
  • #30
Demystifier said:
When we say that smoking causes cancer, is it philosophy or science?
I was referring more to framing the concept in ways that are clear and specific enough to be used and applied.
 
  • #31
WWGD said:
I was referring more to framing the concept in ways that are clear and specific enough to be used and applied.
I disagree. Pearl's framework is specific and clear. I think most people are simply unfamiliar with the methods.
 
  • #32
jbergman said:
I disagree. Pearl's framework is specific and clear. I think most people are simply unfamiliar with the methods.
Or, like me, are aware of the mathematical and statistical methods, but not of the connection to causal inference.
 
  • #33
Dale said:
Interesting. I have seen the math in this called structural equation modeling. I never heard of it in terms of causality. So I am aware of the method, but not the causal interpretation.
Structural Equation Modeling is an integral part of the New Causal Revolution. Indeed, it is the glue that shows why Pearl's Directed Acyclic Graph approach and Rubin's Potential Outcomes Framework are equivalent.
AngleWyrm said:
Do you agree that P(A | B) is a causal relationship?
That is to say, is the probability of A given B a mathematical model of dependence, with a before/after status and causality?
At the intervention level (Pearl's "second rung"), causality looks like this: ##P(Y|do(X))>P(Y).## In this case, we would say that ##X## causes ##Y.## Here, the ##do## operator means "force the variable ##X## to have the value ##x##." Much of Pearl's framework has to do with eliminating the ##do## expression so that you can get an expression you can evaluate in terms of data (the ##do## operator is not directly measurable). Conditional probability on its own is utterly incapable of expressing the ideas of causality.
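To make the conditioning-vs-intervening distinction concrete, here is a minimal sketch on a toy model of my own devising (a binary confounder Z causing both X and Y, plus a direct X → Y effect). The back-door adjustment removes Z's influence on X; plain observational conditioning does not:

```python
# Toy structural model (an assumption, for illustration):
# Z -> X, Z -> Y, and X -> Y, with Z a binary confounder.
p_z = {0: 0.5, 1: 0.5}                      # P(Z=z)
p_x1_given_z = {0: 0.2, 1: 0.8}             # P(X=1 | Z=z)
p_y1_given_xz = {(0, 0): 0.1, (0, 1): 0.3,  # P(Y=1 | X=x, Z=z)
                 (1, 0): 0.4, (1, 1): 0.6}

# Observational P(Y=1 | X=1): observing X=1 shifts belief toward Z=1.
num = sum(p_z[z] * p_x1_given_z[z] * p_y1_given_xz[(1, z)] for z in (0, 1))
den = sum(p_z[z] * p_x1_given_z[z] for z in (0, 1))
p_y_obs = num / den                         # 0.56

# Interventional P(Y=1 | do(X=1)) via back-door adjustment over Z:
# sum_z P(Y=1 | X=1, Z=z) P(Z=z); do() severs Z's arrow into X.
p_y_do = sum(p_y1_given_xz[(1, z)] * p_z[z] for z in (0, 1))  # 0.50

print(p_y_obs, p_y_do)  # conditioning overstates the interventional effect
```

In this toy model P(Y=1|X=1) = 0.56 but P(Y=1|do(X=1)) = 0.50: part of the observational association is confounding through Z, which the ##do## calculation strips out.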
 
  • #34
Demystifier said:
I have some questions about causal inference as developed by Pearl. For background see e.g. the thread https://www.physicsforums.com/threads/the-causal-revolution-and-why-you-should-study-it.987205/
and the Wikipedia page https://en.wikipedia.org/wiki/Causal_model

Scientists think in causal terms intuitively, even without knowing the formal theory of causal inference. Is there an example of an actual scientific problem that scientists couldn't solve with intuitive causal thinking, but did solve with the use of formal causal inference?
One almost-example is the idea that smoking causes lung cancer. Now this had actually been causally established before Pearl's ideas came out. However, it is possible to show this using the front-door criterion - a dimension of Pearl's framework. Pearl's magnum opus, Causality: Models, Reasoning, and Inference, has been cited thousands of times. If you want examples of how the New Causal Revolution has produced causal information that wouldn't otherwise have been obtained, look at the papers that cite that book.

The fact is this: before the New Causal Revolution, if you wanted to demonstrate causality, you had one and only one tool: the Randomized Controlled Trial (RCT) - the good ol' tried-and-true experiment. This harks back to Mill's Methods for demonstrating causality, as well as Francis Bacon. If you didn't manipulate variables (force them to have particular values), then you didn't have causality. An observational study, in particular, was utterly incapable of demonstrating causality because by definition you don't manipulate variables in an observational study.

However, with the coming of the New Causal Revolution, while you certainly still have the experiment available, you can get causality from an observational study, given the right data and the right model. How does that all work? Study the New Causal Revolution (see link in the OP) to find out! The importance of the New Causal Revolution is that many of the experiments you might like to run are impractical or unethical (smoking, anyone?). So we can still get causality sometimes, even when a RCT is not available.
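As a concrete (entirely made-up) illustration of the front-door idea mentioned above: suppose smoking X affects cancer Y only through an observable mediator M (say, tar deposits), while an unobserved trait U confounds X and Y. The front-door formula then recovers P(Y|do(X)) from purely observational quantities over (X, M, Y), and it matches the ground truth computed from the full model:

```python
# Hypothetical binary model: U -> X, U -> Y, X -> M, M -> Y (U unobserved).
p_u = {0: 0.5, 1: 0.5}
p_x1_u = {0: 0.2, 1: 0.8}               # P(X=1 | U=u)
p_m1_x = {0: 0.1, 1: 0.9}               # P(M=1 | X=x), M depends only on X
p_y1_mu = {(0, 0): 0.1, (0, 1): 0.5,    # P(Y=1 | M=m, U=u)
           (1, 0): 0.3, (1, 1): 0.7}

def p_x(x):  # observational P(X=x)
    return sum(p_u[u] * (p_x1_u[u] if x else 1 - p_x1_u[u]) for u in (0, 1))

def p_y1_mx(m, x):  # observational P(Y=1 | M=m, X=x)
    # P(U|X) by Bayes, then average P(Y|M,U) over it (M ⟂ U given X here).
    post = {u: p_u[u] * (p_x1_u[u] if x else 1 - p_x1_u[u]) / p_x(x)
            for u in (0, 1)}
    return sum(post[u] * p_y1_mu[(m, u)] for u in (0, 1))

# Front-door adjustment, using only (X, M, Y) quantities:
# P(Y=1|do(X=1)) = sum_m P(m|X=1) sum_x' P(x') P(Y=1|m,x')
front_door = sum((p_m1_x[1] if m else 1 - p_m1_x[1]) *
                 sum(p_x(xp) * p_y1_mx(m, xp) for xp in (0, 1))
                 for m in (0, 1))

# Ground truth from the full model: intervene on X directly.
truth = sum(p_u[u] * sum((p_m1_x[1] if m else 1 - p_m1_x[1]) * p_y1_mu[(m, u)]
                         for m in (0, 1)) for u in (0, 1))

print(front_door, truth)  # both come out 0.48; naive P(Y=1|X=1) is 0.60
```

Here the naive observational estimate (0.60) overstates the causal effect (0.48) because of the hidden confounder U, yet the front-door estimate hits the truth exactly without ever observing U. The numbers and structure are my own assumptions, only meant to show the mechanics.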
 
  • #35
WWGD said:
I believe at this stage, causation is more a philosophical matter than a scientific one.
I strongly disagree with this. Science has long been concerned with causation - I would say primarily concerned with causation. It's the most important question! Mill's Methods show how an experiment demonstrates causality, but as I have just said in this thread, the New Causal Revolution has demonstrated how you can get causality from an observational study, given the right conditions. This opens up many new possibilities.

The field of statistics, for a long time, distanced itself from causality because it didn't have the vocabulary and tools necessary to deal with it, other than in experiments. But again, the New Causal Revolution has changed all that.
 
  • #36
Ackbach said:
I strongly disagree with this. Science has long been concerned with causation - I would say primarily concerned with causation. It's the most important question! Mill's Methods show how an experiment demonstrates causality, but as I have just said in this thread, the New Causal Revolution has demonstrated how you can get causality from an observational study, given the right conditions. This opens up many new possibilities.

The field of statistics, for a long time, distanced itself from causality because it didn't have the vocabulary and tools necessary to deal with it, other than in experiments. But again, the New Causal Revolution has changed all that.
Please see my reply above to Demystifier. That is what I meant. But, yes, @jbergman, I was not saying Pearl does not provide a clear setup; I am not familiar with it. I meant that now science must absorb it and work with it. I will read it when I get a chance. I am not saying that the concept is not relevant to science, only that at this point it is in its infancy and hasn't yet been absorbed. That's all.
 
  • #37
Ackbach said:
Science has long been concerned with causation - I would say primarily concerned with causation.
I think that the truth is probably somewhere between your position and @WWGD's position.

Science students tend to dramatically over-apply causation and causality. It is something that has to be corrected frequently.

For example, Newton's 3rd law can be written ##\vec F_{ij}=-\vec F_{ji}##. It is common for students to believe that the force on the left is an "action" which causes the "reaction" force on the right. They can then become confused on how to apply Newton's 3rd when the cause and effect is not clear. Since causes precede effects and since the forces in Newton's 3rd law are simultaneous they generally should not be thought of in terms of cause and effect. Even worse is if they do find a pair of causally related forces (one preceding the other) and try to apply Newton's 3rd law across time.

Another example is Maxwell's equations. $$ \nabla \cdot \vec E = \rho $$$$\nabla \cdot \vec B = 0$$$$\nabla \times \vec E = -\partial_t \vec B$$$$\nabla \times \vec B = \vec J + \partial_t \vec E$$ Not just students, but also more experienced scientists will describe the left hand side as effects and the right hand side as causes. They will even describe light as "changing E fields causing changing B fields causing changing E fields and repeating" while referring to these equations. This has the same problem as above: causes precede effects but the things in Maxwell's equations happen at the same time.

There is a causal formulation of electromagnetism called Jefimenko's equations (or rather the retarded potentials): $$\phi(\vec r,t)=\int\frac{\rho(\vec r',t_r)}{|\vec r-\vec r'|} d^3\vec r'$$$$ \vec A(\vec r,t)=\int \frac{\vec J(\vec r',t_r)}{|\vec r-\vec r'|} d^3\vec r'$$$$t_r=t-\frac{|\vec r-\vec r'|}{c}$$In these formulas causes on the right side of the equations precede effects on the left side. This does express a true causal relationship, but such equations are actually rather uncommon, so I wouldn't say that science is primarily concerned with causation. It is certainly a topic of some concern, but not so ubiquitously as you imply. Even when causal relations do exist, they are often not the most convenient or useful approach to a phenomenon.
 
  • #38
WWGD said:
Please see my reply above to Demystifier. That is what I meant. But, yes, @jbergman, I was not saying Pearl does not provide a clear setup; I am not familiar with it. I meant that now science must absorb it and work with it. I will read it when I get a chance. I am not saying that the concept is not relevant to science, only that at this point it is in its infancy and hasn't yet been absorbed. That's all.
Yes, I agree. The New Causal Revolution needs to make significant inroads on traditional statistics and science education, and it hasn't, yet.
Dale said:
I think that the truth is probably somewhere between your position and @WWGD's position.

Science students tend to dramatically over-apply causation and causality. It is something that has to be corrected frequently.

For example, Newton's 3rd law can be written ##\vec F_{ij}=-\vec F_{ji}##. It is common for students to believe that the force on the left is an "action" which causes the "reaction" force on the right. They can then become confused on how to apply Newton's 3rd when the cause and effect is not clear. Since causes precede effects and since the forces in Newton's 3rd law are simultaneous they generally should not be thought of in terms of cause and effect. Even worse is if they do find a pair of causally related forces (one preceding the other) and try to apply Newton's 3rd law across time.

Another example is Maxwell's equations. $$ \nabla \cdot \vec E = \rho $$$$\nabla \cdot \vec B = 0$$$$\nabla \times \vec E = -\partial_t \vec B$$$$\nabla \times \vec B = \vec J + \partial_t \vec E$$ Not just students, but also more experienced scientists will describe the left hand side as effects and the right hand side as causes. They will even describe light as "changing E fields causing changing B fields causing changing E fields and repeating" while referring to these equations. This has the same problem as above: causes precede effects but the things in Maxwell's equations happen at the same time.

There is a causal formulation of electromagnetism called Jefimenko's equations (or rather the retarded potentials): $$\phi(\vec r,t)=\int\frac{\rho(\vec r',t_r)}{|\vec r-\vec r'|} d^3\vec r'$$$$ \vec A(\vec r,t)=\int \frac{\vec J(\vec r',t_r)}{|\vec r-\vec r'|} d^3\vec r'$$$$t_r=t-\frac{|\vec r-\vec r'|}{c}$$In these formulas causes on the right side of the equations precede effects on the left side. This does express a true causal relationship, but such equations are actually rather uncommon, so I wouldn't say that science is primarily concerned with causation. It is certainly a topic of some concern, but not so ubiquitously as you imply. Even when causal relations do exist, they are often not the most convenient or useful approach to a phenomenon.
Perhaps. But I can't help thinking that most scientists want to know why something is happening. They see a phenomenon and want to explain it - that is, they want to explain why. That's causal language.
 
  • #39
Ackbach said:
Perhaps. But I can't help thinking that most scientists want to know why something is happening. They see a phenomenon and want to explain it - that is, they want to explain why. That's causal language.
Yes, but that is much trickier than many realize. When forces are in balance, it is often true that they coexist and neither can be said to cause the other. They are just in balance and may remain stable that way for a long time.
 
  • #40
Ackbach said:
Perhaps. But I can't help thinking that most scientists want to know why something is happening. They see a phenomenon and want to explain it - that is, they want to explain why. That's causal language.
Not always. "Why" is broader than causality.

Scientifically "why" can also refer to implication. E.g. you might ask "Why does a fast moving clock tick slower than coordinate time in a given reference frame?" The answer could reasonably be Einstein's two postulates, but the two postulates are not causes that precede effects, they are logical principles from which physical phenomena can be deduced. So it is not a causal relationship that is sought with this "why" question.

"Why" can also signal a request for an explanation in terms of a different theory. Especially when asking about why some classical behavior occurs in terms of some underlying quantum mechanical phenomena. Or when asking about some Newtonian gravitational behavior in terms of general relativity. A more general theory does not precede an approximate theory in any meaningful sense, and in fact historically usually the approximate theory precedes the general theory. So again, it is not a causal relationship that is sought with this "why" question.

Non-scientifically, "why" can also refer to motivation. In psychology motivations could be considered causes of behaviors, but in physics we try to avoid motivation-based why questions.

So "explain", "why" and even "want to explain why" are not always causal language. The non-causal "why" questions in physics are very important, and perhaps even dominant. In particular, theoretical physics is almost always focused on the non-causal meanings of "why". Which is why (implication) I think that your "primarily" assertion is overly broad.

Not that you are wrong that causality is important nor are you wrong that finally having a framework for causality is very cool, but I think you are overstating your case. This causal inference stuff is interesting enough on its own that it is not necessary to overstate and oversell it.
 
  • #41
I was surprised recently to find out the extensive role that Category Theory plays in the causal inference framework.
 
