Systems/processes which create conditional independence?

In summary: independence and separability are concepts that describe how a system can be determined by what happens locally, without being influenced by what happens elsewhere. For questions specific to control systems, papers and articles that discuss independence/separability in that context are a better starting point than the general treatment here.
  • #1
query_ious
Hi,

I've been wondering recently what types of systems/processes can give rise to independence. E.g. 'a' and 'b' are independent only given that the system constraints exist.

I'm coming from biology so by independence I don't necessarily mean the strict mathematical version but something like 'So long as nothing catastrophic happens I can predict with OK accuracy what will happen in 'a' regardless of what happens in 'b'

A few thought bins I'm familiar with include -
1. Separation of location -
The cell contains multiple intracellular compartments, things that happen in one are, to an extent, independent of things that happen in another (ER/Golgi vs. mitochondria for instance).
You can also think of organs in the body if you like, your stomach and GI tract can process food whether or not your upper arms are functional.
2. Separation of timescale:
For example, altering proteins with chemical modifications occurs extremely quickly, while creating more proteins takes a long time (relatively speaking). So the chemical modifications reach equilibrium before the overall protein concentration changes, and the protein concentration never actually 'sees' the chemical modification taking place - in some sense the two are independent. (A rough numerical sketch of this appears right after this list.)
3. Different languages -
E.g if two receptor-ligand complexes are physically and temporally adjacent but neither of them has any reactivity towards the other they are 'independent'.
4. Network stuff -
I'm a bit hazier here but my intuition is that if you have a system 'A' which integrates over many discrete entities to create a binary signal and feed it into system 'B' you have a partial decoupling between the input to 'A' and the input to 'B' (analogy is the neuronal synapse - many chemical packets slightly altering membrane potential until you pass a threshold and create a binary action potential). This is a different sense of 'independence' - it is more like 'if you give me some integrated value I can make predictions without knowing exactly how the integrated value came about'
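
To make the timescale point in item 2 concrete, here is a minimal numerical sketch (the rate constants, equilibrium value and target level are made up purely for illustration): a fast variable relaxes to its equilibrium long before a slow variable has moved, so on the slow timescale the fast one can be treated as a fixed number.

```python
# Minimal sketch of timescale separation (illustrative numbers, not real biology).
# x = fraction of protein carrying a fast chemical modification (fast variable)
# p = total protein concentration (slow variable)
# Because k_fast >> k_slow, x reaches its equilibrium long before p changes,
# so on the slow timescale x effectively never 'sees' p moving.

k_fast, k_slow = 100.0, 0.1   # made-up rate constants, fast >> slow
x_eq = 0.7                    # assumed equilibrium modified fraction
p_target = 2.0                # assumed protein level the slow dynamics relax toward

def step(x, p, dt):
    dx = k_fast * (x_eq - x)       # fast relaxation of the modification
    dp = k_slow * (p_target - p)   # slow change in total protein
    return x + dx * dt, p + dp * dt

x, p = 0.0, 1.0
dt = 1e-3
for i in range(1, int(200 / dt) + 1):
    x, p = step(x, p, dt)
    if i in (int(0.1 / dt), int(1 / dt), int(100 / dt)):
        print(f"t = {i * dt:7.1f}   modified fraction x = {x:.3f}   protein p = {p:.3f}")
```

This is the quasi-steady-state idea that underlies approximations like Michaelis-Menten kinetics: the fast process is summarized by its equilibrium value and effectively drops out as a separate variable.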

Does this resonate with anything in physics/mathematics? E.g. are there any specific fields/keywords/threads/papers I could follow up which talk about this type of issue (and ideally would have an empirical catalogue of 'thought bins')?

Beyond the mechanisms themselves I'd also be interested in information on what happens when you take multiple such systems/processes and link them together... vague intuition is that this might very quickly create some kind of complexity explosion which means that in meta system A->B->C->D you can approximate each one of the arrows and still fail miserably at trying to get from A to D.

Thanks :)
 
  • #2
You have conditional probabilities whenever knowledge of one state influences the possible subsequent outcomes - e.g. dependent vs. independent statistical errors on measurements.

Your examples all talk about interactions ... so I think you are musing more on the subject of what counts as an "isolated system".
What determines the number of variables you need to keep track of to have reasonable (vs complete) knowledge of the system being studied?
That kind of thing.

Related to laws of diminishing returns... for instance, a complete state may be represented as a sum over the basis states - but there may be a lot of them. Very often it is only necessary to take the sum over the first few to get a useful result. You don't have to worry about the apex carnivore 20 miles away when studying the behavior of the herd of herbivores right here.
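
As a toy illustration of keeping only the first few terms in a sum over basis states (a Fourier sine series for a square wave; my own example, nothing specific to this thread), a handful of terms already gives a useful approximation and the rest contribute less and less:

```python
import numpy as np

# Toy example of truncating a sum over basis states: a square wave written as a
# Fourier sine series.  A few terms already give a useful approximation; the
# remaining terms are the 'apex carnivore 20 miles away'.

t = np.linspace(0, 1, 1000, endpoint=False)
square = np.sign(np.sin(2 * np.pi * t))

def partial_sum(t, n_terms):
    """Sum of the first n_terms odd harmonics of the square wave's sine series."""
    s = np.zeros_like(t)
    for k in range(1, 2 * n_terms, 2):        # odd harmonics 1, 3, 5, ...
        s += (4 / (np.pi * k)) * np.sin(2 * np.pi * k * t)
    return s

for n in (1, 3, 10, 50):
    err = np.sqrt(np.mean((partial_sum(t, n) - square) ** 2))
    print(f"{n:3d} terms kept: RMS error = {err:.3f}")
```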

It's a huge subject so most people narrow it down.
 
  • #3
Thanks.

Do you know if engineers intentionally design isolated systems into whatever it is they're trying to build as a control mechanism? If so, how do they go about it?

Do you have any keywords I could follow up on? Quick googles lead to subjects like 'base isolation' e.g. buffering against earthquake-related damage and things like fuzzy control and fuzzy logic but I lack the depth of knowledge to separate useful from useless directions... (Or is this again too general of a question? Are there even general principles which apply to formation and dissolution of approximately isolated systems?)
 
  • #4
It's generally called "putting it in a box"... an easy example is "sandboxing" suspected malware.
Your problem is that you are not being specific enough.
Isolating systems is a concept with infinite application.
 
  • #5
query_ious said:
I lack the depth of knowledge to separate useful from useless directions...

Usefulness depends on purpose. What is yours?

Are you writing a report on the philosophy of control system design? - or are you designing a specific control system?
 
  • #6
Hi query,
What you seem to be trying to get at seems a bit like something called "separability". That's actually a philosophical term: "... a separable process is one which is wholly determined by what happens locally – by what is happening at each spacetime point where that process is going on. … More precisely, a physical process will be said to be spatio-temporally separable in spacetime region R if and only if it is supervenient upon an assignment of qualitative intrinsic physical properties at spacetime points in R.”

You mention compartments of cells which are generally considered separable. "The cell contains multiple intracellular compartments, things that happen in one are, to an extent, independent of things that happen in another (ER/Golgi vs. mitochondria for instance)." In engineering, there are Finite Element Analysis (FEA) type programs that work like this. They're used to analyze structures, fluid fields, electromagnetic fields, heat transfer, etc... They are also used to analyze interactions between those types of analyses; for example, how stresses are developed by thermal expansion or how fluids might be affected by E/M fields for instance. All of those types of programs are based on the concept of separability. FEA type programs that analyze interactions between different types of causal effects are called "multiphysics software packages". One of the most common is called "Comsol". So Comsol is an FEA type program that can perform a combined analysis for example using heat transfer, thermodynamics and fluid flow in order to simulate Rayleigh-Benard convection.

The same basic philosophy is used to create software to analyze cellular interactions. There are similar FEA type programs (based on separability) for neurons and compartments of neurons. One program called "Genesis" is used to examine the interactions between neurons. From “The Book of Genesis” (an instructional guide to the program), the authors state in section 2.1.1, “When constructing detailed neuronal models that explicitly consider all of the potential complexities of a cell, the increasingly standard approach is to divide the neuron into a finite number of interconnected anatomical compartments. … Each compartment is then modeled with equations describing an equivalent electrical circuit (Rall 1959). With the appropriate differential equations for each compartment, we can model the behavior of each compartment as well as its interactions with neighboring compartments.”

There are some biologists who disagree with the concept of separability, but I'd say they are in the minority. Those are the folks who might suggest there's a "meta system A->B->C->D you can approximate each one of the arrows and still fail miserably at trying to get from A to D... " Yes, you can fail miserably in trying to do that with a biological system, but that isn't because separability fails.

Not sure if that's what you are trying to get at or not, but thought I'd toss it out there. If you need references for any of this, just say so.
 
  • #7
Frequency? I don't know whether or not this counts in context here, but multiplexed signals on a single wire or RF wavelength don't interfere with each other.
 
  • #8
Thanks all!

a few replies:
Stephen Tashi said:
Usefulness depends on purpose. What is yours?
I'm trying to figure out how to think... how to organize my mind.
I'm in grad school in biology, in the 'systems' and 'immunology' departments. Over the past year (my MSc) I've built a technique to collect (dozens-hundreds of) mRNA datapoints from (1e3-1e5) single cells and I'm now moving into the 'collect real data about real questions' stage.

Essentially the type of experiment I'd like to do is to capture a system snapshot- see all the parts, the input and the output at the same time over a few conditions and then begin to play, to understand what makes things tick. Which raises all kinds of theoretical questions like what on Earth constitutes a 'system', 'input', 'output' in the first place and more practical questions like 'how many variables are enough?', 'which variables to choose?', etc..

Which led to two main observations -

Observation #1 is that on the one hand, things in biology are very deeply intertwined and are very relative. (How can general rules exist when so much is determined by local context? How do biological systems 'assign meaning' and 'make decisions' when the criteria for 'good' and 'bad' keep changing?) On the other hand things *work* (biological systems maintain homeostasis and perform ridiculously hard computations). From this I infer/speculate that there must be *something* that holds everything together, some kind of 'operating algorithm'.

Observation #2 is that biology essentially looks (to me) like a supervised dimensionality reduction. E.g. there is no part of biology that is sensitive to all incoming bits of data; each part is responsive to its own particular subset. For example enzymes have a 3D structure which geometrically constrains the molecules they respond to, cells have receptors which constrain which environmental stimuli they sense, neurons filter raw optical inputs in pre-defined ways (which cause optical illusions), etc. More importantly, this filter is tunable - by modifying the details you can modify which bits you tag as 'functional' and which you ignore.

So my current internal model is that observation 2 - the 'supervised dimensionality reduction' is a big part of the solution to observation 1 - 'how does biology get anything done in real-time? '. Under this assumption I asked, well, if the primary force acting on the system is 'make decisions in real-time' what other things might this force create? Which types of conditions are conducive to decision making? One way to approach this question is to ask how our current information-processing mathematical models work. What assumptions seem ubiquitous in mathematical models which deal with information processing? Here I step way out of my depth but one answer I came up with was 'independence' and the other was 'consistency'. Neither of which can ever be assumed as given in biological systems but (I claim) both can be generated through a specific system architecture or dynamic process. Which led to the above post...

Does that sort-of make sense?

Q_Goest said:
something called "separability".
That was a very nice keyword :) (As was 'supervenience')

From my small literature read-through, it looks like the question of 'how to think about causality in multi-level systems' is something of an up-and-coming topic in the philosophy of science; philosophers don't really know what to do with it. In http://www.jstor.org/stable/10.1086/594515 they say that "requiring explanation to be only by means of universal, exceptionless laws excludes most of biology from being explanatory...what is needed for causation and explanation is not universality but invariance or stability of the causal relationship in various, but not necessarily all, contexts. Three features capture this idea: invariance, modularity, and insensitivity." Modularity is very similar to what I call independence (or 'separability') and invariance/insensitivity relate to what I think of as consistency. Other interesting articles ( 1 2 3 ) talk about the philosophy behind research into multi-level systems and (one of the things) they say is that things are relative - the significance of datapoints and the types of patterns you look for depend on how you define the system.

Which is reassuring; it's nice to know that other people share my worldview. But - none of the articles that I found actually goes beyond the philosophy of 'what does the data look like and why is our current mode of thinking wrong' to 'Which types of mechanisms could give rise to a dataset similar to what we're observing? How do these mechanisms combine to give solutions to real-world problems? What happens when these mechanisms interact?' which is closer to what I'm actually interested in...

Q_Goest said:
In engineering, there are Finite Element Analysis (FEA) type programs that work like this
Cool, thanks. From what I've seen these are primarily geometrically defined elements which, when put together, comprise the whole, right? I wonder if it's possible to define the 'elements' not only in terms of geometry but also more abstract criteria. (For example, graphs)

Q_Goest said:
Yes, you can fail miserably in trying to do that with a biological system, but that isn't because separability fails.
Then what is the reason? When do we fail at moving between levels?

Thanks again for responding :)
 
  • #9
query_ious said:
I'm trying to figure out how to think... how to organize my mind.

Essentially the type of experiment I'd like to do is to capture a system snapshot- see all the parts, the input and the output at the same time over a few conditions and then begin to play, to understand what makes things tick. Which raises all kinds of theoretical questions like what on Earth constitutes a 'system', 'input', 'output' in the first place and more practical questions like 'how many variables are enough?', 'which variables to choose?', etc..

As general statements of goals go, that's remarkably clear.

My somewhat random thoughts:

Of course, breaking things into independent pieces is useful in analyzing something. However, understanding things that are very dependent on each other is equally important.

Sometimes thinking about synthesis is a creative way to do analysis. When thinking about synthesis - i.e. how to design a system - some good advice (courtesy of architect Christopher Alexander's observations on "conscious design") is to consider how the aspects of the design relate to each other with respect to being independent, complementary or contradictory - all this vis-a-vis the goals of the design. For example, if a teapot is supposed to be easy to lift, sturdy, and retain heat, there can be harmony or tension between these design goals in a statistical sense (looking at the population of possible designs). "Easy to lift" is contradictory to "sturdy" since things that are easy to lift tend to be lightweight and lightweight things tend to be less sturdy than heavy things. There is a complementary aspect to being sturdy and retaining heat well since sturdy things tend to have substantial mass. A good design will compromise contradictory aspects or find some clever way where a particular design makes two apparently contradictory aspects actually complement each other.

Before approaching a biological system by analysis, perhaps it is necessary to practice on systems whose organization we already know. Computer simulations are such systems. (An amusing idea would be to attempt the analysis you describe above on a complicated computer program that isn't intended to simulate biology - for example, a web server.)

When analyzing a complex system that seems to involve some sort of communication between its parts, I've never seen an analysis that treated questions with the same importance as statements. In our verbal communication, questions are important. In packet communication over networks, some packets are intended to elicit a reply (as in the TCP of TCP/IP). Studying the "transmission of information" focuses attention on the transmission of statements, but transmitting questions should be kept in mind.
 
  • #10
Hi query. The question of causation in biology is not so new. There are many people who have put forth ideas about how to think about causation and those people don’t necessarily know about each other. Different authors use different terminology and seem to have differing opinions. The paper by Sandra Mitchell for example, is not unlike one by Alwyn Scott. Both talk about the causal properties of the parts of some system being highly nonlinear and dependent on the whole network such that teasing out the relationships between some input and output becomes problematic.

Regardless of the fact that there is a wide multitude of papers covering this topic with differing opinions, all of those papers can generally be seen to fit into one of 2 categories. There's a third category, to be honest: one where the author of the paper isn't aware of, or doesn't understand, how their concepts of causation fit into the existing 2 primary categories of causation.

Before getting into these 2 categories, I want to respond to a question you have. When we look at a complex system with varying levels, we can make generalizations at each level and we might even believe that those generalizations are equivalent to natural laws. Fodor for example takes on this question. I mention this paper because it's been cited over 1100 times. His conclusion is applicable to biology though he does focus on causation in economics. The point he makes is that there are generalizations in the higher level sciences that are irreducible. By irreducible, he means there are no 'bridge laws' between some higher level generalization or law and lower levels. One of his examples regards Gresham's law (in economics). Fodor insists for example that Gresham's law is causally efficacious, and since it is not reducible via bridge laws, it must be a form of higher level physical law. The question is whether or not higher level generalities can be causally efficacious in some way. The other question is how one might derive the higher level law or at least make some prediction about the system in question given the complex interactions within (i.e. when moving between levels).

As mentioned, there are 2 categories of thought as I see it. Those that embrace 'weak emergence' and those that embrace 'strong emergence'. See Chalmers for a good review of these terms. Weak emergence might be best understood by looking at how numerical analysis is done. It may seem strange to talk about numerical analysis, such as those performed on fluid systems, structures, or at the compartment level of neurons, but the philosophy behind those analyses is pertinent to the discussion. Even chemical interactions combined with thermodynamics, heat transfer and fluid motions can be combined today using multiphysics software. The reason these numerical analysis tools are so powerful regards the basic philosophy behind how they work. Imagine a very small volume of space that can be described assuming classical mechanics. In mechanical engineering we call this a control volume. Inside that volume of space you might imagine chemical reactions taking place, such as inside a cell. You might imagine there being chemical pathways that convert one set of molecules to another while absorbing or rejecting heat and work. You would need to keep track of that energy and examine heat transfer between that volume of space and adjacent volumes. You might also imagine the movement of molecules across some surface that was wrapped around this control volume that we call a control surface. When you create a model of something so that you can do a numerical analysis on it, that's how it's done. The model assumes some volume of space, looks at interactions inside and moves heat or energy, forces or whatever, to the adjoining volumes of space, the adjoining control volumes. Those causal actions propagate through the system at a rate that is a function of the means of propagation.
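
As a very rough sketch of that control-volume picture (a 1D heat-conduction toy with made-up grid size and diffusion number, not how any particular multiphysics package is implemented), each small volume exchanges heat only with its immediate neighbours, and the global temperature profile emerges from purely local updates:

```python
import numpy as np

# Sketch of the control-volume idea: a bar split into small volumes that exchange
# heat only across their shared faces (explicit finite differences).  All numbers
# are illustrative; the point is that every update is purely local.

n_cells = 50
T = np.zeros(n_cells)
alpha = 0.2                      # diffusion number, kept below 0.5 for stability

for step in range(20000):
    T_new = T.copy()
    # interior cells: heat flows in/out only from the two neighbouring cells
    T_new[1:-1] = T[1:-1] + alpha * (T[2:] - 2 * T[1:-1] + T[:-2])
    T_new[0], T_new[-1] = 100.0, 0.0   # fixed-temperature boundary conditions
    T = T_new

print(T[::10].round(1))          # temperature profile sampled along the bar
```

In this toy, the large-scale profile is fixed entirely by the local rules plus the boundary conditions, which is the sense of weak emergence being described here.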

The philosophical community has provided terminology for most of this. The type of phenomena which emerges from all these interactions is called "weak emergence" and that term is best defined by Bedau. So numerical analysis is done for example by engineers who model fluids and structures and by neuroscientists who build neurons out of compartments and cortical columns out of neurons, etc… Markram for example, wants to build up a model of the cortical columns and hopefully an entire brain using a numerical analysis program called Neuron. Note also that numerical analysis done in this way follows the philosophy of the separability of classical mechanics which is best described by Richard Healey, who has spent much of his career defining how classical mechanics follows separability and how that differs from quantum mechanics which follows nonseparability.

So that’s the end of the first category. The second category claims what the first category does not. Unfortunately, there’s a lot of confusion in the literature around this. If you want to better understand what the confusion is about, try a paper by Craver and Bechtel. It’s quite good actually and is published in the journal “Biology and Philosophy”. The term “top-down” is not as well defined as the term “downward causation” so I’m going to use the term “downward causation” which means that phenomena described by some higher level exerts a causal efficacy over the lower levels that make up the higher level. Strong emergence is different from weak in that it requires downward causation.

Downward causation requires that lower levels of nature are causally influenced by higher levels. For example, Fodor's observation that "Gresham's law" dictates how money is spent is an example of downward causation. Although Fodor himself doesn't seem to recognize this, others do, including for example Jaegwon Kim. People who believe that higher levels of physics causally influence lower levels are actually proponents of downward causation, though this isn't clear to many. Authors of the papers you provided would be from those who either don't understand downward causation (category 3) or who support it (category 2). It isn't hard to understand why there's both support for and confusion over downward causation. It appeals to our intuitions about how nature works first of all (albeit, not everyone's intuitions). We like to believe that there is an array of sciences with some hierarchy such that each level of science is somehow independent of other levels, as described by Anderson for example. I'm not sure if Anderson is legitimately in category 2 or 3 but he is well cited and helps to define the intuitive problem people wrestle with, as does Fodor. There's confusion over it (category 3) because people simply don't understand how higher levels have to interact with lower levels to create the type of phenomenon they intuitively expect. If downward causation exists, it would help explain why in biology for example, there is a failure when moving between levels. We can't move between levels in this case, not because it is too complex, but because the higher levels causally influence the lower levels. In other words, our control volume (some volume of space) is influenced by something other than the local causal interactions at the control surface.

Hopefully that helps boil down the opinions that are out in the literature. I’d suggest looking at those papers with the above 3 categories in mind. Ask yourself if this person is proposing “weak emergence” or “strong emergence” with downward causation or if they simply don’t make the distinction.

Anderson: http://robotics.cs.tamu.edu/dshell/cs689/papers/anderson72more_is_different.pdf
Bedau: http://people.reed.edu/~mab/papers/weak.emergence.pdf
Chalmers: http://consc.net/papers/emergence.pdf
Craver: http://link.springer.com/article/10.1007/s10539-006-9028-8#
Fodor: https://ethik.univie.ac.at/fileadmin/user_upload/inst_ethik_wiss_dialog/Fodor__J._1974._Special_sciences_in_Synhtese.pdf
Kim, Jaegwon. "‘Downward causation’in emergentism and nonreductive physicalism." Emergence or reduction (1992): 119-138.
Scott: http://redwood.berkeley.edu/w/images/5/5e/Scott2004-reductionism_revisited.pdf
 
  • #11
mechanism #5:
Statistics and/or clumping

'Passive statistics' (classic probability):
Imagine a jar with two spheres, one green and one blue. Before picking any sphere, each has an equal probability of being either color. After picking the first sphere the colors of both spheres are fixed, which means that the color of sphere 2 is entirely dependent on the color of sphere 1.
Imagine a jar with infinitely many spheres such that half are green and half are blue. Both prior and posterior probabilities are equally distributed between green and blue, which means that the color of the next sphere is entirely independent of the color of the sphere before it.

So we have a 'gradient of independence' from the trivial cases of 2 spheres (dependent) to infinite spheres (independent). Given sufficiently large numbers of samples, some parameter of interest (in this case, color probability) gains independence from some other parameter (individual observations, sampling order). In this sense independence is 'bought' at the 'cost' of sample size.
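
A quick simulation of that gradient (sample counts and jar sizes are arbitrary): estimate P(second sphere is green | first sphere was green) for jars of increasing size and watch it approach the unconditional 0.5.

```python
import random

# Sketch of the 'gradient of independence': draw two spheres without replacement
# from a half-green, half-blue jar and ask how much the first colour tells you
# about the second.  With 2 spheres the second colour is fully determined; as the
# jar grows, the dependence fades away.

def p_second_green_given_first_green(n_spheres, trials=100_000):
    hits = total = 0
    for _ in range(trials):
        greens, blues = n_spheres // 2, n_spheres // 2
        if random.random() < greens / (greens + blues):          # first draw is green
            greens -= 1
            total += 1
            hits += random.random() < greens / (greens + blues)  # second draw green?
    return hits / total

for n in (2, 4, 10, 100, 10_000):
    p = p_second_green_given_first_green(n)
    print(f"jar of {n:5d} spheres: P(green | first was green) = {p:.3f}  (unconditional: 0.500)")
```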

'Active clumping' (of different things into one category)
Imagine a jar with some large number of spheres such that each sphere has a slightly different radius. We would like to predict the size of the next sphere to emerge (with greater than 1/n probability). This is intractable given that no two spheres are exactly alike. However, we can arbitrarily define in our system some threshold T above which the spheres belong to category 'big' and below which the spheres belong to category 'small'. Which means we have now artificially created a system where we are guaranteed a certain 'category distribution' independent of the actual sizes of the spheres. In this sense we are leveraging an already existing large sample size to 'buy' independence at the 'cost' of accuracy and details. (One biological implementation of this is given by neuronal action potentials which either do or do not fire in response to an analog input)
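
And a sketch of the clumping version (the threshold and the radius distribution are made up): the individual radii are all different and individually unpredictable, but the big/small category frequencies are stable from sample to sample.

```python
import random

# Sketch of 'active clumping': thresholding continuously varying radii into
# 'big'/'small' buys a predictable category distribution at the cost of detail.
# The threshold and the radius distribution are arbitrary choices.

random.seed(1)
THRESHOLD = 1.0   # radii above this count as 'big'

def fraction_big(n_spheres):
    radii = (random.gauss(1.0, 0.2) for _ in range(n_spheres))
    return sum(r > THRESHOLD for r in radii) / n_spheres

for trial in range(5):
    print(f"sample {trial}: fraction 'big' = {fraction_big(5000):.3f}")
# Each sample gives roughly 0.5 'big', even though no two radii are ever the same.
```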

My intuition is that the second (active) sense, in addition to mechanisms 1-4 above (and others like frequency, as suggested by Danger), is fundamental to a lot of how biology actually works and is also fundamental to a lot of those cases when biology doesn't work, especially when you do this sort of thing on many different levels in parallel. For example there is a wealth of literature on evolutionary wars between hosts and viruses, a lot of which can be abstracted into
- host makes arbitrary decision necessary to solve some unrelated problem (say, importing food)
- virus doesn't care about said problem and so manipulates the decision to its own advantage (by, say, hijacking a signalling pathway)
- host attempts to do complicated things to defend itself without touching the original decision (as the decision is now part of the host's basic infrastructure)
- and so on
 

1. What is conditional independence?

Conditional independence is a statistical concept describing the relationship between two variables in a system or process. Two variables are conditionally independent if, given the value of a third variable, knowing the value of one provides no additional information about the probability of the other.
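
In standard probability notation (a textbook statement, not part of the original answer), variables A and B are conditionally independent given C when:

```latex
A \perp\!\!\!\perp B \mid C
\quad\Longleftrightarrow\quad
P(A=a,\ B=b \mid C=c) \;=\; P(A=a \mid C=c)\, P(B=b \mid C=c)
\quad\text{for all } a, b, c \text{ with } P(C=c) > 0 .
```

Equivalently, P(A = a | B = b, C = c) = P(A = a | C = c): once C is known, learning B does not change the probabilities for A.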

2. Why is conditional independence important in scientific research?

Conditional independence is important because it allows scientists to simplify complex systems or processes by breaking them down into smaller, more manageable parts. It also helps to identify relationships between variables and make predictions based on statistical analysis.

3. How can conditional independence be tested or measured?

Conditional independence can be tested with statistical methods that control for the conditioning variable. For continuous data, one common check is whether the partial correlation of the two variables, given the third, is close to zero; for categorical data, independence tests (such as chi-squared tests) can be run within each stratum of the conditioning variable. Batteries of such conditional independence tests are also the building blocks of structure-learning algorithms for graphical models.
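
As a sketch of one such check (synthetic data in which two variables share a common cause, so they are correlated but conditionally independent; the regression-residual approach below is just one of several possible tests):

```python
import numpy as np

# Partial correlation of a and b given c, computed by regressing c out of both
# and correlating the residuals.  The data are synthetic: a and b depend only on
# the common cause c (plus independent noise).

rng = np.random.default_rng(0)
n = 10_000
c = rng.normal(size=n)                # common cause
a = 2.0 * c + rng.normal(size=n)      # driven by c plus noise
b = -1.0 * c + rng.normal(size=n)     # driven by c plus noise

def residual(y, x):
    """Residual of y after least-squares regression on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

raw = np.corrcoef(a, b)[0, 1]
partial = np.corrcoef(residual(a, c), residual(b, c))[0, 1]
print(f"corr(a, b)          = {raw:+.3f}   (clearly non-zero)")
print(f"corr(a, b given c)  = {partial:+.3f}   (close to zero)")
```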

4. What are some examples of systems or processes that exhibit conditional independence?

A classic example is two effects of a common cause: two symptoms of the same disease are correlated across patients, but become (approximately) independent once you condition on whether the patient actually has the disease. Similar structure appears in weather, financial, and social-network data whenever several observed variables are driven by a shared underlying factor; given that factor, the variables tell you little extra about each other.

5. Can conditional independence change over time?

Yes, conditional independence can change over time as variables and their relationships within a system or process evolve. It is important for scientists to continuously monitor and analyze these changes in order to accurately understand and predict the behavior of the system or process.
