How can coarse-graining methods be applied in the context of optimal control?

  • Thread starter saviourmachine
  • Tags
    Propagation
In summary, the conversation discusses belief propagation and its applications in different disciplines such as physics, image processing, and optimal control. The concept of generalized belief propagation, which introduces clusters for more efficient communication, is also mentioned. The discussion also brings up coarse-graining methods and how they relate to belief propagation. The conversation ends with a suggestion to treat the entire system as one node and to consider the individual bases as message-passing nodes.
  • #1
saviourmachine
Dear all,

Let me introduce the topic first, so this thread can be put in the most appropriate subforum. Belief propagation, also called message passing, is a method that communicates messages between nodes of a graphical model (for example, a Markov random field), where the messages correspond to specific operations (such as sums and products) that leave certain global properties intact. In different disciplines, related forms of this algorithm are known as the Viterbi algorithm, turbo (de)coding, the Kalman filter, and the transfer-matrix approach (physics). If belief propagation converges (which it won't necessarily do on graphs with loops), it converges to a stationary point of the Bethe free energy. Belief propagation can be used for indoor localisation, stereo matching, background subtraction, region filling, super-resolution, etc.
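To make the sum-and-product structure concrete, here is a minimal sketch of sum-product belief propagation on a three-node chain (all potentials below are made-up toy numbers; on a tree like this, the resulting beliefs are the exact marginals):

```python
# Minimal sum-product belief propagation on a binary chain x1 - x2 - x3.
# On a tree, forward/backward message passing yields exact marginals.
import numpy as np

# Unary potentials phi_i(x_i) and a shared pairwise potential psi(x_i, x_j)
phi = [np.array([0.7, 0.3]),
       np.array([0.5, 0.5]),
       np.array([0.2, 0.8])]
psi = np.array([[0.9, 0.1],
                [0.1, 0.9]])   # favours equal neighbouring states

# Forward messages: m_{i-1 -> i}(x_i) = sum_{x_{i-1}} phi(x_{i-1}) psi(x_{i-1}, x_i) m_{i-2 -> i-1}(x_{i-1})
fwd = [np.ones(2) for _ in range(3)]
for i in range(1, 3):
    fwd[i] = psi.T @ (phi[i - 1] * fwd[i - 1])
    fwd[i] /= fwd[i].sum()          # normalise for numerical stability

# Backward messages, symmetrically
bwd = [np.ones(2) for _ in range(3)]
for i in range(1, -1, -1):
    bwd[i] = psi @ (phi[i + 1] * bwd[i + 1])
    bwd[i] /= bwd[i].sum()

# Beliefs (marginals): normalised product of local evidence and incoming messages
for i in range(3):
    b = phi[i] * fwd[i] * bwd[i]
    print(f"P(x{i + 1}) =", (b / b.sum()).round(3))
```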

Generalized belief propagation introduces clusters. For example, when considering an image, it is easy to conceive that we need not communicate messages only between individual pixels. It would be cool to communicate messages between regions (clusters or cliques) and only communicate differences at the individual pixel level when required.
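As a very rough (and admittedly crude) sketch of that coarse-to-fine idea, not the full region-graph construction of generalized belief propagation: group the variables into blocks, pass messages between the blocks, and then let each pixel reuse its block's belief as a prior. All sizes, potentials, and the aggregation rule below are toy assumptions:

```python
# Crude coarse-to-fine sketch: block-level messages first, pixel-level reuse after.
import numpy as np

n, block = 8, 4   # 8 binary "pixels" in a chain, blocks of 4 (toy sizes)
phi = np.random.default_rng(0).dirichlet([1, 1], size=n)   # unary potentials
psi = np.array([[0.9, 0.1], [0.1, 0.9]])                   # smoothness potential

# Coarse level: one super-node per block, using the product of its unaries
# as an approximate block potential (a crude aggregation, for illustration).
coarse_phi = [phi[i:i + block].prod(axis=0) for i in range(0, n, block)]
coarse_phi = [p / p.sum() for p in coarse_phi]

# Messages between the two blocks (a chain of length 2 here)
m12 = psi.T @ coarse_phi[0]
m21 = psi @ coarse_phi[1]
coarse_belief = [coarse_phi[0] * m21, coarse_phi[1] * m12]
coarse_belief = [b / b.sum() for b in coarse_belief]

# Fine level: each pixel reuses its block's belief as a prior, so only
# local corrections remain to be propagated at the pixel level.
fine_belief = np.array([phi[i] * coarse_belief[i // block] for i in range(n)])
fine_belief /= fine_belief.sum(axis=1, keepdims=True)
print(fine_belief.round(3))
```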

a.) These methods look very much like renormalization-group methods in statistical physics. Are any physicists interested in looking into generalized belief propagation to see whether we are missing some recent results? For example, see [1].

b.) What would coarse-graining methods look like in the context of optimal control? Would it correspond to setting multiple temporal horizons, or to satisfying the Bellman equation at multiple resolutions of the state variable x? See [2].
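To illustrate what the second option could mean, here is a hedged sketch of value iteration at two resolutions of x: solve the Bellman equation on a coarse discretisation first, interpolate, and use the result to warm-start the fine grid. The 1D dynamics and quadratic cost are toy assumptions of mine, not taken from [2]:

```python
# Multi-resolution value iteration on a 1D state grid (toy MDP).
import numpy as np

def value_iteration(cost, V0, gamma=0.95, iters=200):
    """Deterministic actions: stay, step left, step right."""
    V = V0.copy()
    for _ in range(iters):
        left = np.roll(V, 1);  left[0] = V[0]      # clamp at the boundaries
        right = np.roll(V, -1); right[-1] = V[-1]
        V = cost + gamma * np.minimum.reduce([V, left, right])
    return V

fine_n = 64
x = np.linspace(-1, 1, fine_n)
cost = x ** 2                       # quadratic state cost, target at x = 0

# Coarse pass: keep every 4th state and solve to convergence there
coarse = slice(None, None, 4)
V_coarse = value_iteration(cost[coarse], np.zeros(fine_n // 4))

# Interpolate the coarse value function onto the fine grid, then refine
# with only a few fine-grid sweeps instead of solving from scratch.
V0 = np.interp(x, x[coarse], V_coarse)
V_fine = value_iteration(cost, V0, iters=20)
print(V_fine.round(2))
```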

I hope we'll have some nice discussions here. Thanks in advance!

[1] Acceleration Strategies in Generalized Belief Propagation (2012) Chen, Wang.
[2] Optimal Control as a Graphical Model Inference Problem (2012) Kappen, Gómez, Opper.
 
  • #2
Hey saviourmachine.

When you mention coarse-grained methods, the first question I have is: how are you actually structuring the resolution?

A few examples would range from a completely hierarchical approach to something more in line with, say, a Fourier approach, where you have a signal in some region and each coefficient contributes progressively finer details of that particular region.

If you have a Fourier-type situation, it means that you can condition on global characteristics at a particular resolution, as opposed to completely local ones, as in hierarchical clustering or even a non-hierarchical scheme where all clusters are mutually exclusive.
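A small sketch of that contrast, with an illustrative signal of my own choosing: a global coarse-graining that keeps low-frequency Fourier coefficients (each kept coefficient constrains the whole signal) versus a local one that averages over mutually exclusive blocks:

```python
# Global (Fourier truncation) vs local (block averaging) coarse-graining.
import numpy as np

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.3 * rng.standard_normal(64)

# Global: keep only the 8 lowest-frequency Fourier coefficients
F = np.fft.rfft(signal)
F[8:] = 0
fourier_coarse = np.fft.irfft(F, n=64)

# Local: average over 8 mutually exclusive blocks of 8 samples each
block_coarse = signal.reshape(8, 8).mean(axis=1).repeat(8)

print("global residual:", np.abs(signal - fourier_coarse).mean().round(3))
print("local residual: ", np.abs(signal - block_coarse).mean().round(3))
```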

This has obvious implications for belief propagation, because it means the message passing is not tied to one completely local part of the node/cluster.

In fact, you might want to consider treating the whole thing as one node and treating the individual bases (i.e. the trig basis vectors in the right space) as the individual message-passing nodes, instead of only looking at a completely mutually exclusive set of nodes.
 

Related to How can coarse-graining methods be applied in the context of optimal control?

1. What is Recursive Belief Propagation?

Recursive Belief Propagation is a machine learning algorithm used for probabilistic inference in graphical models. It is an extension of the standard Belief Propagation algorithm and is used to efficiently estimate the marginal probabilities of variables in a large and complex graph.

2. How does Recursive Belief Propagation work?

Recursive Belief Propagation works by propagating messages between nodes in a graphical model. These messages represent the beliefs about the values of a particular variable given the values of its neighboring variables. This process is repeated until convergence is reached, and the marginal probabilities of the variables can be determined.
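As a rough illustration of this iterate-until-convergence loop (the three-node cyclic graph, potentials, and damping factor below are illustrative assumptions, not a reference implementation):

```python
# Loopy belief propagation with damped parallel updates and a convergence check.
import numpy as np

nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}          # a 3-cycle: loopy on purpose
phi = [np.array([0.6, 0.4]) for _ in range(3)]    # unary potentials (assumed)
psi = np.array([[0.8, 0.2], [0.2, 0.8]])          # pairwise "agree" potential
msgs = {(i, j): np.full(2, 0.5) for i in nbrs for j in nbrs[i]}
damp = 0.5          # damping is a common heuristic against oscillation on cycles

for sweep in range(100):
    new = {}
    for (i, j), old in msgs.items():
        # product of node i's evidence and messages from neighbours other than j
        prod = phi[i].copy()
        for k in nbrs[i]:
            if k != j:
                prod *= msgs[(k, i)]
        m = psi.T @ prod
        m /= m.sum()
        new[(i, j)] = damp * old + (1 - damp) * m  # damped update
    delta = max(np.abs(new[e] - msgs[e]).max() for e in msgs)
    msgs = new
    if delta < 1e-6:                               # convergence check
        break

print(f"stopped after {sweep + 1} sweeps (delta = {delta:.1e})")
for i in nbrs:
    b = phi[i] * np.prod([msgs[(k, i)] for k in nbrs[i]], axis=0)
    print(f"belief P(x{i}):", (b / b.sum()).round(3))
```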

3. What are the applications of Recursive Belief Propagation?

Recursive Belief Propagation has applications in various fields such as computer vision, natural language processing, and bioinformatics. It is used for tasks such as image recognition, language translation, and protein structure prediction.

4. What are the benefits of using Recursive Belief Propagation?

Recursive Belief Propagation offers several benefits, including efficient computation of marginal probabilities in large graphical models, handling of missing data, and the ability to incorporate prior knowledge into the model. It also allows for parallel processing, making it suitable for large datasets.

5. What are the limitations of Recursive Belief Propagation?

Recursive Belief Propagation may suffer from convergence issues in certain situations, such as when the graph contains cycles or when the model is highly non-linear. It also requires careful tuning of parameters and may not perform well with sparse or noisy data. Additionally, it may not be suitable for models with a large number of variables.
