# I Possibility theory

1. Feb 8, 2017

### Auto-Didact

Anyone familiar with possibility theory and possibilistic analysis? I came across it during my own research on expert human reasoning/decision making.

Here is a brief description of possibility theory from a recent article behind a paywall.
As I see it, possibility theory offers a novel way to deal with chances which have some particular form of 'vagueness' to them. This makes it an alternative to probability theory, the standard canonical mathematical theory of chances, which focuses on randomness instead of vagueness.

I'm not entirely sure whether the two theories are mutually exclusive, or whether they are in some sense just different ways of looking at the same thing, e.g. like viewing classical mechanics from a Newtonian, Hamiltonian, or Lagrangian perspective.

In any case, possibility theory seems to be a powerful and, more importantly, intuitive tool: it appears considerably simpler to learn than probability theory, and it seems to model human reasoning more closely and naturally than probability theory does.

2. Feb 8, 2017

### Staff: Mentor

It would help if you provided a peer reviewed reference for us to evaluate the quoted piece.

3. Feb 8, 2017

4. Feb 9, 2017

### Staff: Mentor

Wikipedia has a summary of the theory:

https://en.wikipedia.org/wiki/Possibility_theory

One thing, though: it doesn't seem to have gained much traction in the math community, as the Zadeh work is from 1978 and Dubois's last paper is from 2006.

5. Feb 9, 2017

### Staff: Mentor

Maybe @Demystifier knows something about it.

6. Feb 10, 2017

### Demystifier

Unfortunately, I don't.

7. Feb 10, 2017

### Staff: Mentor

8. Feb 13, 2017

### Auto-Didact

Last edited: Feb 14, 2017
9. Feb 19, 2017

### stevendaryl

Staff Emeritus
Years ago, I did computer security research using such a theory. The disadvantage is that there is no quantitative aspect; among the possibilities, there is no notion of one being more likely than another.

What my company used it for was a non-quantitative definition of information flow.

The quantitative definition of information flow is that there is a flow of information from Alice to Bob if the probability distribution on possible outcomes visible to Bob is affected by Alice's actions. Shannon's definition of information can then be used to quantify how many bits per second can be transmitted from Alice to Bob.

In lots of cases, though, there is no reliable estimate of the probabilities for various results. We were looking at the interaction of concurrent programs. It's easy enough (at least conceptually) to enumerate all possible execution sequences, but until the program is actually running on a real machine, there is no reliable way to give probabilities to the various possibilities. So we used a "possibilistic" definition of information flow: There is a potential information flow from Alice to Bob if Alice's actions affect the set of possible sequences of events visible to Bob.
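The possibilistic definition above can be sketched in a few lines of Python. This is a toy model with hypothetical event names, not the actual analysis, which enumerated execution sequences of real concurrent programs:

```python
def bob_views(alice_action):
    """Toy concurrent system: Bob polls a shared flag twice.
    If Alice sets the flag at some point during the run, the scheduler
    may let Bob read 0 then 1, 1 then 1, or (if Bob runs first) 0 then 0.
    If Alice does nothing, the flag stays 0 and Bob can only read 0, 0."""
    if alice_action == "set_flag":
        return {("0", "1"), ("1", "1"), ("0", "0")}
    else:
        return {("0", "0")}

def possibilistic_flow(action_a, action_b):
    """Potential information flow from Alice to Bob iff choosing between
    the two actions changes the *set* of sequences Bob can observe."""
    return bob_views(action_a) != bob_views(action_b)

print(possibilistic_flow("set_flag", "idle"))  # True: a covert channel exists
print(possibilistic_flow("idle", "idle"))      # False: no flow
```

Note that the definition only compares sets of possible observations; it says nothing about how likely each observation is, which is exactly the limitation described above.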

10. Feb 20, 2017

### chiro

Hey Auto-didact.

You should think about the constraints you have and use that to determine what is possible within those constraints.

This is one of the "basic" ideas of mathematics.

11. Feb 21, 2017

### Auto-Didact

The non-quantitative aspect you are describing seems in line with the qualitative version of possibility theory, which basically provides a basis for decision theory, as described by Dubois and Prade.

There are a few quantitative versions of possibility theory, e.g. a recent one by Sadegh-Zadeh (see my earlier post), but these only seem usable given some inherent, linguistic-like interpretative vagueness in the data, as occurs in human communication and reasoning.

I'm wondering, therefore, how vague, in the above sense, the communications or interactions between programs are. If they aren't inherently very vague, i.e. if their vagueness is artificial, then I don't think the quantitative version of possibility theory is necessary, let alone preferable or rightly applicable.

I'm not sure which of my posts this refers to. If you're referring to the research I alluded to, you have probably misunderstood me: I'm claiming that possibility theory seems to be (part of) a mathematical model of how humans naturally reason about possibilities and handle them, as opposed to the 'artificial', learned human reasoning embodied in applied probability theory.

This seems to be true of decision making in certain expert domains, based on a fairly good match with experimental psychological data: there, decision making based solely on standard probabilistic methods reliably leads to wrong decisions, so such methods have already been ruled out experimentally as candidate theories.

This doesn't mean that probability theory is false, it merely implies that it is a specific mathematical idealization, possibly even one of many, based on certain axioms; the same is true of possibility theory. Of course, mathematical probability theory was historically developed much earlier than mathematical possibility theory, causing it to be a much richer theory at the moment.

This historical precedence should, however, have no bearing on the scientific discussion of the ontology of chance, randomness, and related concepts, nor grant probability theory a possibly unwarranted sole primacy over these concepts in domains such as psychology, artificial intelligence, or even physics, as is often the case today.

12. Feb 21, 2017

### stevendaryl

Staff Emeritus
There is nothing vague about it. You can say precisely (well, given a model of the system as concurrent processes) what Bob learns by observing a sequence of outputs. It's just that the possibilistic semantics doesn't distinguish between something random (once in a million years) and something predictable (99 times out of 100). So by possibilistic reasoning, you can eliminate all information flows by just adding randomness to the system. Of course, that doesn't actually eliminate information flows: If you are on a noisy phone call, you can still communicate, just more slowly (that's what Shannon's theory of information tried to quantify).

13. Feb 21, 2017

### chiro

Usually you assume that you have constraints on certain observations and then you try and reconcile these constraints for consistency to see what is implied.

The more information you have, the better the reliability of the estimate you use on the data.

14. Feb 24, 2017

### MarneMath

Chiro, I too am rather confused about what you are actually trying to convey. "Possibility theory" is basically an extension/application of fuzzy logic that has seen only sporadic development. It seems like you are trying to answer how to determine what's possible while being unaware of this rather obscure field. The odds that the OP finds an expert in this field on this forum are rather low, since I figure there are probably only about 10 experts in the world. (I have no facts to back that up, but seriously, possibility theory is obscure.)

15. Feb 25, 2017

### TeethWhitener

In the interest of rescuing a potentially interesting thread from devolving into personal attacks...

How complicated is it to add modal operators to fuzzy logic? It sounds like this is roughly what Zadeh is doing. (I don't have access to the paywalled papers right now.) Alternatively, how complicated is it to turn the (Boolean) accessibility relation in Kripke semantics into a continuous "membership" function? (That could actually be an interesting question.) You might even be able to treat accessibility and truth values separately as continuous, though you might have to tweak the formalism of the model a bit.

I find it a little odd that none of the info I've found on a field called "possibility theory" even mentions in passing modal logic.

16. Feb 25, 2017

### TeethWhitener

From what I can tell from the Wikipedia article, they're not identical. Possibility theory is basically a variation on the notion of a probability measure in which the additivity of disjoint sets:
$$\text{prob}(U \cup V) = \text{prob}(U) + \text{prob}(V)$$
is replaced by a maximum principle:
$$\text{poss}(U \cup V) = \max(\text{poss}(U), \text{poss}(V))$$
The formalism allows the definition of necessity and possibility operators as duals, the same as in standard modal logic. Dempster-Shafer theory generalizes this further by considering any monotonically increasing function over disjoint sets as a valid "probability" measure.
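The maximum principle and the necessity/possibility duality are easy to illustrate in a short Python sketch (the distribution values here are made up for illustration):

```python
# Toy possibility distribution over outcomes; by convention its max is 1.
pi = {"rain": 1.0, "snow": 0.5, "sun": 0.25}
universe = set(pi)

def poss(event, dist):
    """Possibility of an event (a set of outcomes): the max over its members."""
    return max((dist[w] for w in event), default=0.0)

def nec(event, dist, omega):
    """Necessity, the dual operator: 1 minus the possibility of the complement."""
    return 1.0 - poss(omega - event, dist)

U, V = {"rain"}, {"snow"}
# Maxitivity replaces additivity:
assert poss(U | V, pi) == max(poss(U, pi), poss(V, pi))
print(nec({"rain", "snow"}, pi, universe))  # 1 - poss({"sun"}) = 0.75
```

Note that, unlike a probability, a possibility distribution need not sum to 1; only its maximum is constrained.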

I don't know anything about alethic modal logic, but standard modal logic treats much of this qualitatively. Maybe the innovation in possibility theory is a set-theoretic notion of the accessibility relation in possible worlds semantics? But I wonder if you couldn't achieve the same end by defining a possibility measure as the fraction of possible worlds in which a proposition is true under a given accessibility relation. The advantage of modal logic is that your set of axioms determines your accessibility relation and vice versa, but the disadvantage is that the possible worlds semantics is a lot of extra baggage that might not strictly be necessary if you're not worried about broad properties of the overall model (e.g., completeness).

17. Feb 25, 2017

### Auto-Didact

You're right that the two theories are definitely not identical. A high degree of possibility does not imply a high degree of probability, or vice versa.

Alethic modal logic refers to modalities of truth, classically those of necessity and possibility; standard modal logic, I believe, can be much richer in its repertoire of modal operators than just these two. The key point is that classical modal logic functions as a logic, involving premises, modus ponens/modus tollens, proofs, etc., i.e. it is a way of going from premises to sound/valid conclusions. (Please correct me if I'm wrong on this!)

Possibility theory, on the other hand, functions quite analogously to probability theory: there are fuzzy sets signifying imprecise, possibly partially overlapping concepts; degrees of possibility lying on the unit interval, quantifying set membership for each element of a fuzzy set; and possibility distribution functions in which the various possibilities need not sum to 1. To draw conclusions under uncertainty and so make decisions, a max-min strategy (maximizing the minimum joint possibility) is employed over the decisions under consideration, leading to a best decision, which is after all still a guess at best.
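The max-min decision rule just described can be sketched as follows (all decision names and degrees of possibility are hypothetical illustration values):

```python
# Pessimistic ("maximin") possibilistic decision rule: pick the decision
# whose worst-case (minimum) degree of possibility of a good outcome is largest.
def maximin_decision(decisions):
    """decisions maps each option to {outcome: degree of possibility on [0, 1]}."""
    return max(decisions, key=lambda d: min(decisions[d].values()))

# Hypothetical degrees of possibility that each outcome is satisfactory:
decisions = {
    "treat": {"mild": 0.9, "severe": 0.6},
    "wait":  {"mild": 1.0, "severe": 0.2},
}
print(maximin_decision(decisions))  # "treat": its worst case (0.6) beats 0.2
```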

18. Feb 27, 2017

### Auto-Didact

During my further digging for applications of possibility theory, I just came across this book: Fuzzy Mathematics in Economics and Engineering.

And lo and behold, one of the first references I can find in it is to a book by @A. Neumaier on interval arithmetic, which, when extended, happens to be one of the ways of doing fuzzy arithmetic. Perhaps I'm in luck today!
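For the curious, the basic operations of interval arithmetic (which fuzzy arithmetic extends, e.g. by stacking intervals as alpha-cuts of a fuzzy number) are easy to sketch; this is a minimal illustration, not Neumaier's formulation:

```python
class Interval:
    """A closed interval [lo, hi] of reals."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sums of endpoints bound all sums of members.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # With mixed signs, any endpoint product can be the extreme.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

a, b = Interval(1, 2), Interval(-3, 4)
print(a + b)  # [-2, 6]
print(a * b)  # [-6, 8]
```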

19. Mar 1, 2017

### Staff: Mentor

Several heated posts have been removed. I am reopening the thread, but please keep things exceptionally civil from this point on.

20. Mar 3, 2017

### sysprog

Multi-valued logic (>2 values) allows one or more values along the interval between 0 and 1. It is convenient to represent greater likelihood than neutrality with numbers greater than .5, lesser likelihood with numbers less than .5, and indifference with .5 itself. For some purposes, comparisons of continua are useful. In familiar classical logic we typically look only at 0 and 1, but in electronics it's not atypical to work with 5 values, which may be called low, low-mid, mid, mid-high, and high. Assigning those values discrete numbers, e.g. 0, .25, .5, .75, 1 (or 0, 1/4, 1/2, 3/4, 1), or obtaining sets of more exact measurements within the ranges, enables using the numbers as weights in calculations. Informally we might name the values false, improbable, neutral, probable, and true. The intervals and detents can be defined in various scales and sizes depending on the application.
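A minimal sketch of such a 5-valued scale, assuming the usual min/max/complement connectives from multi-valued (fuzzy-style) logic; the level names follow the informal ones above:

```python
# Five truth levels on [0, 1], named informally.
LEVELS = {"false": 0.0, "improbable": 0.25, "neutral": 0.5,
          "probable": 0.75, "true": 1.0}

# Standard many-valued connectives: AND = min, OR = max, NOT = 1 - x.
def v_and(a, b): return min(a, b)
def v_or(a, b):  return max(a, b)
def v_not(a):    return 1.0 - a

print(v_and(LEVELS["probable"], LEVELS["neutral"]))  # 0.5
print(v_or(LEVELS["improbable"], LEVELS["true"]))    # 1.0
print(v_not(LEVELS["improbable"]))                   # 0.75
```

On this scale the classical values 0 and 1 behave exactly as in two-valued logic, so classical logic falls out as the special case.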
