Is Marilyn Vos Savant wrong on this probability question?

  • Thread starter: CantorSet
  • Tags: Probability

Summary
The discussion centers on a probability question regarding the likelihood of two specific sequences resulting from rolling a die 20 times. Marilyn Vos Savant asserts that while both sequences are theoretically equally likely, a sequence of mixed numbers is more probable in practice due to the concept of entropy, which measures randomness and information content. The first sequence, consisting entirely of 1's, has low entropy, indicating less randomness, while the second sequence has higher entropy, suggesting a more typical outcome for random rolls. Participants debate the relevance of entropy in determining the likelihood of the sequences, with some expressing confusion about its application in this context. Ultimately, the conversation highlights the distinction between theoretical probability and practical expectations based on randomness.
  • #121
andrewr said:
I agree with you; Marilyn's responses appear to be characteristically confusing; I find myself wondering if she is purposely trying to trip up certain intelligent people...

Of course, that wonder is just an automatic reaction of mine and not a considered opinion. Thinking about her response a bit more, I notice that Marilyn's answer evokes thoughts (in my eyes) of normal but confusing "women's" conversations.

I don't think it's uncommon for men, like myself, to infer different priorities of meaning than the women actually involved in such conversations do. (I note Loren hasn't yet responded again, and I am only noticing one other female respondent entering the melee... GOOD for her!)

I do agree that Marilyn has a command of the English language which makes her somewhat liable to judgment; e.g., the IQ tests she took were heavily biased by male writers at the time...

However, I know that judging her wrong based on a manly interpretation (solely) is likely an injustice (which is why I don't personally care to do it).

Marilyn might be careless, tired, annoyed with a leading question, or something along those lines. However, if even the original auditor (Loren?) really did not understand Marilyn's nuances -- then Marilyn has made a true "faux pas" where she ought to know better *intuitively*.

Well, the cynic in me believes that controversy, however artificial, is good publicity for her site, and for her, but I don't have any real/hard evidence to support the belief that she's purposefully being ambiguous.
 
  • #122
I apologize for being one of the people who, by their error in solving her probability problem, helped give prominence to Ms. Vos Savant.

Unfortunately she has parlayed this incident into a notoriety that is mostly undeserved, at least in regard to mathematics, of which she is largely ignorant.

Being smart, even really, really smart, does not translate into understanding an old and complicated subject.

Here is a review, by a friend of mine, of one of her almost worthless books on a mathematical topic of some interest.

http://www.dms.umontreal.ca/~andrew/PDF/VS.pdf
 
  • #123
No problem; I have fallen through plenty of mathematical potholes myself.
 
  • #124
chiro said:
Hurkyl, do you know what likelihood techniques and parameter estimation are all about?
Yes, actually. They don't apply to the question we're considering.

If we had a model of how the person was choosing the fake results, we could take this sample (and ideally many more) and work out a posterior distribution on the parameters of the model we don't know.

But that's not what we're doing. We're faced with two alternatives A and B, and we need to decide whether P(A is real) > P(B is real), conditioned on the fact that we are faced with {A,B}. That is, we need to determine which of:
  • P(B would be generated as fake, given that A was rolled)
  • P(A would be generated as fake, given that B was rolled)
is larger. If we had a model of how the fake is generated, or some other way of estimating these probabilities, we could apply that to infer which sequence is more likely to be real. If we don't have such a thing, then we have to come up with one.
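To make that concrete, here is a minimal sketch of the comparison, assuming we are handed some model of how the faker picks sequences. The `faker_prob` function and the numbers inside it are purely hypothetical placeholders, not a model anyone in this thread has actually proposed:

```python
from fractions import Fraction

def faker_prob(seq):
    """Hypothetical model of the opponent: the probability that they would
    submit this particular 20-long sequence as the fake.  The numbers here
    are invented purely for illustration."""
    if len(set(seq)) == 1:              # a constant run like all 1's is a showy bluff
        return Fraction(1, 10)
    return Fraction(1, 6) ** 20         # otherwise, say, no better than a blind guess

def more_likely_real(a, b):
    """Exactly one of {a, b} is real.  Under a fair die, P(rolling a) equals
    P(rolling b), so those factors cancel and we only compare how likely the
    *other* sequence is to have been offered as the fake."""
    p_a_real = faker_prob(b)            # "a is real" means b was generated as the fake
    p_b_real = faker_prob(a)            # "b is real" means a was generated as the fake
    return "A" if p_a_real > p_b_real else "B"

print(more_likely_real("1" * 20, "66234441536125563152"))   # -> "B"
```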


You don't need to bring in the Central Limit Theorem or anything else:
My CLT example is one of someone using a statistical tool wrongly, and deriving nonsensical results.
 
  • #125
Can't believe I read the entire thread.
 
  • #126
Hurkyl said:
Yes, actually. They don't apply to the question we're considering.

This is where I disagree.

The point of the last example is that we don't know the process and therefore don't know the distribution. You can't just calculate probabilities for an unknown process.

When you have this example you need to use an estimator to estimate the parameters and to do this you need to use the data.

Again you can't just calculate probabilities because you don't actually know them: you need to make an inference of what they could be based on the data that has been sampled.

We assume that the process has six probabilities that add up to 1 and that the trials are completely independent, but beyond that we don't know anything. We can only infer what the actual characteristics of the process are by looking at the data and making some kind of inference -- not the other way around.
 
  • #127
Loren Booda said:
And good for him -- Loren. One less woman.

I must have some kind of dyslexia in trying to respond to posts.

I don't always agree with Marilyn, and this puzzle's answer I also find non-intuitive -- but similar to the Monty Hall paradox.

:blushing:

All the Lorens I know are women -- oh well! I wonder how many Lorens Marilyn knows...

Marilyn's commentary and the Monty Hall problem are, as far as I know, identical.
There was an extension, if I remember correctly, to 4-, 5-, and 6-shell games -- but that's really trivial in any event... It just shifts the probability down a notch for each shell.
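For what it's worth, here is a quick simulation sketch of that n-shell extension under one particular set of rules I am assuming (the host removes a single empty, non-chosen shell and you switch to a random remaining shell); other variants give different numbers:

```python
import random

def switch_win_rate(n_shells, trials=100_000):
    """Estimate P(win) when you pick a shell, the host removes one empty,
    non-chosen shell, and you then switch to a random remaining shell."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(n_shells)
        pick = random.randrange(n_shells)
        # the host removes an empty shell that is neither the prize nor your pick
        removed = random.choice([s for s in range(n_shells)
                                 if s != prize and s != pick])
        # switch to a uniformly random shell other than your pick and the removed one
        new_pick = random.choice([s for s in range(n_shells)
                                  if s not in (pick, removed)])
        wins += (new_pick == prize)
    return wins / trials

for n in (3, 4, 5, 6):
    print(n, round(switch_win_rate(n), 3))
# roughly 0.667, 0.375, 0.267, 0.208 -- still better than sticking (1/n),
# but the advantage of switching shrinks as shells are added
```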
 
  • #128
chiro said:
This is where I disagree.

The point of the last example is that we don't know the process and therefore don't know the distribution. You can't just calculate probabilities for an unknown process.
Right. If you don't have any priors, you can't do statistical inference. You can gather and analyze data, and tabulate whatever evidence you can extract from the data, but you cannot use that evidence to infer whether some hypothesis is more likely than some other hypothesis.

You have to have prior probabilities if you want to do statistical inference -- even if it's just a blind assumption of uniform priors of some sort.
When you have this example you need to use an estimator to estimate the parameters and to do this you need to use the data.
You said we don't know the process -- we don't have any parameters to estimate! :-p

If you have a prior assumption about the data generation -- e.g. that it's generated by some parametrized process and you have flat priors on the parameters -- then we could try to estimate parameters. We could then take the parameter with the highest posterior probability and see what distribution that produces on the thing we're actually interested in...

but then we would be doing things wrong. When you string together ideas in an ad-hoc fashion, rather than in a way aimed at solving the problem you're actually trying to solve, you get poor results.

If we remember what we're actually trying to solve, we would know to factor in information from all parameters, and could do so directly without having to deal with parameter estimation as an intermediary:
$$P(A \mid O) \propto \sum_\theta P(A \wedge O \mid \theta)\, P(\theta)$$
Where A is the hidden value we're trying to predict, O is the observation we saw, and \theta is the parameter. The most likely value of A is the one that maximizes the sum on the right hand side.

(the constant of proportionality is the same for all A)

Incidentally, in the special case that, for each \theta, A and O are independent, this simplifies to
$$P(A \mid O) = \sum_\theta P(A \mid \theta)\, P(\theta \mid O)$$
(equality, this time) One could interpret this as saying, in this special case, that we can get the probability of A given our observation by first using O to get posterior probabilities for \theta, and then remembering to incorporate information from all \theta, weighted appropriately.
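As a toy numerical illustration of the marginalization above (my own made-up numbers, not the die problem from this thread): let \theta range over a fair die and one loaded die with a flat prior, let O be "three 1's in three rolls", and let A be "the next roll is a 1".

```python
# Toy illustration of  P(A | O) ∝ Σ_θ P(A ∧ O | θ) P(θ)  with invented numbers.
# θ ∈ {fair, loaded}; flat prior; O = "three 1's in three rolls"; A = "next roll is a 1".
thetas = {"fair": 1 / 6, "loaded": 1 / 2}       # P(rolling a 1) under each θ
prior = {"fair": 0.5, "loaded": 0.5}

def p_O(p1):                 # P(O | θ): three independent 1's
    return p1 ** 3

unnorm = {}
for A in (True, False):      # A: is the next roll a 1?
    unnorm[A] = sum((p1 if A else 1 - p1) * p_O(p1) * prior[t]
                    for t, p1 in thetas.items())

total = sum(unnorm.values())
print(unnorm[True] / total)  # ≈ 0.49, far above the naive 1/6 you would get by
                             # ignoring O; the loaded-die term dominates the sum
```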
 
  • #129
Hurkyl said:
You said we don't know the process -- we don't have any parameters to estimate! :-p

We know that there are six probabilities and that each trial is assumed to be independent. We have a model, but we don't have the distribution: there is a difference.

It's not a fair characterization to say what you said: we know the probability model for a coin flip and we use the data to get a good statistical estimate for P(Heads) and P(Tails) by using an appropriate procedure.

You can't just say things like that.

The thing is that typically we assume independence for each trial, which ends up simplifying the general case very nicely. By assuming each trial is completely independent we don't have to use the complex general procedures that we would otherwise have to use. The assumption P(A and B) = P(A)P(B) and P(A|B) = P(A) for all appropriate events A and B makes it a lot easier.

We know what the model is, we just don't know its parameters and the point of the exercise is to estimate them.

Saying that we don't have any parameters to estimate is just really ignorant.
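For the coin example a couple of paragraphs up, the "appropriate procedure" is essentially just this (a minimal sketch with a made-up data string):

```python
# Minimal sketch: the model (independent flips, one unknown P(Heads)) is fixed,
# and the data are used only to estimate that parameter.  The data are made up.
flips = "HHTHTTHHHT"

n = len(flips)
p_hat = flips.count("H") / n                 # maximum-likelihood estimate of P(Heads)

se = (p_hat * (1 - p_hat) / n) ** 0.5        # rough standard error (normal approximation)
print(p_hat, (p_hat - 1.96 * se, p_hat + 1.96 * se))   # 0.6, with a wide 95% interval
```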
 
  • #130
chiro said:
You can't just say things like that.
Then why did you? You can't complain about a proper analysis of the problem because "we don't know the process" and then turn right around and justify your sloppy approach by making very strong assertions about the process.
 
  • #131
And your analysis doesn't even look like the problem we were considering anyway. Did you start considering a very different problem?

For reference, the problem was essentially:

We are given two 20-long sequences of numbers. One of them is "real", generated by rolling a fair die. One of them is "fake", selected by our opponent. Our goal is to guess which sequence is real.​
 
  • #132
Hurkyl said:
Then why did you? You can't complain about a proper analysis of the problem because "we don't know the process" and then turn right around and justify your sloppy approach by making very strong assertions about the process.

Well, if you want the absolutely explicit description: we know the model (or we assume one), but we don't know the parameters. Is that OK?

Our model is that every roll has 6 possibilities. Furthermore we assume that every roll is independent. This is a multinomial distribution with 6 choices per trial.

This is the model we assume for a die, whether it is balanced (all probabilities per trial are equal) or not (the probabilities per trial are not all equal).

Now a balanced die is assumed to have all probabilities equal per trial (1/6). An unbalanced one is not.

Marilyn said in her statement that if someone rolled all 1's out of her view and then told her the result, she would not believe it came from a fair die.

Here, "fair" translates to all probabilities per trial (or throw) being the same: 1/6.

Now if we talk about a die, whatever the probabilities are, if we were going to try and estimate the parameters of the die, we would for all practical purposes assume that each throw is independent and has the same distribution.

We don't know what the distribution is, but we have for practical purposes added enough constraints to be able to figure them out.

We know that there are only six possible choices per throw: no matter what can happen this has to be true. We assume independence of each throw or trial. This simplifies all the conditional statements about the model and makes it very manageable.

Now we get the data and we estimate the parameters based on this model. Not surprisingly, if we did a likelihood estimation procedure for the parameters given this data, we would conclude that, under the constraints of the model, the process that generated the data (i.e. the die) was not a balanced one (i.e. the probabilities are not all the same).

The assertions of the process are made on the grounds that each trial/throw is independent. The six possibilities per trial are definite since there really are only six possibilities per trial.
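Under those constraints the estimation step is just counting. Here is a minimal sketch, using an all-1's string and, for contrast, one of the mixed strings quoted elsewhere in this thread:

```python
from collections import Counter

def multinomial_mle(rolls):
    """Maximum-likelihood estimates of the six face probabilities under the
    multinomial model described above (independent trials, six outcomes)."""
    counts = Counter(rolls)
    n = len(rolls)
    return {face: counts.get(face, 0) / n for face in "123456"}

print(multinomial_mle("1" * 20))
# {'1': 1.0, '2': 0.0, ...}: the fitted die is as far from balanced as possible
print(multinomial_mle("66234441536125563152"))
# every estimate lies close to 1/6, consistent with a balanced die
```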

Would you use another set of constraints for this model? If so why?
 
  • #133
Hurkyl said:
And your analysis doesn't even look like the problem we were considering anyways. Did you start considering a very different problem?

For reference, the problem was essentially:

We are given two 20-long sequences of numbers. One of them is "real", generated by rolling a fair die. One of them is "fake", selected by our opponent. Our goal is to guess which sequence is real.​

It does! I'll post the specific problem that I am referring to. Here is a word-for-word quote from the original post:

In theory, the results are equally likely. Both specify the number that must appear each time the die is rolled. (For example, the 10th number in the first series must be a 1. The 10th number in the second series must be a 3.) Each number—1 through 6—has the same chance of landing faceup.

But let’s say you tossed a die out of my view and then said that the results were one of the above. Which series is more likely to be the one you threw? Because the roll has already occurred, the answer is (b). It’s far more likely that the roll produced a mixed bunch of numbers than a series of 1’s.

I'm referring to the bolded part. Marilyn is given data for a process which we assume has the properties of the die (hence my assumptions above), and she has to make up her mind whether the die is fair (all probabilities = 1/6) or not fair (they don't all equal 1/6).

Now again we can't assume that all probabilities = 1/6. We are given the constraints for a probability model (6 events per trial, all trials independent) and we have to take the data and estimate intervals for the parameters (i.e. 5 different probabilities since the 6th is the complement).

We can't just assume the data came from a fair die: we have to get the data and use that to estimate the parameters of a multinomial distribution.

The assumptions that lead to the constraints are based on some well accepted properties for these kinds of processes: coin flips, dice rolls and so on. I didn't just make this stuff up: it's based on independence of events and many people agree (including statisticians) that while it is not a perfect set of constraints, it suits its purpose rather well.

Now her terminology is not that accurate with regard to a 'mixed bunch of numbers', but you could formulate that mathematically and show that her argument holds a lot of water.

So again to conclude: Marilyn gets the data for a dice roll with each digit being 1,2,3,4,5 or 6. She gets a big string of 1's. She has to decide whether this data came from a fair die (all probabilities = 1/6) or a not so fair die (complement of this). Using some accepted properties of things like dice rolls (independence), she has a multinomial model for the data and needs to estimate its parameters. With all 1's she unsurprisingly rejects the hypothesis that the process that produced the data was something that would be defined as a fair die, and from that says what she said.
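For a sense of scale (a back-of-the-envelope sketch, not a calculation Marilyn herself gives): under the fair-die hypothesis the probability of any one particular 20-roll sequence, including twenty 1's, is (1/6)^20, while the multinomial fitted to twenty 1's assigns that same sequence probability 1.

```python
from math import log

n = 20
p_fair = (1 / 6) ** n     # likelihood of twenty 1's under "all probabilities = 1/6"
p_fitted = 1.0            # likelihood under the fitted multinomial (all mass on face 1)

print(p_fair)                             # ≈ 2.7e-16
print(2 * (log(p_fitted) - log(p_fair)))  # ≈ 71.7: the usual log-likelihood-ratio
                                          # statistic, overwhelming evidence against
                                          # the balanced-die hypothesis
```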
 
  • #134
Hurkyl said:
Right. If you don't have any priors, you can't do statistical inference. You can gather and analyze data, and tabulate whatever evidence you can extract from the data, but you cannot use that evidence to infer whether some hypothesis is more likely than some other hypothesis.

You have to have prior probabilities if you want to do statistical inference -- even if it's just a blind assumption of uniform priors of some sort.

If you want to go into the Bayesian way of thinking, then assume the prior is flat. By doing this we don't add any information that would otherwise give us an advantage in estimating the parameters of the multinomial distribution.

If you have a prior assumption about the data generation -- e.g. that it's generated by some parametrized process and you have flat priors on the parameters -- then we could try to estimate parameters. We could then take the parameter with the highest posterior probability and see what distribution that produces on the thing we're actually interested in...

but then we would be doing things wrong. When you string together ideas in an ad-hoc fashion, rather than in a way aimed at solving the problem you're actually trying to solve, you get poor results.

If we remember what we're actually trying to solve, we would know to factor in information from all parameters, and could do so directly without having to deal with parameter estimation as an intermediary:
$$P(A \mid O) \propto \sum_\theta P(A \wedge O \mid \theta)\, P(\theta)$$
Where A is the hidden value we're trying to predict, O is the observation we saw, and \theta is the parameter. The most likely value of A is the one that maximizes the sum on the right hand side.

(the constant of proportionality is the same for all A)

Incidentally, in the special case that, for each \theta, A and O are independent, this simplifies to
$$P(A \mid O) = \sum_\theta P(A \mid \theta)\, P(\theta \mid O)$$
(equality, this time) One could interpret this as saying, in this special case, that we can get the probability of A given our observation by first using O to get posterior probabilities for \theta, and then remembering to incorporate information from all \theta, weighted appropriately.

I'm pretty sure I've addressed these issues indirectly but I'll comment briefly on this reply.

If we use the independence/multinomial assumption, a lot of this can be simplified dramatically. Again, the multinomial distribution is used for a die because it is a lot more manageable than attempting to factor in all of the conditional behaviour that, while it may happen, is assumed to be not as important for the descriptive characteristics of the process. I'm not saying these things couldn't occur; it's just that the model is accepted to be a decent enough approximation, and this makes life easier.

I am aware of the differences between the Bayesian and classical approaches with respect to the effects of priors, and in this specific case (like when you have only one value appearing in your sample) you can get some weird things when you take the classical approach, but that is getting sidetracked.

If you want to take into account conditional statements and you can't, for one reason or another, assume independence like you do in binomial or multinomial distributions, then your likelihood is going to go nuts in comparison. All I have done is fall back to these models because they are a well-accepted constraint to use, intuitive to understand, and simple to make use of, that's all.
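On the flat-prior point above, here is a small sketch of what that looks like for the multinomial (my own illustration, not a calculation from earlier in the thread): with a uniform Dirichlet(1, ..., 1) prior over the six face probabilities, twenty observed 1's pull the posterior mean of P(1) from 1/6 up to 21/26.

```python
# Dirichlet-multinomial update with a flat prior, as a toy illustration of
# "assume the prior is flat" for the six face probabilities.
alpha = [1] * 6                  # Dirichlet(1, ..., 1): the flat prior
counts = [20, 0, 0, 0, 0, 0]     # twenty 1's observed, nothing else

posterior = [a + c for a, c in zip(alpha, counts)]
total = sum(posterior)
print([round(a / total, 4) for a in posterior])
# [0.8077, 0.0385, ...]: the posterior mean of P(1) is 21/26, nowhere near 1/6
```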
 
  • #135
Marilyn said:
But let’s say you tossed a die out of my view and then said that the results were one of the above. Which series is more likely to be the one you threw? Because the roll has already occurred, the answer is (b). It’s far more likely that the roll produced a mixed bunch of numbers than a series of 1’s.
chiro said:
I'm referring to the bolded part. Marilyn is given data for a process which we assume has the properties of the die (hence my assumptions above) and she has to make up her mind whether the die is fair (all probabilities = 1/6) or not fair (they don't all equal 1/6).
...
So again to conclude: Marilyn gets the data for a dice roll with each digit being 1,2,3,4,5 or 6. She gets a big string of 1's. She has to decide whether this data came from a fair die (all probabilities = 1/6) or a not so fair die (complement of this).
Did you notice you've significantly changed the problem? I get the impression you've fixated on one method of approaching the problem so strongly that you're having trouble acknowledging any other aspects of the situation.

I need you to understand the following five problems are different problems:
  1. Here are two sequences, one real, one fake. The real one is generated by a fair die roll. The fake one is generated by the person asking the question. Which one is real?
  2. Here are two sequences. Given the hypothesis that one of them was generated by rolling a fair die, which one is more likely to be the one rolled?
  3. Here are two sequences. Which one is more likely to be generated by rolling a fair die?
  4. Here are two histograms. Which one is more likely to be generated by rolling a fair die?
  5. Here is a sequence. Was it generated by a fair die roll?
  6. Here is a sequence generated by die roll. Is the die fair?
(I fibbed slightly -- problems #2 and #3 are pretty much the same problem)

The original problem was problem #2. Marilyn modified the problem to turn it into problem #1, and was criticized for confusing problem #1 with problem #4.

You, I think, are trying to solve problem #4 too, but you're solving it by pretending it is two instances of problem #5, but the work you're describing is for solving problem #6.

That last thing is one of the things I'm criticizing. People make very serious blunders by pretending like that. There's one situation I recall vividly: there was a gaming community that was trying to test whether some character attribute had any effect on the proportion of success. They gathered data that supported the hypothesis with well over 99% confidence... but they spent years believing there was no effect because some vocal analysts made a substitution similar to what you did:
We want to test if proportion 1 is bigger than proportion 2, right? Well, let's estimate the two proportions. (Compute two confidence intervals) The confidence intervals overlap, so the data isn't significant.​
Whereas if they had done a test that was actually designed to answer the question at hand (a difference between proportions test), they would have seen the result as very significant.

Problem #5 is of a typical philosophically interesting type, because we can't talk about the probability of the answer. We can't even give an answer of the sort "yes is more probable than no". We can, however, choose a strategy to answer the question such that if the true answer is "yes", then we will be correct over, e.g., 95% of the time.

But all of that aside, the main thing you're missing about problem #1 (and problem #6) is what makes it very different from problems #2 through #5: we're not trying to answer questions about a single "process". We have two different processes, and we're trying to decide which process produced the output we have. True, it can be difficult to get precise or accurate information about one of the processes, but that doesn't change the form of the problem.

(#6 and #1 are different because #6 has a single output and we're trying to guess which among many processes generated that output, and #1 has two processes with two outputs, and we're trying to say which one goes with which)

____________________________________

All that aside, if we tried to use your strategy to solve problem #1, you will have a low probability of success against many people: it is a well-known tendency for humans to generate fake data that is *too* uniform. For example, 66234441536125563152 is 1.5 standard deviations too uniform by the test I did. So, when you take the real and fake data, decide what bias is most likely on the die, and compare to fair, you will pick the overly uniform fake data over the randomly generated data most of the time.
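For instance, a chi-square goodness-of-fit statistic gives roughly that figure (a sketch of one such test, not necessarily the exact one):

```python
from collections import Counter
from math import sqrt

seq = "66234441536125563152"
n, k = len(seq), 6
expected = n / k                          # 20/6 per face under a fair die

counts = Counter(seq)
chi2 = sum((counts.get(str(f), 0) - expected) ** 2 / expected for f in range(1, 7))

# With 5 degrees of freedom the statistic has mean 5 and standard deviation sqrt(10)
# under a fair die; unusually SMALL values mean the data are "too uniform".
print(chi2, (chi2 - 5) / sqrt(10))        # ≈ 0.4, ≈ -1.45: about 1.5 standard
                                          # deviations more uniform than typical
```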

Any question of the form <anything> versus 11111111111111111111 is very unlikely to ever come up except against a human opponent who is likely to make that sort of bluff, so your mis-analysis won't cost you much in this case. However, it will cost you big-time by picking the overly-uniform data too much.
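And to make the confidence-interval trap from the gaming example above concrete, here is a sketch with invented counts (not the community's actual data): the two 95% intervals overlap, yet a test built for the difference itself is comfortably significant.

```python
from math import sqrt

# Invented counts, purely to illustrate the trap described above.
x1, n1 = 520, 1000     # successes / trials with the character attribute
x2, n2 = 465, 1000     # successes / trials without it
p1, p2 = x1 / n1, x2 / n2

def ci95(p, n):        # separate 95% confidence interval (the flawed "overlap" approach)
    margin = 1.96 * sqrt(p * (1 - p) / n)
    return (p - margin, p + margin)

print(ci95(p1, n1), ci95(p2, n2))   # roughly (0.489, 0.551) and (0.434, 0.496): overlap

# Two-proportion z-test, aimed at the actual question "is p1 bigger than p2?"
p_pool = (x1 + x2) / (n1 + n2)
z = (p1 - p2) / sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
print(z)                            # ≈ 2.46: significant at the usual 5% level
```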
 
  • #136
I haven't read through the thread. But in short, she is right.

Any one valid string of dice rolls is just as probable as any other.

So what are people talking about for 9+ pages?
 
  • #137
Hurkyl said:
Did you notice you've significantly changed the problem? I get the impression you've fixated on one method of approaching the problem so strongly that you're having trouble acknowledging any other aspects of the situation.

If I did that it was completely intentional: like I said in the quote, I focused on what the quote said literally and I interpreted it to be what I said.

I already acknowledged that the other part of the question which has been addressed is fair: I agree with your stance on probabilities being equal and all the rest of that which has been discussed in depth.

Again, I'm not trying to hide anything: I just looked at the quote and interpreted it to mean what it meant in the way that I described.

I thought I made it clear when I was talking about parameter estimation, but I think that perhaps I should have been clearer. I'll keep that in mind for future conversations.

I need you to understand the following five problems are different problems:
  1. Here are two sequences, one real, one fake. The real one is generated by a fair die roll. The fake one is generated by the person asking the question. Which one is real?
  2. Here are two sequences. Given the hypothesis that one of them was generated by rolling a fair die, which one is more likely to be the one rolled?
  3. Here are two sequences. Which one is more likely to be generated by rolling a fair die?
  4. Here are two histograms. Which one is more likely to be generated by rolling a fair die?
  5. Here is a sequence. Was it generated by a fair die roll?
  6. Here is a sequence generated by die roll. Is the die fair?

For what I was talking about I was only concerned with the problems where a sequence was given. Again I thought I made that very clear. I am, as you have pointed out, addressing the last point in the list.

In terms of a sequence being generated by a non-die process (but one that still has the same probability space), we can't really know this based on Marilyn's circumstance: we have assumed that someone else rolled a die and therefore we construct the constraints we construct. Does that seem like a fair thing to do? If not, why not?

You, I think, are trying to solve problem #4 too, but you're solving it by pretending it is two instances of problem #5, but the work you're describing is for solving problem #6.

I am specifically solving problem 6 yes, but I've outlined my reasoning above.

That last thing is one of the things I'm criticizing. People make very serious blunders by pretending like that. There's one situation I recall vividly: there was a gaming community that was trying to test whether some character attribute had any effect on the proportion of success. They gathered data that supported the hypothesis with well over 99% confidence... but they spent years believing there was no effect because some vocal analysts made a substitution similar to what you did:
We want to test if proportion 1 is bigger than proportion 2, right? Well, let's estimate the two proportions. (Compute two confidence intervals) The confidence intervals overlap, so the data isn't significant.​
Whereas if they had done a test that was actually designed to answer the question at hand (a difference between proportions test), they would have seen the result as very significant.

Yes, I have found that statistics and probability have a habit of getting people to fall into that trap, and even for people that have been doing this for a long time it can still happen. But with respect to the answer, I thought it was clear what I was saying.

Problem #5 is of a typical philosophically interesting type, because we can't talk about the probability of the answer. We can't even give an answer of the sort "yes is more probable than no". We can, however, choose a strategy to answer the question such that if the true answer is "yes", then we will be correct over, e.g., 95% of the time.

I agree with you on this, but again I wasn't focusing on this.

But all of that aside, the main thing you're missing about problem #1 (and problem #6) is what makes it very different from problems #2 through #5: we're not trying to answer questions about a single "process". We have two different processes, and we're trying to decide which process produced the output we have. True, it can be difficult to get precise or accurate information about one of the processes, but that doesn't change the form of the problem.

(#6 and #1 are different because #6 has a single output and we're trying to guess which among many processes generated that output, and #1 has two processes with two outputs, and we're trying to say which one goes with which)

I never argued about that part of the problem. You might want to look at the response I had for those parts of Marilyn's statement. You made a statement about this and I agreed with you: again I'm not focusing on that part and I made it clear before what my thoughts were.

All that aside, if we tried to use your strategy to solve problem #1, you will have a low probability of success against many people: it is a well-known tendency for humans to generate fake data that is *too* uniform. For example, 66234441536125563152 is 1.5 standard deviations too uniform by the test I did. So, when you take the real and fake data, decide what bias is most likely on the die, and compare to fair, you will pick the overly uniform fake data over the randomly generated data most of the time.

Any question of the form <anything> versus 11111111111111111111 is very unlikely to ever come up except against a human opponent who is likely to make that sort of bluff, so your mis-analysis won't cost you much in this case. However, it will cost you big-time by picking the overly-uniform data too much.

Again, I agree that if a process has specific characteristics then regardless of what we 'think' it doesn't change the process. I didn't argue that and in fact I agreed with you if you go back a few pages in the thread. The process is what the process is.

The big thing I have learned from this is that in a conversation like this (and especially one this heated) we need to all be clear what we are talking about. It includes me but I think it also includes the other participants as well.

I will make the effort on my part to do this for future threads, especially ones of this type.
 
  • #138
SidBala said:
I haven't read through the thread. But in short, she is right.

Any one valid string of dice rolls is just as probable as any other.

So what are people talking about for 9+ pages?

It's become a heated argument, with a bit of misunderstanding about what other posters are specifically talking about thrown in for good measure :)
 
  • #139
Last edited:
  • #140
She seems a little too certain for someone who had to backpedal from her claim that the proof of Fermat's Last Theorem was flawed.
 
  • #141
Ah, she still doesn't get it. And I doubt she will, because she's in that situation where she has a correct conclusion with a terrible argument.

Why do I say she has the right answer? Because I have incredibly high prior odds on her choosing 11111111111111111111 as the fake sequence -- and much lower odds on her choosing 44132411666623551133 -- and so it's far more likely that 11111111111111111111 is the fake.
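Spelled out with made-up numbers (the figures are mine, purely illustrative): because a fair die gives both particular sequences exactly the same probability, the (1/6)^20 factors cancel, and the odds on which sequence is the fake reduce to the odds on which one she would choose as the fake.

```python
from fractions import Fraction

# Made-up prior beliefs about which string she would pick as the fake.
p_picks_ones  = Fraction(999, 1000)    # all 1's is exactly the showy bluff people pick
p_picks_mixed = Fraction(1, 1000)      # a specific mixed string is a very unusual choice

p_roll_any_sequence = Fraction(1, 6) ** 20   # the same for both sequences under a fair die

# Posterior odds that 11111111111111111111 is the fake (and the mixed one is real):
odds = (p_picks_ones * p_roll_any_sequence) / (p_picks_mixed * p_roll_any_sequence)
print(odds)    # 999: the die-roll factors cancel, leaving only the choice model
```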
 
  • #142
I think one of the posters there has a good point: Marilyn does not make any testable claims, nor calculations, which makes it (unnecessarily) hard to test her arguments.
 
  • #143
Bacle2 said:
I think one of the posters there has a good point: Marilyn does not make any testable claims, nor calculations, which makes it (unnecessarily) hard to test her arguments.

Yes. I suppose we can all agree on that. If she would describe an experiment unambiguously, then it would be easily resolved what the correct answer was.
 
  • #144
"Yes. I suppose we can all agree on that. If she would describe an experiment unambiguously, then it would be easily resolved what the correct answer was."

I think that depends on whether you are a Bayesian or a frequentist. Maybe someone knows more about this.
 
  • #145
Hurkyl said:
Ah, she still doesn't get it. And I doubt she will, because she's in that situation where she has a correct conclusion with a terrible argument.

Why do I say she has the right answer? Because I have incredibly high prior odds on her choosing 11111111111111111111 as the fake sequence -- and much lower odds on her choosing 44132411666623551133 -- and so it's far more likely that 11111111111111111111 is the fake.

Absolutely agree with your statement: the conclusion makes sense under one interpretation (which I debated for a while and eventually clarified), but her argument about the past and the future just doesn't make sense to me.

Remember, folks: this is what you get when a debate drags on and consumes people while the issue at hand is vaguely described or not really described at all!
 
