What ever gave you that idea?
In the astrophysics forums, I talk a lot about galaxy counts as evidence for baryon concentration. Also, if we were talking about double-blind randomized drug trials, I'd take that over anecdotal evidence for the effectiveness of a drug.
But we aren't talking about galaxy count data and double-blind randomized drug trials.
*IN THIS PARTICULAR SITUATION*, I've found that anecdotal data is far, far more useful than survey data. The problem with using survey data is that *FOR THE TOPIC UNDER DISCUSSION*, I've found that the surveys tend to be extremely poorly set up, so I start out with huge amounts of suspicion, and the surveys rarely provide enough data to let you critique the methodology. Also, even when done right, surveys usually don't provide useful information, and what you do find out from them is often irrelevant.
One thing about anecdotal data: you can usually get enough deep information to cross-check it for reliability. In situations where "professionals" do surveys, you can also get that sort of information, but a lot of surveys are done by amateurs, who invariably don't provide enough information about how the survey was done, and even a good survey may not contain the information that you want.
For example, if I want to figure out how my wife or my boss will react when I do something, I base that on anecdotal evidence. Now I could do a survey of 1000 wives or 1000 bosses, and *even if it were a valid survey*, that information would be totally useless to me.
No, that's not how it works. There are a ton of techniques to make sure that when you do qualitative research, what you are doing is both reliable (i.e. reproducible) and valid (i.e. measuring what you think you are measuring). For example, if I gave a transcript of this thread to a social scientist, they would start "coding" it: they would break it up into themes and count the number of times I mention something. Then they would give the transcript to some other person, who would code it independently. At that point you compare the results. If two people read what I say and come to different conclusions about what I meant, the coding isn't reliable, whereas if two people read what I say and code it the same way, then there is something there.
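The reliability check described above can be made quantitative. A standard statistic for it is Cohen's kappa, which measures how often two coders agree beyond what chance alone would produce. Here's a minimal sketch; the theme labels and the two coders' assignments are invented for illustration:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: inter-rater agreement corrected for chance.

    coder_a, coder_b: lists of theme labels, one per transcript segment,
    assigned independently by two coders.
    """
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of segments coded identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: probability of agreeing by chance,
    # given each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two coders assign themes to the same ten transcript segments (invented data).
a = ["credential", "luck", "credential", "mentor", "luck",
     "credential", "mentor", "mentor", "luck", "credential"]
b = ["credential", "luck", "credential", "mentor", "credential",
     "credential", "mentor", "luck", "luck", "credential"]
print(round(cohens_kappa(a, b), 2))  # → 0.69
```

A kappa near 1 means the two coders are reading the transcript the same way; a kappa near 0 means their agreement is no better than chance, which is the "not reliable" case described above.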
Also, if a social scientist talks to a young earth creationist, he is interested in their **beliefs** about geology, not geology itself. I've found those techniques to be useful, because if I just tell them that the earth is 6000 years old and they are an idiot for believing that, then I'm not going to convince them, but if I can get inside their mind, and understand *why* they think the world is 6000 years old, then maybe I can convince them otherwise.
[QUOTE]I can easily describe scenarios where your anecdotal situation is really not reflective of the general trend. Your sampling of one single data point could easily be the exception rather than the rule.[/QUOTE]
Sure, but sometimes that's what you want. If you have a bad inner-city school where 99.9% of the people end up in menial jobs and one person makes it to Harvard, that may be the one person you want to research *BECAUSE* he is the exception.
Similarly, if I wanted to do a study on the careers of astrophysics Ph.D.'s, I'd probably want to interview Brian May. It's not that it's typical for astrophysics Ph.D.'s to end up as rock guitarists, but *because* Brian May is unusual, he is worth studying.
One thing about people is that everyone is exceptional. People are not electrons. Every electron is exactly the same as every other electron, but every person is different from every other person. This is a big headache for educational studies. What you want to do is to have two classrooms that are exactly the same except for one variable. You *can* do this to some extent with drug trials, but this turns out to be *impossible* for classrooms or workplaces. And even if you could do this, there are hundreds maybe thousands of variables that interact in complex ways.
No it doesn't. If I ask all of the alumni of Phil Anderson, I learn something about the alumni of Phil Anderson, and that could be useful because I'm curious *why* alumni of Phil Anderson end up with exceptional careers. Is it because he selects people ahead of time? Is it because he is particularly good with political connections? Is it because he does something "magical", and if so is it something that can be reproduced? Is it just dumb luck?
I suspect that by asking these sorts of questions, I'm going to learn 100x more about the Ph.D. system than if I send out a survey to 100 Ph.D.'s.
[QUOTE]So if you think you can find fault in statistics, I can easily counter that by finding faults in your methodology as well![/QUOTE]
I think that statistics *in this particular situation* tend to be unreliable or useless. One reason I'm a stickler about this is that I'm married to an educational researcher so I know what a proper survey looks like.
Also, a lot of this involves making do with what you have. I'd *love* to get statistical distributions of neutrino emissions from supernovae, or seismology results from stars, but I can't. If I had a wayback machine, I'd love to rerun the big bang 1000 times and see what comes out, or go back in time, change my life, and see what happens.
But that's not available so I have to make do with what I have.