b4826161 said:
Is this sufficient to say that if I picked n red balls and no blue balls, then the probability that the bag contains 50 red balls and 50 blue balls is very small?
No, it isn't sufficient. As henry_m pointed out, you need to know (or assume) something about the probability that the bag has 50 reds and 50 blues before any balls are drawn in order to conclude anything about what that probability is after you see the results of the drawing.
And furthermore, the probability of 50 red and 50 blue may not be small. Suppose the "bag" is some kind of cell and normal cells have 50 red things and 50 blue things in them. Suppose that among all the cells that have ever been observed in labs, none has been found with 100 red things and no blue things. If a test detects 50 red things in a cell, are the other 50 likely to be blue or red? To answer that, you have to postulate some probability for nature producing a cell with all 100 things red, but the example conveys the thought that a rare outcome from the test might still be more likely than a mutated cell.
Unless you are willing to postulate a prior probability, you cannot say anything about the probability that "the bag contains 50 reds and 50 blues given that 50 reds have been drawn". All you can quantify is the probability that "50 reds have been drawn given that the bag contained 50 reds and 50 blues" and the probability that "50 reds have been drawn given that the bag contained 100 reds".
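For the ball example, both of those likelihoods can be computed directly. Here is a minimal sketch in Python, assuming the 50 balls are drawn without replacement from a 100-ball bag (the helper name is just for illustration):

```python
from math import comb

def p_all_red_drawn(n_drawn, n_red, n_blue):
    """P(all n_drawn balls are red | bag holds n_red red and n_blue blue balls),
    drawing without replacement (hypergeometric counting)."""
    if n_drawn > n_red:
        return 0.0
    return comb(n_red, n_drawn) / comb(n_red + n_blue, n_drawn)

print(p_all_red_drawn(50, 50, 50))   # P(50 reds drawn | 50 red, 50 blue) ~ 9.9e-30
print(p_all_red_drawn(50, 100, 0))   # P(50 reds drawn | 100 red, 0 blue) = 1.0
```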
This is quite a common situation. When people have an idea and collect data to investigate it, they want to know "What is the probability that the idea is true given the data I collected?" Unless they are willing to use Bayesian statistics (i.e. assume a prior probability), all they can calculate is "What is the probability of the data I collected given that the idea is true?".
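To make the contrast concrete, here is a sketch of the Bayesian version, assuming we are willing to postulate a prior probability that the bag is all red. The draw size of 10 and the prior values below are made up purely for illustration; the point is how strongly the answer depends on the assumed prior.

```python
from math import comb

n = 10                                        # number of draws; 50 works the same way
like_5050    = comb(50, n) / comb(100, n)     # P(n reds drawn | 50 red, 50 blue)
like_all_red = 1.0                            # P(n reds drawn | 100 red, 0 blue)

for prior_all_red in (1e-6, 1e-2, 0.5):       # assumed priors, not data
    prior_5050 = 1.0 - prior_all_red
    evidence = prior_5050 * like_5050 + prior_all_red * like_all_red
    post_5050 = prior_5050 * like_5050 / evidence   # Bayes' rule
    print(f"prior(all red)={prior_all_red:g}  ->  P(50/50 | data)={post_5050:.4f}")
```

With the 1-in-a-million prior, the 50/50 bag is still overwhelmingly probable even though the data were roughly a 1-in-1700 outcome; with an even prior, the bag is almost certainly all red. Same data, different priors, different conclusions.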
When you read statements of statistical evidence, you often see claims that a certain thing is true "with 95% significance" or "at the 95% level". You may see a statement that a quantity is between -10.6 and 20.3 "with 95% confidence". These may sound like they are telling you "the probability that the idea is true given the observed data". But what this type of statistics is actually doing is telling you something about the probability of the data given the assumption that certain ideas are true.
In many practical problems, you can come up with a reasonable estimate of prior probabilities. You can also take the "Maximum Entropy" approach to prior probabilities advocated by E.T. Jaynes.
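As a small illustration of the Maximum Entropy idea in the simplest version of this problem: if the only hypotheses are "50 red / 50 blue" and "all red" and you impose no other constraints, the prior that maximizes the entropy -sum(p log p) is the uniform one, P(all red) = 1/2. A quick numerical check:

```python
from math import log

def entropy(p):
    # Shannon entropy of a prior (p, 1 - p) over the two hypotheses.
    return -(p * log(p) + (1 - p) * log(1 - p))

candidates = [i / 1000 for i in range(1, 1000)]
print(max(candidates, key=entropy))   # 0.5 -- the uniform prior maximizes entropy
```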