The core of Bayesian statistics is Bayes' rule, an elementary fact of probability theory: posterior odds equal prior odds times the likelihood ratio. Let H and K be two hypotheses, and D be some data. Then notice that
P(H | D) / P(K | D) = P(H) / P(K) * P(D | H) / P(D | K)
P(H) / P(K) is called the prior odds: your initial relative degree of belief in H versus K.
P(D | H) / P(D | K) is called the likelihood ratio or Bayes factor: the ratio between the chances of getting data D under the two hypotheses H and K.
P(H | D) / P(K | D) is called the posterior odds: your final relative degree of belief in H versus K, after seeing the data D.
If you don't like odds (ratios of probabilities), the same rule can be stated in terms of straight probabilities: the posterior is proportional to the prior times the likelihood.
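As a concrete illustration, here is a minimal Python sketch of the update in both forms; the function names are hypothetical, chosen just for this example.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio.

    prior_odds       -- P(H) / P(K), relative belief in H versus K before seeing D
    likelihood_ratio -- P(D | H) / P(D | K), the Bayes factor
    Returns P(H | D) / P(K | D).
    """
    return prior_odds * likelihood_ratio


def posterior_probs(prior_h, prior_k, lik_d_given_h, lik_d_given_k):
    """Probability form: posterior is proportional to prior times likelihood.

    Multiplies each prior by its likelihood, then normalises so the two
    posteriors sum to 1 (assuming H and K are the only hypotheses in play).
    """
    unnorm_h = prior_h * lik_d_given_h
    unnorm_k = prior_k * lik_d_given_k
    total = unnorm_h + unnorm_k
    return unnorm_h / total, unnorm_k / total
```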
So, for instance, if you start out believing it is 100 times more likely that H is true than that K is true, but you then observe data D that is 1 000 000 times more likely if K is true than if H is true, you should revise your beliefs about H and K, and the correct revision is this: you now believe it is 10 000 times more likely that K is true than that H is true.
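Plugging the numbers from this example into the hypothetical posterior_odds helper sketched above (using exact fractions to keep the arithmetic clean):

```python
from fractions import Fraction

# Prior odds H:K = 100 (H initially believed 100 times more likely than K).
# Likelihood ratio P(D | H) / P(D | K) = 1/1 000 000
# (D is a million times more likely under K than under H).
odds_h_vs_k = posterior_odds(prior_odds=Fraction(100),
                             likelihood_ratio=Fraction(1, 1_000_000))

print(odds_h_vs_k)      # 1/10000: posterior odds for H versus K
print(1 / odds_h_vs_k)  # 10000: K is now 10 000 times more likely than H
```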