chiro said:
Provided that you have the right measures and that the values for each 'distribution' are finite, then wouldn't you still get a final 'distribution' that satisfies the axioms and produces finite results?
I think the answer to that is yes, in practical terms. From a rigorous point of view, we would have to define what a "distribution" is in measure-theoretic terms to sort it out.
The way ordinary probability texts sidestep measure theory is to use specific methods of integration: Riemann (or similar) integrals for continuous random variates and summation for discrete distributions. From the point of view of measure theory, both of these methods are the beginnings of measures.
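To make "specific methods of integration" concrete, this is the split the elementary texts make (standard notation, nothing special to this thread): for a continuous variate with density f they compute

E[X] = \int_{-\infty}^{\infty} x f(x) \, dx,

while for a discrete variate with probability mass function p they compute

E[X] = \sum_k x_k \, p(x_k).

Both formulas are doing the same job, integrating against a measure, but each is carried out with the only tool the course has available at that point.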
It is easy to invent examples of random variates that are neither purely continuous nor purely discrete. For example, define the random variable X (in practical terms) as follows. Flip a fair coin. If the coin lands heads, then X = 1. If the coin lands tails, then let X be a draw from a random variable U that is uniform on the interval [0,2]. Practical people know how to handle the distribution of X through a mixture of Riemann integration and summation, but you can't write a simple exposition of a theory of distribution functions and densities that handles this type of situation unless you get into forms of integration and differentiation that are more general than Riemann integration and summation.
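To spell out that "mixture of Riemann integration and summation" for this particular X (my notation; the jump of size 1/2 is the coin landing heads), the distribution function is

F_X(x) = P(X \le x) =
  0            for x < 0,
  x/4          for 0 \le x < 1,
  1/2 + x/4    for 1 \le x < 2,
  1            for x \ge 2,

and the expectation comes out as E[X] = (1/2)(1) + (1/2)\int_0^2 u \cdot \frac{1}{2} \, du = 1/2 + 1/2 = 1. The summation handles the atom at 1 and the Riemann integral handles the uniform part, but there is no single density you can write down for X.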
If we look at a simple definite integral from calculus, \int_a^b f(x) dx, we can pretend f(x) is "given" and regard the definite integral as a function whose domain is the collection of sets of the form [a,b] and whose range is the real numbers. The reason it isn't a measure on the real numbers is that it doesn't produce an answer on all the sets in a sigma algebra on the real numbers. You have to struggle to extend the definition of the integral in order to get results on all the weird sets that can crop up in a sigma algebra.
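A standard illustration of that struggle: take f(x) = 1, so the set function is just "length", and ask for the length of the set of rational numbers in [0,1]. That set sits in the Borel sigma algebra, but the corresponding Riemann integral \int_0^1 \mathbf{1}_{\mathbb{Q}}(x)\,dx does not exist (every upper sum is 1 and every lower sum is 0), while Lebesgue measure assigns the set measure 0 without any trouble.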
My education went from Riemann integration to measure theory with only a brief stop at the Riemann-Stieltjes integral, but I think that type of integration is one way of handling the mixture of continuous and discrete random variates.
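As a sketch of how that works on the coin-flip example (using the F_X written out above): for a reasonable function g, the Riemann-Stieltjes integral against the distribution function is

E[g(X)] = \int_{-\infty}^{\infty} g(x)\, dF_X(x) = \frac{1}{2} g(1) + \frac{1}{2} \int_0^2 g(u) \cdot \frac{1}{2}\, du,

so the jump of F_X at 1 produces the summation term and the smooth part produces an ordinary Riemann integral, all inside one notation. Taking g(x) = x recovers E[X] = 1.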
The outlook of measure theory is "Let's assume I've solved all the integration theory. We aren't going to worry about how I did it, or whether there is any underlying function f(x) that I'm integrating over this collection of sets, or whether I'm using a mixture of Riemann integration and summation. We'll assume I have a measure, so if you give me a set in the sigma algebra then I can assign it a number and the way this function behaves on the sets resembles the way that simple theories of integration behave on the sets they can deal with."
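If you want to see that outlook in miniature, here is a toy sketch in Python (the names and the interval-based interface are mine, purely for illustration): the caller hands in a set, described as a disjoint union of half-open intervals (a, b], and gets a number back. Internally this particular measure is the coin-flip mixture from earlier, but nothing in the interface tells the caller whether the number came from a density, a sum, or something else.

from dataclasses import dataclass

@dataclass
class Interval:
    a: float  # left endpoint, excluded
    b: float  # right endpoint, included

class MixedMeasure:
    """Toy probability measure: 1/2 point mass at 1 plus 1/2 uniform on [0, 2]."""

    def measure(self, intervals):
        # 'intervals' is assumed to be a pairwise disjoint collection of (a, b] sets.
        total = 0.0
        for iv in intervals:
            # discrete part: does (a, b] contain the atom at 1?
            if iv.a < 1.0 <= iv.b:
                total += 0.5
            # continuous part: length of (a, b] intersected with [0, 2],
            # weighted by the uniform density 1/2 and the mixing weight 1/2
            lo, hi = max(iv.a, 0.0), min(iv.b, 2.0)
            if hi > lo:
                total += 0.5 * 0.5 * (hi - lo)
        return total

P = MixedMeasure()
print(P.measure([Interval(-10.0, 10.0)]))       # the whole line: 1.0
print(P.measure([Interval(0.5, 1.5)]))          # atom plus a length-1 slice: 0.75
print(P.measure([Interval(1.0 - 1e-12, 1.0)]))  # essentially just the atom: ~0.5

Of course a real measure is defined on a full sigma algebra, not just on finite unions of intervals; the point of the sketch is only the set-in, number-out shape of the thing.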
If you want to go from measure theory to probability measures to something resembling probability densities or cumulative distributions, you need more theoretical machinery. My point is that densities and distributions are not "built in" to the basics of measure theory. A measure is like a "black box" process. You can speculate that it comes from integrating a specific function by using a specific method of integration, but nothing in the definition of a measure guarantees that this is how it operates.
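For completeness, the piece of machinery I'm alluding to is the Radon-Nikodym theorem (stating it from memory, so check the details): a probability measure P has a density f with P(A) = \int_A f \, d\lambda exactly when P is absolutely continuous with respect to Lebesgue measure \lambda. The coin-flip example fails that condition because of the atom at 1 (the set {1} has Lebesgue measure zero but probability 1/2), so it has a perfectly good distribution function but no density in the elementary sense.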