A random variable is defined as a function from one set, called a sample space, to another, called an observation space, where the sample space is the underlying set of a probability space and the observation space is at least a measurable space. But when people talk about a random variable - as in the definition of a particular named distribution, such as the binomial distribution - they often make no explicit mention of the domain or codomain of the random variable, nor of any rule specifying how the function maps elements of the one to elements of the other. In such cases, would a good rule of thumb be to assume they mean the identity function on a suitable sample space (so that the same set serves as both sample space and observation space)?

When people write, for example, E[g(X)] for an expectation, are they saying that E is a function of the composite random variable g ∘ X, where X is to be understood as the identity function on the sample space unless otherwise stated? Or is it rather that the exact details of (at least) the domain of X are usually irrelevant to applications?

The notation E[X] makes it look as if expectation is a function of a random variable, but people also talk about the "expectation of a distribution" (such as the binomial distribution). Is E in fact a function of both the random variable and the distribution, E[X, Q]? If X is held fixed, can a given expectation be turned into any other expectation by a suitable choice of distribution Q; and if Q is held fixed, can the expectation be turned into any other by a suitable choice of random variable X? Or are these best thought of as separate but equivalent ways of formalising the concept of expectation, so that in one formalism E is a function of random variables (in which all the necessary information is encoded), while in the other E is a function of distributions (probability measures)?
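To make the question concrete, here is a small sketch in Python (the two-coin sample space and all the names in it are my own, purely for illustration). It computes E[g(X)] once on the sample space and once from the pushforward distribution Q of X alone, and also shows the "identity function" reading, where the observation space itself is taken as the sample space:

```python
from fractions import Fraction

# Sample space for two fair coin tosses, with uniform probability measure P.
omega = ["HH", "HT", "TH", "TT"]
P = {w: Fraction(1, 4) for w in omega}

# Random variable X: number of heads (a function on the sample space).
def X(w):
    return w.count("H")

def g(x):
    return x * x

# E[g(X)] computed on the sample space, summing over its elements:
E_sample = sum(P[w] * g(X(w)) for w in omega)

# Pushforward distribution Q of X on the observation space {0, 1, 2}:
Q = {}
for w in omega:
    Q[X(w)] = Q.get(X(w), Fraction(0)) + P[w]

# The same expectation computed from the distribution alone - no mention
# of the original sample space. Equivalently, this is E[g(X')] where the
# observation space {0, 1, 2} is itself the sample space, Q is its
# probability measure, and X' is the identity function on it:
E_dist = sum(Q[x] * g(x) for x in Q)

assert E_sample == E_dist  # both equal 3/2
print(E_sample)
```

If this is right, it would seem to explain why the domain of X can go unmentioned: the distribution Q carries everything needed to evaluate the expectation.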