Does Function Composition Automatically Restrict Domains to Match Ranges?

Rasalhague
Is composition of functions defined in such a way as to automatically restrict the domain of the outer function, if need be, to the range (image) of the inner one, so that it's always possible to write f^{-1} \circ f\left ( x \right ) = x, providing f is an injective function? Or is composition only defined for functions that are already compatible in the sense that the domain of the outer function must be the range of the inner one, so that f must be bijective to have an inverse?

The section Inverses in higher mathematics here definitely says the latter, but is the former idea often used in practice? For example, does this statement of the chain rule for one variable need extra caveats for

\left ( f \circ g \right )' = \left ( f' \circ g \right ) \cdot g'

to be true in general? Specifically, would it only be true if f was restricted to the domain of g (rather than the condition being merely that f is defined on an interval of which the range of g is a subset), and differentiable on the range of g?
 
Rasalhague said:
Is composition of functions defined in such a way as to automatically restrict the domain of the outer function, if need be, to the range (image) of the inner one
Usually/in practice: the composition of two functions is defined if the image ('range') of the inner function is contained in the domain of the outer one.
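A toy illustration of that practical rule (my own sketch, not from the thread, with finite functions represented as Python dicts whose keys form the domain and whose values form the image):

```python
# Toy sketch (assumed representation): a finite function is a dict {input: output};
# its domain is the key set and its image is the set of values.
def compose(outer, inner):
    """Return outer o inner, defined when image(inner) is contained in domain(outer)."""
    image_inner = set(inner.values())
    if not image_inner.issubset(outer.keys()):
        raise ValueError("image of inner function is not contained in domain of outer")
    return {x: outer[inner[x]] for x in inner}

g = {1: 'a', 2: 'b'}             # g with image {'a', 'b'}
f = {'a': 10, 'b': 20, 'c': 30}  # f defined on a strict superset of the image of g
print(compose(f, g))             # {1: 10, 2: 20}
```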

But in some (formal) contexts, it is what you say: e.g. in category theory the composition of two arrows is defined iff the codomain of one is equal to the domain of the other (in an abstract category, the domain and codomain of an arrow are objects that need not be sets, so this makes sense).

As is often the case, people are a bit sloppy, or - depending on your exact definition of 'function' - it makes no difference: any function f:X\to Y can be regarded as a function f:X\to Y' where Y' is any set that contains Y, and we usually don't distinguish them explicitly. [A lot of your questions seem to be about mathematicians' sloppiness :)]
Note that this occurs already at the level of relations: a relation R from X to Y is a subset of the product X x Y. But such an R is at the same time a subset of X' x Y' for any X' and Y' that contain X and Y respectively, so it is at the same time a relation from X' to Y'. Unless you define it as an ordered triple (X,Y,R), and then it becomes sloppiness to ignore the difference.

The composition of two functions is a special case of the composition of two relations (recall that a function is by (the usual) definition a special kind of relation): Let X,Y,Z be sets, R a relation from X to Y, and S a relation from Y to Z. In other words, R is a subset of X x Y, and S is a subset of Y x Z. Then their composition is

S\circ R:=\{(x,z)\ |\ \exists y\in Y: (x,y)\in R\text{ and }(y,z)\in S \}\subseteq X\times Z,

which is a relation from X to Z. This makes sense for any two such relations. This composition might be empty. Again, you could be more flexible about the codomain of R and the domain of S; why wouldn't we allow R and S to be any two relations whatsoever, and define

(S\circ R)_{new}:=\{(x,z)\ |\ \exists y\in \text{codomain}(R)\cap\text{domain}(S): (x,y)\in R\text{ and }(y,z)\in S \}\subseteq X\times Z?

Of course, in the context of functions, we have to require more if we want the composition relation to be a function. Indeed, suppose R:X\to Y and S:Y'\to Z are any two functions. For the relation (S\circ R)_{new} to be a function from X to Z, we need for every x in X the existence of a unique z in Z such that (x,z)\in(S\circ R)_{new}. Fix x in X. As R is a function, there is a unique y in Y such that (x,y)\in R. So this is the only y we can consider in the definition of (S\circ R)_{new}. And now we need a unique z in Z such that (y,z)\in S. As S is a function, this happens iff y is in the domain Y' of S.

So with this definition, the relation composition of two functions R:X\to Y and S:Y'\to Z is a function (from X to Z) if and only if Y' contains the image R(X) of R. Yay, this is in agreement with what I said is done in practice.
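Here is a small sketch of the relation picture (again my own illustration, with relations represented as Python sets of ordered pairs): it composes relations exactly as in the definition above and then checks that the composite of two functions-as-relations is again a function when the image of the first lies inside the domain of the second.

```python
# Sketch (illustrative): relations as sets of ordered pairs, composed as in the
# definition S o R = {(x, z) : there is y with (x, y) in R and (y, z) in S}.
def compose_relations(S, R):
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def is_function_on(rel, domain):
    """rel is a function on `domain` iff every x in `domain` has exactly one image."""
    return all(sum(1 for (a, _) in rel if a == x) == 1 for x in domain)

X = {1, 2}
R = {(1, 'a'), (2, 'b')}               # a function X -> {'a', 'b'}
S = {('a', 10), ('b', 20), ('c', 30)}  # a function whose domain contains the image of R
SR = compose_relations(S, R)
print(SR)                        # {(1, 10), (2, 20)}
print(is_function_on(SR, X))     # True, since the image of R lies inside the domain of S
```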

, so that it's always possible to write f^{-1} \circ f\left ( x \right ) = x, providing f is an injective function?
I recommend against the use of the notation f^{-1} for the left inverse if it is not also the right inverse, i.e. if the function is injective but not surjective.
Or is composition only defined for functions that are already compatible in the sense that the domain of the outer function must be the range of the inner one, so that f must be bijective to have an inverse?
I think this depends on your definition of 'inverse' rather than of 'composition'. Usually, f:X\to Y is called invertible if there is a function g:Y\to X such that f\circ g=id_Y and g\circ f=id_X. (This is also the general definition in category theory.) You could of course just replace Y with f(X) in this definition.
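One last toy sketch of that definition (finite functions as dicts again; the names and data are made up): it just checks the two identities f\circ g=id_Y and g\circ f=id_X.

```python
# Sketch: f: X -> Y and g: Y -> X as dicts; g is a two-sided inverse of f
# exactly when f(g(y)) = y for every y in Y and g(f(x)) = x for every x in X.
def is_two_sided_inverse(f, g):
    return all(f[g[y]] == y for y in g) and all(g[f[x]] == x for x in f)

f = {1: 'a', 2: 'b'}
g = {'a': 1, 'b': 2}
print(is_two_sided_inverse(f, g))  # True: f is a bijection with inverse g
```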
 