MHB Finding Angle & Distance Between Polynomials: Exploring Inner Product Spaces

matqkks
In inner product spaces of polynomials, what is the point of finding the angle and distance between two polynomials? How do the distance and angle relate back to the polynomials themselves?
 
one thing it allows us to do is say, given two polynomials p(x) and q(x), which one is "closer" to a given polynomial f(x).

for example, suppose our polynomials are defined on A = [0,1], and we want to know which one of:

##p(x) = x^2 - x##

##q(x) = x^3##

is closer to ##f(x) = x##.

so we calculate:

$$|p-f| = \sqrt{\int_0^1 (p-f)^2(x) dx} = \sqrt{\int_0^1 x^4 - 4x^3 + 4x^2 dx} = \sqrt{\frac{1}{5} - 1 + \frac{4}{3}} = \sqrt{\frac{8}{15}}$$

$$|q-f| = \sqrt{\int_0^1 (q-f)^2(x) dx} = \sqrt{\int_0^1 x^6 - 2x^4 + x^2 dx} = \sqrt{\frac{1}{7} - \frac{2}{5} + \frac{1}{3}} = \sqrt{\frac{8}{105}}$$

evidently, q is closer to f than p is, since 8/105 < 8/15.
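the two distances above (and, while we're at it, the angle the original question asks about) can be checked with exact arithmetic. here is a minimal sketch in plain python: polynomials are encoded as coefficient lists, and the inner product ##\langle u, v\rangle = \int_0^1 u(x)v(x)\, dx## is computed term by term using ##\int_0^1 x^k dx = 1/(k+1)##. the helper names (`poly_mul`, `inner`, `angle`, etc.) are my own, not from the thread.

```python
# sketch: the L^2 inner product on [0,1] for polynomials, done exactly.
# a polynomial is a coefficient list [c0, c1, c2, ...] meaning c0 + c1 x + c2 x^2 + ...
from fractions import Fraction
from math import sqrt, acos

def poly_mul(u, v):
    """multiply two polynomials given as coefficient lists (convolution)."""
    out = [Fraction(0)] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            out[i + j] += Fraction(a) * Fraction(b)
    return out

def poly_sub(u, v):
    """subtract coefficient lists, padding the shorter one with zeros."""
    n = max(len(u), len(v))
    return [Fraction(u[k] if k < len(u) else 0) - Fraction(v[k] if k < len(v) else 0)
            for k in range(n)]

def inner(u, v):
    """<u, v> = ∫₀¹ u(x) v(x) dx, using ∫₀¹ x^k dx = 1/(k+1) term by term."""
    return sum(c / (k + 1) for k, c in enumerate(poly_mul(u, v)))

def dist(u, v):
    """distance induced by the inner product: |u - v| = sqrt(<u-v, u-v>)."""
    d = poly_sub(u, v)
    return sqrt(inner(d, d))

def angle(u, v):
    """angle via cos θ = <u, v> / (|u| |v|), in radians."""
    return acos(inner(u, v) / sqrt(inner(u, u) * inner(v, v)))

p = [0, -1, 1]    # x^2 - x
q = [0, 0, 0, 1]  # x^3
f = [0, 1]        # x

# squared distances come out exactly as in the post: 8/15 and 8/105
print(inner(poly_sub(p, f), poly_sub(p, f)))  # 8/15
print(inner(poly_sub(q, f), poly_sub(q, f)))  # 8/105
print(angle(q, f))  # acos(sqrt(21)/5), a bit under half a radian
```

the exact `Fraction` arithmetic is the point of the sketch: the integrals of polynomials are rational, so the squared distances match the hand computation digit for digit before any floating-point square root is taken.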

in general, orthonormal bases are easier to work with (one example is the orthogonal system {1, cos(nx), sin(nx) : n in N} used in Fourier analysis for signal processing, among other things): we can focus on the coefficients rather than the basis itself, since the projections of a vector onto the basis components are easy to calculate. moreover, an orthonormal set of vectors is automatically linearly independent, so it forms a basis for its span, and orthogonality (normalizing is just a matter of scale) may be easier to prove directly than linear independence.
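to illustrate the "read coefficients off by projection" point with the Fourier system just mentioned: for f(x) = x on [-π, π], the sine coefficients are ##b_n = \langle f, \sin(nx)\rangle / \langle \sin(nx), \sin(nx)\rangle = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\, dx##, which works out to ##2(-1)^{n+1}/n##. a rough numeric sketch (plain python, composite simpson's rule; the helper names are my own):

```python
# sketch: projecting f(x) = x onto the orthogonal system sin(nx) on [-π, π]
from math import pi, sin

def simpson(g, a, b, m=2000):
    """composite simpson's rule on [a, b] with 2m subintervals."""
    h = (b - a) / (2 * m)
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * k - 1) * h) for k in range(1, m + 1))  # odd nodes
    s += 2 * sum(g(a + 2 * k * h) for k in range(1, m))            # even interior nodes
    return s * h / 3

def fourier_b(f, n):
    """coefficient of sin(nx): <f, sin(nx)> divided by <sin(nx), sin(nx)> = π."""
    return simpson(lambda x: f(x) * sin(n * x), -pi, pi) / pi

def f(x):
    return x

for n in (1, 2, 3):
    print(n, fourier_b(f, n))  # ≈ 2, -1, 2/3: the exact values 2(-1)^(n+1)/n
```

each coefficient is a single one-dimensional integral, independent of the others; that independence is exactly what orthogonality buys (projecting onto sin(2x) never "sees" the sin(x) component, since ##\langle \sin(x), \sin(2x)\rangle = 0##).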

for certain physical systems, orthogonality captures some kinds of symmetry in "eigenstates" (eigenvectors where the vectors themselves are functions representing the state of a system), which again, greatly simplify the complexity of the calculations involved. people DO use this, although not everyone who takes a linear algebra class will have occasion to.

geometry is a powerful way of thinking. it cuts deep. the special orthogonal group in arbitrary dimensions may seem a long way from a simple perpendicular bisector of euclid, but when we abstract, we try to "keep what we have learned". the goal is not to "make things needlessly complicated" but rather the opposite: "to make sense out of a chaotic world".
 
