Why do we need infinite dimensional vector spaces?

Ratzinger
We have x = x1(1,0,0) + x2(0,1,0) + x3(0,0,1) to represent R^3. That's a finite dimensional vector space. So what do we need infinite dimensional vector spaces for? Why do we need (1,0,0,...), (0,1,0,0,...), etc. basis vectors to represent R^1?
 
Ratzinger said:
We have x = x1(1,0,0) + x2(0,1,0) + x3(0,0,1) to represent R^3. That's a finite dimensional vector space. So what do we need infinite dimensional vector spaces for? Why do we need (1,0,0,...), (0,1,0,0,...), etc. basis vectors to represent R^1?
To represent R^1, you only need one vector. A vector space of dimension n is spanned by a basis of n vectors, just as in your example of R^3. This is because a basis must span the vector space (which means you need *at least* n vectors) and must be linearly independent (which means you can have *at most* n vectors), which makes the number of vectors in a basis exactly n.
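To see the simplest case explicitly (this is nothing beyond the argument above, just written in symbols):

\mathbb{R}^1 = \operatorname{span}\{(1)\}, \qquad x = x \cdot (1) \ \text{ for every } x \in \mathbb{R}^1.

More generally, a spanning set of an n-dimensional space has at least n vectors and a linearly independent set has at most n, so a basis has exactly n.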

Then, what do we need vector spaces of infinite dimension for? Consider the vector space \mathbb{R}\left[ X \right], the vector space of all polynomials in X over \mathbb{R}. This is an infinite dimensional vector space, since any finite set of basis vectors would have some maximum degree r, meaning that X^{r+1} and higher powers could not be formed as linear combinations.
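Spelling the argument out (just a restatement, with the usual monomial basis written explicitly):

\mathbb{R}\left[ X \right] = \operatorname{span}\{1, X, X^2, X^3, \dots\}.

If p_1, \dots, p_k are finitely many polynomials of maximum degree r, then every linear combination c_1 p_1 + \dots + c_k p_k has degree at most r, so X^{r+1} is never obtained. Hence \{1, X, X^2, \dots\} is an infinite, linearly independent spanning set and no finite basis exists.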
 
"Functional Analysis" makes intensive use of "function spaces"- infinite dimensional vector spaces of functions satisfying certain conditions. TD gave a simple example- the space of all polynomials. Perhaps the most important is L2(X), the vector space of all functions whose squares are Lebesque integrable on set X.
 
Another familiar example of an infinite dimensional vector space is the space of functions from an infinite domain to a ring. Consider the space of functions f : A \rightarrow R from some set A to a ring R. It is a vector space whose coordinates are indexed by A: each function gives a component f(a) = r, which we can also write as f_a = r. Scalar multiplication and vector addition are performed pointwise using the ring operations.
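In symbols, the pointwise operations described above are (the e_a notation for the coordinate functions is mine, added for illustration):

(f + g)(a) = f(a) + g(a), \qquad (r \cdot f)(a) = r \, f(a) \quad \text{for } r \in R,\ a \in A.

If A is infinite and R has a unit, the coordinate functions e_a, defined by e_a(a) = 1 and e_a(b) = 0 for b \neq a, are linearly independent, so no finite set spans the space.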
 
I think you should reconsider that example, NateTG. How is *a* function a vector space? Over what field? And what are its elements?
 