Abstract algebra vector space problem

AI Thread Summary
The discussion revolves around a vector space defined by a basis and a bijection, focusing on using the inner product to find coefficients in R² for a vector in V. The user struggles with understanding the role of the inner product as an abstract concept and its application in forming a matrix equation. They initially attempt to apply the inner product to an orthogonal basis but seek guidance on how to generalize this approach for any basis. The conversation highlights the importance of correctly interpreting the inner products and assembling them into a matrix for solving the problem. Ultimately, the user realizes the need to shift their perspective from R² to the properties of the vector space itself.

Homework Statement


We have a vector space ##(V, \mathbb{R}, +, \cdot)## over the real numbers with basis ##\{v_1, v_2\}##, so that ##V = \operatorname{span}(v_1, v_2)##. We also have a bijection ##f \colon \mathbb{R}^2 \to V## such that ##f(x, y) = x v_1 + y v_2##.

Assume you have an inner product ##\langle \cdot\,, \cdot \rangle \colon V \times V \to \mathbb{R}## (you may use it abstractly and assume the axioms hold). Since ##f## is a bijection, it has an inverse map ##f^{-1} \colon V \to \mathbb{R}^2##. With the help of the inner product, form a matrix equation that finds the coefficient pair ##f^{-1}(v) \in \mathbb{R}^2## for an arbitrary ##v \in V##.

The Attempt at a Solution


I have tried to assemble the matrix, but my problem is that I don't understand how to use the inner product as an abstract object. I don't see how it helps me to know that the inner product returns a real number, since I fail to see what that real number expresses here. As far as I have understood, the inner product can be defined however the modeler/mathematician wishes, as long as the axioms hold. Could someone give me any pointers on this? Should I just read the definition of inner products more carefully?
...
edit: I figured out something which seems to be close, but it apparently works only if the basis is orthogonal.

I figured that since we have an inner product we also have a norm, so we get
$$A = \begin{bmatrix} \|v_1\| & 0 \\ 0 & \|v_2\| \end{bmatrix}, \qquad X = \begin{bmatrix} x \\ y \end{bmatrix}, \qquad B = v,$$
where the equation is ##AX = B##. I'm now struggling to make it work for every basis. Could someone at least tell me whether I have got it right so far?
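For reference, if the basis really is orthogonal, so that ##\langle v_1, v_2 \rangle = 0##, then taking the inner product of ##v = x v_1 + y v_2## with each basis vector isolates one coefficient at a time:
$$x = \frac{\langle v, v_1 \rangle}{\langle v_1, v_1 \rangle}, \qquad y = \frac{\langle v, v_2 \rangle}{\langle v_2, v_2 \rangle},$$
which is why a diagonal matrix can only work in that special case.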
 
Last edited:
$$\langle v, v_1 \rangle = \langle x v_1 + y v_2, v_1 \rangle = x \langle v_1, v_1 \rangle + y \langle v_2, v_1 \rangle$$
and
$$\langle v, v_2 \rangle = x \langle v_1, v_2 \rangle + y \langle v_2, v_2 \rangle.$$

If you know all of the inner products, how would you solve for ##x## and ##y##?

You can't really do what you're attempting, because ##v## isn't a vector in ##\mathbb{R}^2##; if you want to convert it into one, the only tool you have is ##f^{-1}(v)##, which is what the problem asks for in the first place.
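Collecting those two equations into matrix form gives exactly what the problem statement asks for:
$$\begin{bmatrix} \langle v_1, v_1 \rangle & \langle v_2, v_1 \rangle \\ \langle v_1, v_2 \rangle & \langle v_2, v_2 \rangle \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \langle v, v_1 \rangle \\ \langle v, v_2 \rangle \end{bmatrix}.$$
The coefficient matrix is the Gram matrix of ##v_1, v_2##; it is invertible because the basis vectors are linearly independent.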
 
Office_Shredder said:
$$\langle v, v_1 \rangle = \langle x v_1 + y v_2, v_1 \rangle = x \langle v_1, v_1 \rangle + y \langle v_2, v_1 \rangle$$
and
$$\langle v, v_2 \rangle = x \langle v_1, v_2 \rangle + y \langle v_2, v_2 \rangle.$$

If you know all of the inner products, how would you solve for ##x## and ##y##?

You can't really do what you're attempting, because ##v## isn't a vector in ##\mathbb{R}^2##; if you want to convert it into one, the only tool you have is ##f^{-1}(v)##, which is what the problem asks for in the first place.

I see, I went wrong by trying to think in ##\mathbb{R}^2## too much.
As for the solution, apparently I just have to assemble those inner products into a matrix now. Could you tell me how you came up with those inner products? Was it just because nothing else was known, or was it an educated guess based on experience? I still can't figure out whether those inner products have some kind of meaning, or whether they are there just because they happen to solve this particular problem.
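Here is a minimal numerical sketch of that Gram-matrix recipe. The concrete basis, the choice of the standard dot product as the inner product, and the test coefficients are illustrative assumptions, not values from the thread:

```python
import numpy as np

# Illustrative setup (my own choices): V = R^2 with the standard
# dot product standing in for the abstract inner product <.,.>,
# and a deliberately non-orthogonal basis v1, v2.
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, 3.0])
inner = np.dot

# Build v = x*v1 + y*v2 from known coefficients, then recover
# them using inner products only.
x_true, y_true = 2.0, -1.0
v = x_true * v1 + y_true * v2

# Gram matrix G and right-hand side b, exactly as in the matrix
# equation above: G @ [x, y] = b.
G = np.array([[inner(v1, v1), inner(v2, v1)],
              [inner(v1, v2), inner(v2, v2)]])
b = np.array([inner(v, v1), inner(v, v2)])

x, y = np.linalg.solve(G, b)
print(x, y)  # -> 2.0 -1.0, i.e. f^{-1}(v) = (x, y)
```

Swapping `inner` for any other valid inner product (say, a weighted dot product) recovers the same coefficients, which is the sense in which the construction uses only the inner-product axioms and not any particular formula.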
 