Solving Algebra Problems: Bilinear Multiplication & Supercommutativity

  • Thread starter latentcorpse
  • Tags
    Algebra
  • #1
latentcorpse
ok. today we discussed the following:

The wedge product defines a bilinear multiplication in [itex]\Lambda(V^{*})[/itex] which is associative, i.e. [itex](\alpha \wedge \beta) \wedge \gamma = \alpha \wedge (\beta \wedge \gamma) \; \forall \alpha,\beta,\gamma \in \Lambda(V^{*})[/itex], and which is supercommutative, i.e. if [itex]\alpha^{k} \in \Lambda^{k}(V^{*}), \beta^{l} \in \Lambda^{l}(V^{*})[/itex] then [itex]\alpha^{k} \wedge \beta^{l} = (-1)^{kl} \beta^{l} \wedge \alpha^{k}[/itex].

Question 1: What is a bilinear multiplication?
Question 2: Is my definition of supercommutative ok? I wasn't sure if it was [itex](-1)^{kl}[/itex] or [itex](-1)^{k+l}[/itex]? What does supercommutative mean?

Then as an example of this, we considered the following:

[itex]\alpha=e_1^* \wedge e_2^* - 3 e_2^* \wedge e_4^* = e_{12}^* -3 e_{24}^* \in \Lambda^{2}(V^{*}), \beta=3 e_1^* + 4e_2^*[/itex]

Then,

[itex]\alpha \wedge \beta=6 e_{12}^* \wedge e_1^* +8 e_{12}^* \wedge e_2^* - 9 e_{24}^* \wedge e_1^* - 12 e_{24}^* \wedge e_2^*[/itex]

but we have [itex]e_{ij}^* \wedge e_j^* = \pm\, e_i^* \wedge (e_j^* \wedge e_j^*)=0[/itex] (any wedge containing a repeated one-form vanishes), so three terms drop out and we're left with

[itex]\alpha \wedge \beta =- 9 e_{24}^* \wedge e_1^* = -9 e_2^* \wedge e_4^* \wedge e_1^*=\mathbf{- (-1)^{2} e_1^* \wedge e_2^* \wedge e_4^*=-e_{124}^*}[/itex]

Question 3: Is [itex]\beta \in \Lambda^{1}(V^{*})[/itex]?
Question 4: I don't understand how he got the part in bold. I have a feeling he just forgot to write in the 9 but the rearranging of that step uses the supercommutative algebra described above and I don't follow the logic there?

Any help would be greatly appreciated

cheers
 
  • #2
I don't really know much about this topic, but since nobody has replied yet I will try to help you with my general knowledge :smile:

latentcorpse said:
Question 1: What is a bilinear multiplication?
If you consider for example a vector space V, then a bilinear function on V is a function f on V x V which is linear in both its arguments. So if v, w, u are elements of V and a is a number, then
f(a v + w, u) = a f(v, u) + f(w, u)
f(v, a w + u) = a f(v, w) + f(v, u)

You can see the wedge product as a function of two arguments:
[tex]f(v, w) = v \wedge w[/tex]
then bilinearity simply means that for example
[tex]f(a v + w, u) = (a v + w) \wedge u = a (v \wedge u) + w \wedge u [/tex]
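To make this concrete, here is a quick numerical sanity check (my own toy setup, not from the thread): represent one-forms on a 3-dimensional space by coefficient triples, and the wedge [itex]v \wedge w[/itex] by its three independent components [itex]v_i w_j - v_j w_i[/itex] for i < j. The assertions check linearity in the first slot and antisymmetry.

```python
# Toy check of bilinearity and antisymmetry of the wedge of two
# one-forms on a 3-dimensional space (my own illustration).

def wedge(v, w):
    """Components of v ^ w on the basis e12, e13, e23."""
    return [v[i] * w[j] - v[j] * w[i]
            for i, j in [(0, 1), (0, 2), (1, 2)]]

def add(v, w):
    return [a + b for a, b in zip(v, w)]

def scale(a, v):
    return [a * x for x in v]

v, w, u = [1, 2, 3], [4, 5, 6], [7, 8, 9]
a = 2.5

# f(a v + w, u) == a f(v, u) + f(w, u) -- linearity in the first slot
lhs = wedge(add(scale(a, v), w), u)
rhs = add(scale(a, wedge(v, u)), wedge(w, u))
assert lhs == rhs

# antisymmetry: v ^ w == -(w ^ v), hence v ^ v == 0
assert wedge(v, w) == scale(-1, wedge(w, v))
assert wedge(v, v) == [0, 0, 0]
```

Linearity in the second slot can be checked the same way by swapping the roles of the arguments.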

latentcorpse said:
Question 2: Is my definition of supercommutative ok? I wasn't sure if it was [itex](-1)^{kl}[/itex] or [itex](-1)^{k+l}[/itex]? What does supercommutative mean?
I think so. The "super" probably refers to the fact that it isn't exactly commutativity but a graded generalization, where an extra sign may occur depending on the degrees of the forms you commute. Since I recall something like
[tex]v \wedge w = - w \wedge v [/tex]
when v and w are one-forms, it should be (-1)^(k l). That is: if either factor has even degree, the sign [itex](-1)^{kl}[/itex] is +1 and they commute; if both degrees are odd, you pick up a minus sign.
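One way to see why the exponent is kl (a standard counting argument, not from the thread itself): write both forms as wedges of one-forms and move the factors past each other one swap at a time. Each adjacent swap of two one-forms costs a single minus sign, and each of the l factors of [itex]\beta^l[/itex] has to pass each of the k factors of [itex]\alpha^k[/itex]:

[tex](a_1 \wedge \cdots \wedge a_k) \wedge (b_1 \wedge \cdots \wedge b_l) = (-1)^k \, b_1 \wedge (a_1 \wedge \cdots \wedge a_k) \wedge (b_2 \wedge \cdots \wedge b_l) = \cdots = (-1)^{kl} \, (b_1 \wedge \cdots \wedge b_l) \wedge (a_1 \wedge \cdots \wedge a_k)[/tex]

so the total sign is [itex](-1)^{kl}[/itex], one factor of -1 for each of the kl swaps.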

latentcorpse said:
Question 3: Is [itex]\beta \in \Lambda^{1}(V^{*})[/itex]?
Yes. You can see it by the fact that it is expressed in the basis [itex]\{ e_i^* \}[/itex] of one-forms. For a two-form you would need a basis like [itex]\{ e_{ij} = e_i \wedge e_j \mid i < j \}[/itex] (I am dropping the asterisks, too much work to put them in), and for an n-form in general [itex]\{ e_{i_1 \cdots i_n} = e_{i_1} \wedge \cdots \wedge e_{i_n} \mid i_1 < \cdots < i_n \}[/itex]. (Note that this implies that if V is k-dimensional, there are no nonzero n-forms for [itex]n > k[/itex].)

latentcorpse said:
Question 4: I don't understand how he got the part in bold. I have a feeling he just forgot to write in the 9 but the rearranging of that step uses the supercommutative algebra described above and I don't follow the logic there?
Yes, it looks like he just forgot the 9. He commutes the [itex]e_1^*[/itex] to the front, to put the basis one-forms in increasing order.
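In case it helps, here is that step written out with the 9 restored and the swaps tracked one at a time:

[tex]-9\, e_{24}^* \wedge e_1^* = -9\, e_2^* \wedge e_4^* \wedge e_1^* = (-1)(-9)\, e_2^* \wedge e_1^* \wedge e_4^* = (-1)^2 (-9)\, e_1^* \wedge e_2^* \wedge e_4^* = -9\, e_{124}^*[/tex]

Each equality moves [itex]e_1^*[/itex] one slot to the left past a single one-form, at the cost of one minus sign; two swaps give [itex](-1)^2 = +1[/itex]. Equivalently, by supercommutativity with k = 2, l = 1: [itex]e_{24}^* \wedge e_1^* = (-1)^{2 \cdot 1}\, e_1^* \wedge e_{24}^*[/itex].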
 
  • #3
thanks a lot. quick questions though.

(i)if you agree with me that [itex]\alpha \in \Lambda^2(V^*),\beta \in \Lambda^1(V^*)[/itex], then why don't we write them as [itex]\alpha^2[/itex] and [itex]\beta^1[/itex] since at the start of the definition we were considering an [itex]\alpha^k \in \Lambda^k(V^*)[/itex] etc?

(ii)If we are calling them [itex]\alpha^k[/itex] just because it's in that set, how do we distinguish it from being [itex]\alpha \cdot ... \cdot \alpha[/itex] k times?

(iii) What is [itex]\Lambda^{n}(V^*)[/itex] - is it a set?

thanks for your help.
 
  • #4
latentcorpse said:
thanks a lot. quick questions though.

(i)if you agree with me that [itex]\alpha \in \Lambda^2(V^*),\beta \in \Lambda^1(V^*)[/itex], then why don't we write them as [itex]\alpha^2[/itex] and [itex]\beta^1[/itex] since at the start of the definition we were considering an [itex]\alpha^k \in \Lambda^k(V^*)[/itex] etc?

(ii)If we are calling them [itex]\alpha^k[/itex] just because it's in that set, how do we distinguish it from being [itex]\alpha \cdot ... \cdot \alpha[/itex] k times?
I have a tiny bit of experience here as well, from a very basic study of exterior calculus in differential geometry. The [itex]\alpha^k[/itex] in the previous scenario is just a label for the degree, used to state supercommutativity; it does not mean the wedge product of [itex]\alpha[/itex] with itself k times. Note also that for a one-form (or any odd-degree form) [itex]\alpha[/itex], the wedge [itex]\alpha\wedge\alpha[/itex] is the 0 vector by antisymmetry, so that reading would mostly be useless anyway; for even-degree forms, however, [itex]\alpha\wedge\alpha[/itex] need not vanish.
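The degree dependence follows in one line from the supercommutativity rule quoted in the first post, taking [itex]k = l[/itex]:

[tex]\alpha^k \wedge \alpha^k = (-1)^{k^2}\, \alpha^k \wedge \alpha^k[/tex]

For odd k the sign is -1, forcing [itex]\alpha^k \wedge \alpha^k = 0[/itex]; for even k it is +1 and nothing is forced. For instance (my example, assuming [itex]\dim V \geq 4[/itex]), the 2-form [itex]\omega = e_{12}^* + e_{34}^*[/itex] satisfies [itex]\omega \wedge \omega = 2\, e_{1234}^* \neq 0[/itex].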

latentcorpse said:
(iii) What is [itex]\Lambda^{n}(V^*)[/itex] - is it a set?

thanks for your help.

The nth exterior power of a vector space V is, rigorously, the vector space [itex]\Lambda^n V[/itex] obtained from the n-fold tensor power [itex]V \otimes \dots \otimes V[/itex] by quotienting out the subspace spanned by elementary tensors with a repeated factor. Concretely, for n = 2 you start from formal combinations of pairs (u, v) and impose the relations (u+v, w) = (u, w) + (v, w), (u, v+w) = (u, v) + (u, w), a(u, v) = (au, v) = (u, av), and (u, u) = 0; the class of (u, v) is then written [itex]u \wedge v[/itex]. As a (degenerate) example, if you regard [itex]\mathbb{R}[/itex] as a one-dimensional vector space, then [itex]\Lambda^3 \mathbb{R}[/itex] collapses to the zero space: any wedge of three vectors from a one-dimensional space has two proportional factors, so every element is identified with the 0 vector. This construction really comes into its own in the development of tensors and tensor calculus, where we do operations on many vectors per tensor.
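If it helps to see the quotient "in action", here is a minimal computational sketch (my own illustration, not part of the thread): a form is stored as a dict from strictly increasing index tuples to coefficients, and wedging sorts the concatenated indices while tracking the permutation sign and killing repeated indices, which is exactly the effect of the quotient relations.

```python
# Minimal exterior-algebra sketch on basis wedges (my own illustration).

def sort_with_sign(indices):
    """Bubble-sort indices by adjacent swaps, tracking the sign.
    Returns (sign, sorted_tuple); sign is 0 if an index repeats."""
    idx = list(indices)
    sign = 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    if len(set(idx)) < len(idx):
        return 0, tuple(idx)
    return sign, tuple(idx)

def wedge(x, y):
    """Wedge of two forms given as {index-tuple: coefficient} dicts."""
    out = {}
    for I, a in x.items():
        for J, b in y.items():
            sign, K = sort_with_sign(I + J)
            if sign:
                out[K] = out.get(K, 0) + sign * a * b
    return {K: c for K, c in out.items() if c != 0}

# -9 e2 ^ e4 ^ e1: two adjacent swaps bring e1 to the front, sign +1
e1 = {(1,): 1}
print(wedge({(2, 4): -9}, e1))   # {(1, 2, 4): -9}

# supercommutativity for a 2-form and a 1-form: k*l = 2, sign +1
alpha = {(1, 2): 1, (2, 4): -3}
assert wedge(alpha, e1) == wedge(e1, alpha)
```

The print line reproduces the [itex]-9\, e_{24}^* \wedge e_1^* = -9\, e_{124}^*[/itex] computation from post #1, with the 9 kept.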
 
  • #5
ok cheers. why is that vector space quotiented by that subspace though?
 

FAQ: Solving Algebra Problems: Bilinear Multiplication & Supercommutativity

1. What is bilinear multiplication in algebra?

Bilinear multiplication is a product of two vector-space elements that is linear in each argument separately: scaling or adding inputs in one slot, with the other held fixed, scales or adds the output accordingly. The dot product, the cross product, and the wedge product are all bilinear; depending on the product, the result may be a scalar, a vector, or a higher-degree object.

2. What is supercommutativity in algebra?

Supercommutativity is a graded refinement of commutativity: swapping two homogeneous elements of degrees k and l changes the product only by the sign [itex](-1)^{kl}[/itex]. In particular, even-degree elements commute with everything, while swapping two odd-degree elements introduces a minus sign.

3. How do you solve algebra problems involving bilinear multiplication?

To solve problems involving a bilinear multiplication, first expand using linearity in each argument separately, so the product is expressed in terms of products of basis elements; then apply the specific rules of the product in question (for the wedge product: antisymmetry, and the vanishing of any term with a repeated factor) to simplify.

4. Can you provide an example of a problem involving bilinear multiplication?

Sure, here's an example using the dot product, which is a bilinear map taking a pair of vectors to a scalar: given a = [1, 2, 3] and b = [4, 5, 6], a · b = ∑<sub>i</sub> a<sub>i</sub> b<sub>i</sub> = 1·4 + 2·5 + 3·6 = 32. Bilinearity means, for instance, that (2a) · b = 2(a · b) = 64, and (a + b) · c = a · c + b · c for any c.

5. How is supercommutativity useful in solving algebra problems?

Supercommutativity is useful because it lets us reorder the factors in a product while tracking only a predictable sign [itex](-1)^{kl}[/itex], as in the rearrangement of [itex]e_2^* \wedge e_4^* \wedge e_1^*[/itex] above. This makes it possible to bring every term into a standard increasing-index form so that terms can be compared or cancelled. It is also a fundamental property in areas such as exterior algebra and differential geometry.
