
Let's say you have a matrix A and a matrix B (both 2 by 2 matrices). If I want to calculate the tensor product A ⊗ B, then the answer is basically just a matrix of matrices. In other words, I do this:

The first block of the tensor product is the scalar multiple A_{11} B, the second block is A_{12} B, the third block is A_{21} B, and the last block is A_{22} B; that is,

A ⊗ B = \begin{pmatrix} A_{11} B & A_{12} B \\ A_{21} B & A_{22} B \end{pmatrix}

Overall, this makes a 4 by 4 matrix (which I will call C, even though I know it should really be denoted A ⊗ B) with elements:

C_{11} = A_{11} B_{11},  C_{12} = A_{11} B_{12},  C_{13} = A_{12} B_{11},  C_{14} = A_{12} B_{12}

C_{21} = A_{11} B_{21},  C_{22} = A_{11} B_{22},  C_{23} = A_{12} B_{21},  C_{24} = A_{12} B_{22}

C_{31} = A_{21} B_{11},  C_{32} = A_{21} B_{12},  C_{33} = A_{22} B_{11},  C_{34} = A_{22} B_{12}

C_{41} = A_{21} B_{21},  C_{42} = A_{21} B_{22},  C_{43} = A_{22} B_{21},  C_{44} = A_{22} B_{22}
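As a sanity check, the recipe above is exactly the Kronecker product that NumPy computes with `np.kron`. Here is a quick sketch; the particular values of A and B are my own made-up example, not from the question:

```python
import numpy as np

# Arbitrary 2x2 example matrices (values chosen only for illustration)
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# The Kronecker (tensor) product: a 4x4 matrix of scaled copies of B
C = np.kron(A, B)

# Verify the block structure: top-left block is A_{11}*B, and so on
assert np.array_equal(C[:2, :2], A[0, 0] * B)
assert np.array_equal(C[:2, 2:], A[0, 1] * B)
assert np.array_equal(C[2:, :2], A[1, 0] * B)
assert np.array_equal(C[2:, 2:], A[1, 1] * B)

# Spot-check one element formula, e.g. C_{23} = A_{12} * B_{21}
# (the text uses 1-based indices; NumPy is 0-based)
assert C[1, 2] == A[0, 1] * B[1, 0]

print(C)
# [[ 5  6 10 12]
#  [ 7  8 14 16]
#  [15 18 20 24]
#  [21 24 28 32]]
```

If all the assertions pass, the block description and the element-by-element list agree with each other.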

I just want to ask: Is this really all there is to taking a tensor product, or is this just some simplified special case? I ask because every thread, video, and web page I have consulted about tensor products has made them out to be some daunting process that is difficult to explain and just about impossible to illustrate with an example.