Several Dimensional Multiplication

  • Context: Undergrad
  • Thread starter: ScaryManInThePa
  • Tags: Multiplication

Discussion Overview

The discussion revolves around the multiplication of multi-dimensional arrays, specifically focusing on the properties and operations involving tensors and higher-dimensional matrices. Participants explore the conceptual framework for understanding these mathematical objects, their applications in linear programming, and the notation used in their manipulation.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant seeks clarification on how to multiply multi-dimensional objects, particularly in the context of linear programming.
  • Another participant distinguishes between a 7-dimensional matrix (7x7 array) and a 7-dimensional array (n^7 array), explaining the implications for multiplication.
  • It is noted that a matrix operates on vectors to yield vectors, while higher-order tensors can operate on multiple vectors to return scalars.
  • Participants discuss the utility of the Einstein summation convention for simplifying expressions involving sums over indices.
  • One participant expresses interest in finding accessible texts on tensors and their arithmetic, indicating a limited background in mathematics.
  • Another participant mentions the differing perspectives of physicists and mathematicians on tensors and suggests that the best approach is to view them as multilinear maps.
  • There is mention of the necessity to consider covariant and contravariant indices depending on the metric of the space being worked in.
  • Participants acknowledge that there are multiple methods for contracting indices when multiplying tensors, indicating a variety of approaches to the problem.

Areas of Agreement / Disagreement

Participants express varying levels of understanding and approaches to the topic, with no consensus reached on a singular method or text for learning about multi-dimensional multiplication and tensors. Multiple competing views on the nature and operations of tensors are present.

Contextual Notes

Limitations include the participants' differing levels of mathematical experience and the lack of a definitive resource for learning about tensors suitable for those with limited math backgrounds. The discussion also highlights the complexity of tensor operations and the need for clarity in notation.

ScaryManInThePa
I've been googling but I haven't found anything useful. I am trying to understand how multiplying a matrix (or object of numbers, whatever you want to call it; a matrix is 2D, a vector is 1D) of several dimensions works. I am writing some models for solving linear programs and they involve a lot of summations, which look very sloppy. I am hoping to represent things more elegantly. So how does multiplication of several-dimensional objects work?

Thanks
 
So are you talking about a matrix in 7 dimensions or a 7 dimensional array of numbers? That's a little confusing... by a 7D matrix I mean a 7*7 array of numbers. By a 7 dimensional array of numbers I mean a n*n*n*n*n*n*n array of numbers for some positive integer n.

A matrix is a linear object that eats a vector and returns a vector. So M.v = u can be written using indices as M_ij v_j = u_i, where the index j is summed over the dimension of the space, i.e. 1 to n, and the index i is free to be any value in the range 1 to n. (See http://en.wikipedia.org/wiki/Einstein_summation_convention)
Equivalently, a matrix is a multilinear object that eats two vectors and returns a scalar: u.M.v = u_i M_ij v_j.

Note that linear means that M.(u + x*v) = M.u + x*M.v for x a scalar and u, v vectors. This is extended naturally to multilinearity.
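As a concrete sketch of the two points above (using numpy, which comes up later in the thread; the matrix and vectors here are arbitrary examples), the index expression M_ij v_j = u_i and the linearity property look like:

```python
import numpy as np

# A 3x3 matrix acting on a vector: u_i = M_ij v_j
M = np.arange(9.0).reshape(3, 3)
v = np.array([1.0, 2.0, 3.0])

u = np.einsum('ij,j->i', M, v)   # repeated index j is summed over
assert np.allclose(u, M @ v)     # same thing as the built-in matrix product

# Linearity: M.(w + x*v) = M.w + x*M.v for a scalar x
x = 2.5
w = np.array([4.0, 5.0, 6.0])
assert np.allclose(M @ (w + x * v), M @ w + x * (M @ v))

# The bilinear form u.M.v = u_i M_ij v_j returns a scalar
s = np.einsum('i,ij,j->', u, M, v)
```

The einsum subscript string mirrors the index notation directly: repeated indices are summed, and the indices after `->` are the free ones.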

Higher order tensors (http://en.wikipedia.org/wiki/Tensor) are multilinear objects that eat more vectors and return a scalar.
They can be written with more indices, T_ijk..., and are the higher-dimensional arrays that I mentioned above.
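To illustrate (a sketch with an arbitrary random rank-3 tensor), a tensor T_ijk fed three vectors returns a scalar, and feeding it fewer vectors leaves free indices behind:

```python
import numpy as np

# A rank-3 tensor T_ijk eats three vectors and returns a scalar:
# s = T_ijk a_i b_j c_k  (sum over all three indices)
n = 4
rng = np.random.default_rng(0)
T = rng.standard_normal((n, n, n))
a, b, c = rng.standard_normal((3, n))

s = np.einsum('ijk,i,j,k->', T, a, b, c)

# Feeding in only one vector leaves two free indices: a rank-3
# tensor contracted with one vector is a matrix (rank-2 array).
M = np.einsum('ijk,k->ij', T, c)
assert M.shape == (n, n)
# Contracting the remaining matrix with a and b recovers the scalar.
assert np.isclose(np.einsum('ij,i,j->', M, a, b), s)
```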

(One thing I've not mentioned is the difference between contravariant and covariant indices, which is related to the vector space and its dual - but you probably don't need to worry about this.)

When working with explicit realisations of tensors and matrices, sums over the indices become almost unavoidable. These sums can be optimised and given notational conveniences like the dot product in various computer languages such as Matlab, Mathematica, numpy, etc...

And when working with them by hand, the Einstein (implicit) summation convention is very handy. As are various inner product and dot product notations.
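As an example of the notational conveniences mentioned above, the same contraction can be written three ways in numpy (the matrices here are arbitrary): the built-in matrix product, tensordot over named axes, and an einsum string that reads like the Einstein convention.

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)

C1 = A @ B                                  # built-in matrix product
C2 = np.tensordot(A, B, axes=([1], [0]))    # contract A's axis 1 with B's axis 0
C3 = np.einsum('ij,jk->ik', A, B)           # C_ik = A_ij B_jk, sum over j

assert np.allclose(C1, C2) and np.allclose(C1, C3)
```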
 
I mean like a 7 dimensional array. So I was thinking a 7 dimensional "square" array (all dimensions the same size, so an n^7 array) multiplied by another 7 dimensional array of the same size (n^7) would give me a new 7 dimensional array. A 7 dimensional array by a 6 dimensional array might give me a new 6 dimensional array. A 7 dimensional array by a 1 dimensional array (vector) might give me a new vector.
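The shapes described above depend on how many index pairs are contracted; a small sketch (with n = 2 and random entries, just to keep the arrays tiny) shows how each contraction of one index pair lowers the total rank by two:

```python
import numpy as np

n = 2
rng = np.random.default_rng(1)
A = rng.standard_normal((n,) * 7)   # a "7 dimensional array": n^7 entries
B = rng.standard_normal((n,) * 7)
v = rng.standard_normal(n)

# Contract one index of A with the vector: rank 7 -> rank 6.
C = np.tensordot(A, v, axes=([6], [0]))
assert C.ndim == 6

# Contract one index pair between A and B: rank 7 + 7 - 2 = 12 ...
D = np.tensordot(A, B, axes=([6], [0]))
assert D.ndim == 12

# ... while contracting all seven index pairs returns a scalar.
s = np.tensordot(A, B, axes=7)
assert np.ndim(s) == 0
```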

I think a tensor is what I was thinking of. Do you know a good text (for someone with limited math experience) that gives all the properties and arithmetic of these? Or what class in a college would teach this? I took an intro linear algebra class a while ago, but we didn't learn about these.
 
I don't know of a good, easy text. Maybe you can find something in the math.stackexchange question http://math.stackexchange.com/questions/10282/an-introduction-to-tensors . The Wikipedia article is also not bad.

Physicists and mathematicians often talk about tensors in very different ways. Depending on what you're doing, the different approaches will be more or less useful. The best way to think of them is as multilinear maps - most conveniently written using abstract index notation, which is simply made concrete (with explicit summation and/or integration) when need be.

Depending on the metric of the space you're working in, you might need to worry about covariant and contravariant indices. These correspond to slots in the tensor that eat vectors from the given vector space or from its dual space. A nondegenerate metric gives an isomorphism between these spaces, and if the basis is chosen to be orthonormal, then the metric is proportional to the identity matrix and index position can be ignored.

As for multiplying a couple of 7-index tensors, there are multiple ways of contracting the indices. The suggestions you made are all possible...
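To see that different contractions really are different operations, here is a sketch with two arbitrary rank-2 tensors (matrices, to keep it small); the index pattern picks which product you get:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

P1 = np.einsum('ij,jk->ik', A, B)   # ordinary matrix product A.B
P2 = np.einsum('ij,kj->ik', A, B)   # A.B^T, a different contraction
s  = np.einsum('ij,ij->', A, B)     # full contraction: a scalar

assert not np.allclose(P1, P2)      # generically different results
assert np.isclose(s, np.trace(A @ B.T))
```

With more indices the number of inequivalent contraction patterns grows quickly, which is why the notation has to make explicit which index pairs are summed.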

Anyway, the wikipedia article http://en.wikipedia.org/wiki/Abstract_index_notation is pretty good, and googling for index notation primer turns up some decent results.
 
