Several Dimensional Multiplication

In summary, the conversation discusses how multiplication of multi-dimensional arrays of numbers works and how it relates to writing linear programs more compactly. It also touches on summation notation, and it is suggested to look into resources on tensor arithmetic, particularly abstract index notation, for a better understanding of the concept.
  • #1
ScaryManInThePa
I've been googling but I haven't found anything useful. I am trying to understand how multiplying a matrix (or object of numbers, whatever you want to call it; a matrix is 2D, a vector is 1D) of several dimensions works. I am writing some models for solving linear programs, and they involve a lot of summations which look very sloppy. I am hoping to represent things more elegantly. So how does multiplication of several-dimensional objects work?

Thanks
 
  • #2
So are you talking about a matrix in 7 dimensions or a 7 dimensional array of numbers? That's a little confusing... by a 7D matrix I mean a 7*7 array of numbers. By a 7 dimensional array of numbers I mean a n*n*n*n*n*n*n array of numbers for some positive integer n.

A matrix is a linear object that eats a vector and returns a vector. So M.v = u can be written using indices as M_ij v_j = u_i, where the index j is summed over the dimension of the space, i.e. 1 to n, and the index i is free to be any value in the range 1 to n. See http://en.wikipedia.org/wiki/Einstein_summation_convention
Equivalently a matrix is a multilinear object that eats two vectors and returns a scalar: u.M.v = u_i M_ij v_j.

Note that linear means that M.(u + x*v) = M.u + x*M.v for x a scalar and u, v vectors. This is extended naturally to multilinearity.

Higher order tensors (http://en.wikipedia.org/wiki/Tensor) are multilinear objects that eat more vectors and return a scalar.
They can be written with more indices, T_ijk..., and are the higher-dimensional arrays that I mentioned above.

(One thing I've not mentioned is the difference between contravariant and covariant indices, which is related to the vector space and its dual - but you probably don't need to worry about this.)

When working with explicit realisations of tensors and matrices, sums over the indices become almost unavoidable. These sums can be optimised and given notational conveniences (like the dot product) in various computer languages and libraries such as Matlab, Mathematica, numpy, etc...

And when working with them by hand, the Einstein (implicit) summation convention is very handy, as are the various inner product and dot product notations.
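Since the thread mentions numpy, the index expressions above can be sketched with np.einsum, which implements exactly this summation convention. (An illustrative sketch; the sizes and random values are made up, not from the original post.)

```python
import numpy as np

# A hypothetical 3-dimensional space, a matrix M, and a vector v.
n = 3
rng = np.random.default_rng(0)
M = rng.random((n, n))
v = rng.random(n)

# M.v with explicit indices: u_i = M_ij v_j (the repeated j is summed).
u = np.einsum('ij,j->i', M, v)
assert np.allclose(u, M @ v)

# The bilinear form u.M.v = u_i M_ij v_j collapses to a scalar.
s = np.einsum('i,ij,j->', u, M, v)
assert np.isclose(s, u @ M @ v)
```

The einsum string is a direct transcription of the index notation: repeated letters are summed, letters after `->` are the free indices of the result.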
 
  • #3
I mean like a 7 dimensional array. So I was thinking a 7 dimensional "square" array (all dimensions the same size, so an n^7 array) multiplied by another 7 dimensional array of the same size (n^7) would give me a new 7 dimensional array. A 7 dimensional array by a 6 dimensional array might give me a new 6 dimensional array. A 7 dimensional array by a 1 dimensional array (vector) might give me a new vector.
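The shape bookkeeping described above can be checked directly with numpy's tensordot. This is just an illustrative sketch, with a small rank-3 array standing in for the 7-dimensional one (the pattern is identical, only with more axes):

```python
import numpy as np

# A "square" rank-3 array (n^3 numbers) contracted with a vector
# loses one dimension, as guessed above.
n = 4
rng = np.random.default_rng(1)
T = rng.random((n, n, n))
v = rng.random(n)

# Contract the last axis of T with v: R_ij = sum_k T_ijk v_k.
R = np.tensordot(T, v, axes=([2], [0]))
print(R.shape)  # (4, 4) -- one dimension fewer
```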

I think a tensor is what I was thinking of. Do you know a good text (for someone with limited math experience) that covers the arithmetic and properties of these? Or what class in a college would teach this? I took an intro linear algebra class a while ago but we didn't learn about these.
 
  • #4
I don't know of a good, easy text. Maybe you can find something in the math.stackexchange question http://math.stackexchange.com/questions/10282/an-introduction-to-tensors . The wikipedia article is also not bad.

Physicists and mathematicians often talk about tensors in very different ways. Depending on what you're doing, the different approaches will be more or less useful. The best way to think of them is as multilinear maps - most conveniently written using abstract index notation, which is simply made concrete (with explicit summation and/or integration) when need be.

Depending on the metric in the space you're working in, you might need to worry about covariant and contravariant indices. These correspond to slots in the tensor that eat vectors from the given vector space or from its dual space. A nondegenerate metric gives an isomorphism between these spaces, and if the basis is chosen to be orthonormal, then the metric is proportional to the identity matrix and index position can be ignored.

As for multiplying a couple of 7-index tensors, there are multiple ways of contracting the indices. The suggestions you made are all possible...
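To illustrate the "multiple ways of contracting" point, here is a small numpy sketch using rank-3 tensors (the 7-index case works the same way, just with more index letters; values are made up):

```python
import numpy as np

n = 3
rng = np.random.default_rng(2)
A = rng.random((n, n, n))
B = rng.random((n, n, n))

# Contract one pair of indices: rank 3 + rank 3 -> rank 4.
C1 = np.einsum('ijk,klm->ijlm', A, B)

# Contract two pairs of indices: -> rank 2.
C2 = np.einsum('ijk,jkm->im', A, B)

# Contract all three pairs: -> a scalar.
c3 = np.einsum('ijk,ijk->', A, B)

print(C1.shape, C2.shape)  # (3, 3, 3, 3) (3, 3)
```

Each choice of which index pairs to sum over is a different, equally valid "product" of the two tensors, which is why there is no single multiplication rule for higher-rank arrays.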

Anyway, the wikipedia article http://en.wikipedia.org/wiki/Abstract_index_notation is pretty good, and googling for index notation primer turns up some decent results.
 
  • #5
Thank you for your question. Multiplication of several-dimensional objects, also known as multidimensional multiplication, can be a complex concept to understand. Essentially, it involves multiplying values from each dimension of the objects together to get a final result. This can be represented using various mathematical notations, such as the dot product or tensor product.

In the context of linear programming, multidimensional multiplication can be used to represent relationships between variables in a more concise and elegant way. This can help simplify the equations and make them easier to solve.
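As a minimal sketch of that point (the matrix A, vector x, and bounds b below are made-up illustration, not from any model in the thread): a linear-program constraint written as nested summations collapses to one matrix-vector product.

```python
import numpy as np

# Hypothetical LP constraints, written as summations:
#   sum_j A[i][j] * x[j] <= b[i]   for each row i.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, 1.0])
b = np.array([4.0, 5.0])

# Explicit summation form.
lhs_loops = [sum(A[i, j] * x[j] for j in range(2)) for i in range(2)]

# The same thing as a single matrix-vector product.
lhs_matrix = A @ x
assert np.allclose(lhs_loops, lhs_matrix)

print(all(lhs_matrix <= b))  # True
```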

One way to think about multidimensional multiplication is to imagine each dimension as a different "layer" of the object. When we multiply these layers together, we are essentially combining different aspects or components of the object to get a final result.

I would recommend looking into some specific examples and practicing with them to gain a better understanding of how multidimensional multiplication works. Additionally, there are many online resources and textbooks available that can provide more in-depth explanations and examples of this concept.

I hope this helps to clarify things for you and good luck with your linear programming models!
 

1. What is "Several Dimensional Multiplication"?

Several Dimensional Multiplication is a mathematical operation that involves multiplying numbers with multiple dimensions. This can include matrices, vectors, or tensors.

2. How is Several Dimensional Multiplication different from regular multiplication?

The main difference is that Several Dimensional Multiplication involves multiplying numbers with multiple dimensions, while regular multiplication only involves multiplying two numbers together. Several Dimensional Multiplication also follows different rules and properties than regular multiplication.
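One of those different rules can be sketched quickly in numpy: unlike scalar multiplication, matrix multiplication is not commutative (illustrative values only).

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# Multiplying in different orders gives different results.
print(A @ B)  # [[2 1], [4 3]]
print(B @ A)  # [[3 4], [1 2]]
assert not np.array_equal(A @ B, B @ A)
```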

3. What are some common applications of Several Dimensional Multiplication in science?

Several Dimensional Multiplication is commonly used in fields such as physics, engineering, and computer science. It is used to solve complex equations, analyze data, and perform operations on multi-dimensional systems.

4. How do I perform Several Dimensional Multiplication?

The process for performing Several Dimensional Multiplication varies depending on the type of numbers being multiplied. For matrices, you can use the row-by-column method or matrix multiplication rules. For vectors, you can use the dot product or cross product. It is important to follow the specific rules and properties for each type of number in order to accurately perform Several Dimensional Multiplication.
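The operations named above can be sketched in numpy (illustrative values only; these are the standard dot product, cross product, and row-by-column matrix multiplication):

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])

print(np.dot(u, v))    # 0.0 -- dot product of orthogonal vectors
print(np.cross(u, v))  # [0. 0. 1.] -- cross product

M = np.eye(3)          # identity matrix
print(M @ v)           # [0. 1. 0.] -- row-by-column multiplication
```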

5. What are some challenges associated with Several Dimensional Multiplication?

Several Dimensional Multiplication can be challenging for individuals who are not familiar with the rules and properties of each type of number being multiplied. It is also important to carefully track the dimensions of each number to ensure they are compatible for multiplication. Additionally, rounding errors and the complexity of computations can also be challenges when performing Several Dimensional Multiplication.
