This question is helping me understand the various points of view on tensors better.
That is, some people think of tensors analogously to the way they think of linear maps, as matrices, whereas I think of tensors the way I think of linear maps: via the axioms they satisfy. I follow Emil Artin's advice in his book Geometric Algebra: never introduce matrices unless you need to compute something, like a determinant, and then throw them out again afterwards.
But getting back to the question of "arrays" of higher dimension than 2: recall there are actually many ways to write a matrix. Some people, like me, usually just write a single letter like M, or, if they want to represent the entries, a general letter with subscripts, like {a_ij} (sorry about them subscripts).
Now there is no hindrance to writing more subscripts, like {a_ijk}, and getting a representation of a three-dimensional matrix, or array. So unless you want to lay one out physically in space, this is a pretty good way to write a 3-dimensional array.
Thus the method of writing letters with subscripts really is the matrix representation of a tensor. And it works as well for that as writing n-tuples of numbers works for vectors in dimensions higher than two: if you think of (a1,...,an) as a vector in n dimensions even though you do not draw it, then you can also think of {a_ijkl} as a 4-dimensional tensor even though it is hard to lay out fully in 4-dimensional space.
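To make this concrete, here is a minimal Python sketch (the entries are made up purely for illustration): a 3-index array {a_ijk} is just a rule assigning a number to each triple of subscripts, which a computer stores happily without any spatial picture.

```python
# a 3-dimensional "matrix" {a_ijk}: entries indexed by three subscripts.
# the entry formula i + 2j + 4k is invented just for this illustration.
n = 2
a = [[[i + 2*j + 4*k for k in range(n)] for j in range(n)] for i in range(n)]

# accessing the entry a_{1,0,1} is just a[1][0][1] -- no physical
# 3-dimensional layout is ever needed.
print(a[1][0][1])  # 1 + 0 + 4 = 5
```

Nothing changes if you add a fourth or tenth subscript; only the nesting gets deeper.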
Now a calculator can surely be programmed to multiply these things just using the indices, without ever looking at a physical picture, so there is not much difference between a 3-dimensional array and a 10-dimensional one, or between tensors of those types.
Abstractly, tensor multiplication by "contracting indices" is just the higher-dimensional analog of an expression like [a_ij][x_i] = [y_j], i.e. y_j = sum_i a_ij x_i, for multiplying a matrix by a vector. Of course you need upper and lower indices to tell the rows from the columns.
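Here is a hedged Python sketch of that point, with made-up entries: first the matrix-vector contraction y_j = sum_i a_ij x_i, then exactly the same loop pattern with one more free index for a 3-index tensor.

```python
# matrix-vector contraction: y_j = sum_i a_ij x_i
a = [[1, 2], [3, 4]]   # a_ij, made-up entries
x = [5, 6]             # x_i
y = [sum(a[i][j] * x[i] for i in range(2)) for j in range(2)]
print(y)  # [1*5 + 3*6, 2*5 + 4*6] = [23, 34]

# the same idea one dimension up: z_jk = sum_i t_ijk x_i
t = [[[1, 0], [0, 2]], [[3, 0], [0, 4]]]   # t_ijk, made-up entries
z = [[sum(t[i][j][k] * x[i] for i in range(2)) for k in range(2)]
     for j in range(2)]
print(z)  # [[23, 0], [0, 34]]
```

The contraction is the same mechanical summation over a repeated index in both cases; the number of remaining free indices is the only difference.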
So using indices for tensors is like using matrices for linear maps. The conceptual way, on the other hand, uses instead the fact that for a linear map we have f(x+y) = f(x) + f(y) (and f(cx) = c f(x)), and for a tensor we have this in each variable separately.
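To make the contrast concrete, here is a small Python sketch (entries again made up): a bilinear map defined by an index array satisfies the additivity axiom in each variable, and that axiom, not the array, is the conceptual object.

```python
# a bilinear map B(x, y) = sum_ij a_ij x_i y_j; the entries a_ij are made up.
a = [[1, 2], [3, 4]]

def B(x, y):
    return sum(a[i][j] * x[i] * y[j] for i in range(2) for j in range(2))

x, xp, y = [1, 2], [3, 5], [4, 7]

# additivity in the first variable -- the axiom a tensor satisfies
# in each slot separately:
lhs = B([x[0] + xp[0], x[1] + xp[1]], y)
rhs = B(x, y) + B(xp, y)
print(lhs == rhs)  # True
```

The check passes for any entries a_ij whatsoever, which is exactly why reasoning from the axiom never needs the indices.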
Absolutely true: at times one needs to calculate something, and that is the only appropriate time for writing indices or matrices. When one is only reasoning about them, or even making a formal calculation that depends only on multilinearity, indices and matrices are superfluous and cumbersome.
Who, for example, would use matrices to check that if a matrix times each of several vectors is zero, then the same is true of their sum?