If your teacher has told you something like "it's something that transforms under rotations according to the tensor transformation law", it's understandable that you're confused, because what does that statement even mean? It's been almost 20 years since someone tried to explain tensors to me that way, and it still really irritates me when I think about it.
I will explain one way to interpret that statement for the case of "covariant vectors" and "contravariant vectors". (I really hate those terms).
Consider a function T that takes coordinate systems to ordered triples of real numbers. We can choose to denote the members of the triple corresponding to a coordinate system S by T(S)^i, or by T(S)_i. If S' is another coordinate system that can be obtained from S by a rotation, then it might be useful to have formulas for the T(S')^i in terms of the T(S)^i, and for the T(S')_i in terms of the T(S)_i. Those formulas can be thought of as describing how (T(S)^1, T(S)^2, T(S)^3) and (T(S)_1, T(S)_2, T(S)_3) "transform" when the coordinate system is changed from S to S'. It's possible that neither of those formulas looks anything like the tensor transformation law, but it's also possible that one of them looks exactly like the tensor transformation law.
If that's the case with the formula for the T(S')^i, then some people call the triple (T(S)^1, T(S)^2, T(S)^3) a "contravariant vector". And if it's the case with the other formula, then some people call the triple (T(S)_1, T(S)_2, T(S)_3) a "covariant vector". I think that terminology is absolutely horrendous. The information about how the triple will "transform" isn't present in the triple itself. It's in the function T and the choice about where to put the indices. So if anything here should be called a "contravariant vector", it should be something like the pair (T, "upstairs"). I also don't like that this defines "vector" in a way that has nothing to do with the standard definition in mathematics: a vector is a member of a vector space.
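To make the two laws concrete, here is a sketch of what they look like for a linear change of coordinates x'^i = R^i{}_j x^j (summation convention; the symbol R is my own choice here, not something from the discussion above):

```latex
% Contravariant ("upstairs") components transform with R itself;
% covariant ("downstairs") components transform with the inverse:
\[
T(S')^i = R^i{}_j \, T(S)^j, \qquad
T(S')_i = (R^{-1})^j{}_i \, T(S)_j .
\]
% For rotations, R^{-1} = R^T, so the two laws happen to agree
% numerically; for a general linear change of coordinates they differ.
```

This is why the distinction is invisible if you only ever rotate Cartesian axes, and only becomes important for more general coordinate changes.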
If you understand this, then it's not too hard to see how it generalizes to other tensors. It's just hard to explain.
I consider this entire approach to tensors obsolete. A much better approach is to define tensors as multilinear functions that take tuples of tangent vectors and cotangent vectors to real numbers.
This post and the ones linked to at the end explain some of the basics. (If you read it, skip the first two paragraphs. You can also ignore everything that mentions the metric if you want to. What you need is an understanding of the terms "manifold", "tangent space", "cotangent space" and "dual basis").
Fredrik said:
Let V be a finite-dimensional vector space, and V* its dual space. A tensor of type (n,m) is a multilinear map
$$T:\underbrace{V^*\times\cdots\times V^*}_{\text{$n$ factors}}\times\underbrace{V\times\cdots\times V}_{\text{$m$ factors}}\rightarrow\mathbb R.$$
The components of the tensor in a basis \{e_i\} for V are the numbers
$$T^{i_1\dots i_n}{}_{j_1\dots j_m}=T(e^{i_1},\dots,e^{i_n},e_{j_1},\dots,e_{j_m}),$$
where the e^i are members of the dual basis of \{e_i\}. The multilinearity of T ensures that the components will change in a certain way when you change the basis. The rule that describes that change is called "the tensor transformation law".
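As a minimal numerical sketch of that definition (not from the quote above; the matrix M defining the tensor and the change-of-basis matrix A are arbitrary choices for illustration), here is a (0,2) tensor on R^3 defined as a bilinear map, with its components computed in two bases and the transformation law checked directly:

```python
import numpy as np

# An arbitrary (0,2) tensor on R^3 as a bilinear map: T(u, v) = u . (M v).
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [4.0, 0.0, 1.0]])

def T(u, v):
    return u @ M @ v

# Components in a basis {e_i}: T_ij = T(e_i, e_j).
def components(basis):
    return np.array([[T(ei, ej) for ej in basis] for ei in basis])

T_old = components(np.eye(3))      # components in the standard basis

# Change of basis: e'_i = A^j_i e_j, i.e. column i of A holds the
# old-basis coordinates of the new basis vector e'_i.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
T_new = components(A.T)            # row i of A.T is e'_i

# The tensor transformation law for a (0,2) tensor:
# T'_ij = A^k_i A^l_j T_kl, which in matrix form is T' = A^T T A.
assert np.allclose(T_new, A.T @ T_old @ A)
```

The point of the exercise: the "transformation law" is not an extra axiom you impose on the triple of components; it falls out automatically from multilinearity once you evaluate the same map T on two different bases.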