the word tensor has two meanings: it is both a verb and a noun. as a verb, it names a sort of product that can be performed between two vector spaces; as a noun, it refers to an element of such a product.
when you tensor multiply a vector space V by the dual W* of another vector space W, the result V⊗W* maps naturally to the space of linear transformations from W to V.
when the spaces are finite dimensional, this map is an isomorphism. one also knows that this space of linear transformations is isomorphic, though not naturally, to the space of matrices of size dim(V) by dim(W).
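as a sketch of the isomorphism above (my own illustration, not from the post): under it, a simple tensor v⊗φ in V⊗W* corresponds to the linear map w ↦ φ(w)·v, and in coordinates its matrix is just the outer product of v with φ.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])   # a vector in V = R^3
phi = np.array([4.0, 5.0])      # a covector in W* = (R^2)*

# the simple tensor v (x) phi becomes the dim(V) x dim(W) matrix
# given by the outer product, here 3 x 2
M = np.outer(v, phi)

w = np.array([1.0, -1.0])       # a vector in W
# applying the matrix to w agrees with the map w |-> phi(w) * v
assert np.allclose(M @ w, (phi @ w) * v)
```

every matrix is a sum of such outer products, which is why the map is onto in finite dimensions.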
i myself am not too up on this rank language, but my impression from reading other people's posts is that when the space V is always the same, say R^n, one tensors together exclusively copies of V and its dual V*, and the rank refers to the number of copies of each.
thus i would have thought such users would call a tensor with one copy of each, i.e. a matrix, a "rank (1,1) tensor", whereas rank 2 would mean a product of two copies of V.
of course for people who routinely choose bases, the distinction between V and V* is much less visible, hence one cannot distinguish between rank (1,1) and rank 2.
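one can see this numerically (again my own sketch, not part of the post): in a fixed basis, a (1,1) tensor and a (2,0) tensor are both just n x n arrays, but they transform by different rules under a change of basis; the two rules coincide exactly when the change of basis is orthogonal, which is why choosing orthonormal bases blurs the V vs V* distinction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))   # components of a (1,1) tensor (a linear map)
P = rng.standard_normal((n, n))   # a generic invertible change-of-basis matrix
Pinv = np.linalg.inv(P)

# the two transformation rules, applied to the same array A:
rule_11 = Pinv @ A @ P            # (1,1): one copy of V, one of V*
rule_20 = Pinv @ A @ Pinv.T       # (2,0): two copies of V

# for a generic P the results differ -- the array alone does not
# determine the tensor type
assert not np.allclose(rule_11, rule_20)

# but for an orthogonal Q (inverse == transpose) the rules agree,
# so the distinction disappears
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
assert np.allclose(np.linalg.inv(Q) @ A @ Q,
                   np.linalg.inv(Q) @ A @ np.linalg.inv(Q).T)
```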
but that is not my area.