Understanding the metric tensor

In summary, the conversation discusses the definitions of Riemannian and pseudo-Riemannian metric tensors, as well as the transformation matrices used to define them. There is confusion over the use of the symbol J and the term Jacobian matrix, as well as the terms direct/indirect transition matrix and basis/component (coordinate) transformation matrix. The conversation also explores the transformation from Cartesian to plane polar coordinates, and discusses the correct notation for defining the components of a Riemannian metric tensor. It is suggested that there may be two different definitions of metric tensors, depending on the type of space, and the question is raised of whether the existence of a metric implies the existence of a metric tensor.
  • #1
Rasalhague
I've seen a Riemannian metric tensor defined in terms of a matrix of its components thus:

[tex]g_{ij} = \left [ J^T J \right ]_{ij}[/tex]

and a pseudo-Riemannian metric tensor:

[tex]g_{\alpha \beta} = \left [ J^T \eta J \right ]_{\alpha \beta}[/tex]

where J is a transformation matrix of some kind. I've been trying to figure out which transformation matrix is meant here, since the symbol J and the term Jacobian matrix are used differently by different textbooks. Some use it for a matrix that transforms basis vectors, others for the inverse of this, which transforms components/coordinates. As I recall, in A Quick Introduction to Tensor Analysis, Sharipov uses the terms direct transition matrix S and indirect transition matrix [itex]T = S^{-1}[/itex]. But as these names are rather opaque, I'll call them the basis transformation matrix B and the component (coordinate) transformation matrix [itex]C = B^{-1}[/itex].

I think that in the above definitions, J = B (rather than C), giving us, for the component matrix of a Riemannian metric tensor:

[tex]g_{ij} = \left [ B^T B \right ]_{ij}.[/tex]

The next question is, which of the following matrices is meant: [itex]B_1[/itex] or [itex]B_2[/itex]?

[tex]B_1 \begin{bmatrix} ... \\ \vec{\textbf{e}}_i \\ ...
\end{bmatrix} = \begin{bmatrix} ... \\ \vec{\textbf{e}}_i \\ ...
\end{bmatrix}' \qquad \textup{or} \qquad \begin{bmatrix} ... & \vec{\textbf{e}}_i & ... \end{bmatrix} B_2 = \begin{bmatrix} ... & \vec{\textbf{e}}_i & ... \end{bmatrix}'[/tex]

I experimented with the transformation from Cartesian to plane polar coordinates, and concluded that [itex]B = B_1[/itex] and [itex]B^T = B_2[/itex], since

[tex]\begin{bmatrix} \cos \theta & \sin \theta \\ -r \sin \theta & r \cos \theta
\end{bmatrix} \begin{bmatrix} \vec{\textbf{e}_x} \\ \vec{\textbf{e}_y} \end{bmatrix} = \begin{bmatrix} \vec{\textbf{e}}_r \\ \vec{\textbf{e}}_{\theta} \end{bmatrix}[/tex]

and if we call this matrix B, then

[tex]B^T B = \begin{bmatrix} \cos \theta & -r \sin \theta \\ \sin \theta & r \cos \theta
\end{bmatrix} \begin{bmatrix} \cos \theta & \sin \theta \\ -r \sin \theta & r \cos \theta \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & r^2 \end{bmatrix} = g[/tex]

and then

[tex]\textup{d}s^2 = g_{ij} \, \textup{d}y^i \textup{d}y^j = \textup{d}r^2 + r^2 \, \textup{d}\theta^2[/tex]

which is the formula given for a line element in plane polar coordinates. I'm using [itex]y^i[/itex] to represent the coordinates of the current (new) system, and [itex]x^i[/itex] for coordinates of the previous (old) system. So if I've got this right,

[tex]B = \begin{bmatrix} \frac{\partial x^1}{\partial y^1} & \frac{\partial x^2}{\partial y^1} \\ \frac{\partial x^1}{\partial y^2} & \frac{\partial x^2}{\partial y^2} \end{bmatrix} \qquad \textup{and} \qquad B^T = \begin{bmatrix} \frac{\partial x^1}{\partial y^1} & \frac{\partial x^1}{\partial y^2} \\ \frac{\partial x^2}{\partial y^1} & \frac{\partial x^2}{\partial y^2} \end{bmatrix}.[/tex]

With any luck, that gives:

[tex]\left [ B^T B \right ]_{ij} = B_{ki} B_{kj} = g_{ij}[/tex]

summing over the indices on the y's.

But when I write this with index notation, I get

[tex]\frac{\partial x^i}{\partial y^k} \frac{\partial x^j}{\partial y^k} = g_{ij}[/tex]

which breaks the rules for where to write indices, since the summed-over indices are on the same level as each other, and the free indices on the left of the equality are on a different level from the free indices on the right. Have I made a mistake somewhere?


And how can I reconcile this with equation (10) of this article at Wolfram Mathworld, defining the pseudo-Riemannian metric tensor?

[tex]g_{\mu\nu} = \frac{\partial \xi^{\alpha}}{\partial x^{\mu}} \frac{\partial \xi^{\beta}}{\partial x^{\nu}} \eta_{\alpha\beta}[/tex]

http://mathworld.wolfram.com/MetricTensor.html

Supposing they're using x's as in eq. (1) to denote the coordinates of the current system, they seem to have the partial differentials the opposite way round to me. How would the equivalent definition for the components of a Riemannian metric tensor be written correctly? Is it possible (desirable? conventional?) to define the metric tensor's component matrix in terms of the component transformation matrix C?

I wondered if I might have inadvertently defined the inverse metric tensor, but that doesn't seem to work, since the inverse matrix of what I defined as g doesn't correctly define the line element:

[tex]\sum_{i=1}^{2} \sum_{j=1}^{2} g^{ij} \, \textup{d}y^i \textup{d}y^j = \textup{d}r^2 + r^{-2} \, \textup{d}\theta^2.[/tex]
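Incidentally, the polar-coordinate computation is easy to check symbolically. The following is only a sketch (sympy assumed), taking J to be the matrix of partial derivatives of the old Cartesian coordinates with respect to the new polar ones:

```python
# A sketch (sympy assumed): J[i][j] = d(old x^i)/d(new y^j) for
# x = r cos(theta), y = r sin(theta).
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = r * sp.cos(theta)
y = r * sp.sin(theta)

# Jacobian of the map (r, theta) -> (x, y)
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, theta)],
               [sp.diff(y, r), sp.diff(y, theta)]])

g = sp.simplify(J.T * J)
print(g)  # diag(1, r**2): the polar metric
```

With this convention [itex]J^T J[/itex] gives the metric directly; note that this J is the transpose of the matrix I called B above, so the order of the factors matters.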
 
  • #2
I don't think it can be the coordinate (component) transformation matrix, [itex]B^{-1}[/itex], or its transpose, that's meant by J in the formula above for the components of the metric tensor, [itex]g = J^T J[/itex], because, in the case of the transformation from Cartesian to plane polar coordinates,

[tex]B^{-1} = \begin{bmatrix}\cos \theta & \frac{-\sin \theta}{r}\\ \sin \theta & \frac{\cos \theta}{r}\end{bmatrix}[/tex]

and

[tex]\left ( B^{-1} \right )^T \, B^{-1} = \begin{bmatrix}\cos \theta & \sin \theta\\ \frac{-\sin \theta}{r} & \frac{\cos \theta}{r}\end{bmatrix} \begin{bmatrix}\cos \theta & \frac{-\sin \theta}{r}\\ \sin \theta & \frac{\cos \theta}{r}\end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & r^{-2} \end{bmatrix} \neq \begin{bmatrix} 1 & 0 \\ 0 & r^2 \end{bmatrix},[/tex]

while

[tex]B^{-1} \, \left ( B^{-1} \right )^T = \begin{bmatrix}\cos \theta & \frac{-\sin \theta}{r}\\ \sin \theta & \frac{\cos \theta}{r}\end{bmatrix} \begin{bmatrix}\cos \theta & \sin \theta\\ \frac{-\sin \theta}{r} & \frac{\cos \theta}{r}\end{bmatrix}[/tex]

[tex]= \frac{1}{r^2} \begin{bmatrix} r^2 \cos^2 \theta + \sin^2 \theta & \cos \theta \sin \theta \left ( r^2 - 1 \right ) \\ \cos \theta \sin \theta \left ( r^2 - 1 \right ) & r^2 \sin^2 \theta + \cos^2 \theta \end{bmatrix}.[/tex]
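For the record, the two products of [itex]B^{-1}[/itex] can be evaluated symbolically; this is only a sketch (sympy assumed):

```python
# A sketch (sympy assumed): evaluating (B^-1)^T B^-1 and B^-1 (B^-1)^T
# for the Cartesian-to-polar basis transformation matrix B.
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
B = sp.Matrix([[sp.cos(theta), sp.sin(theta)],
               [-r * sp.sin(theta), r * sp.cos(theta)]])
Binv = sp.simplify(B.inv())

print(sp.simplify(Binv.T * Binv))  # diag(1, r**-2): the inverse polar metric
print(sp.simplify(Binv * Binv.T))  # not diagonal in general
```

So [itex](B^{-1})^T B^{-1}[/itex] reproduces the inverse metric from post #1, not the metric itself.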

But if it's B, the basis transformation, that's meant, then--as far as I can see--it's the coordinates of the current (new) system that are in the denominators of the partial derivatives, and the indices are on the wrong levels for a well-formed tensorial index-notation equation.
 
  • #3
So are there two different definitions of metric tensor, depending on the type of space, with the metric tensor for Euclidean space transforming like so:

[tex]g = B^T \, B[/tex]

[tex]g_{ab} = \frac{\partial x^a}{\partial x^{m'}} \frac{\partial x^b}{\partial x^{n'}} \, \delta_{m' n'}[/tex]

(summing over indices on the same level, or rewriting it in some way so as to keep up the proper index rules) while the metric tensor for Minkowski space transforms thus:

[tex]g = (B^{-1})^T \eta B^{-1}[/tex]

[tex]g_{\alpha \beta} = \frac{\partial x^{\mu'}}{\partial x^\alpha} \frac{\partial x^{\nu'}}{\partial x^\beta} \eta_{\mu' \nu'}[/tex]

or is there some more general definition of which these are only special cases?

Does the existence of a metric imply the existence of a metric tensor? What if the transformation isn't linear?
 
  • #4
Rasalhague said:
and if we call this matrix B, then

[tex]B^T B = \begin{bmatrix} \cos \theta & -r \sin \theta \\ \sin \theta & r \cos \theta
\end{bmatrix} \begin{bmatrix} \cos \theta & \sin \theta \\ -r \sin \theta & r \cos \theta \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & r^2 \end{bmatrix} = g[/tex]

Hmm, I see I got these the wrong way round. That should be

[tex]\begin{bmatrix} \cos \theta & \sin \theta \\ -r \sin \theta & r \cos \theta \end{bmatrix} \begin{bmatrix} \cos \theta & -r \sin \theta \\ \sin \theta & r \cos \theta
\end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & r^2 \end{bmatrix}[/tex]
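The corrected order can be confirmed symbolically; a sketch (sympy assumed):

```python
# A sketch (sympy assumed): with B the basis transformation matrix above,
# B B^T gives the polar metric, while B^T B does not.
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
B = sp.Matrix([[sp.cos(theta), sp.sin(theta)],
               [-r * sp.sin(theta), r * sp.cos(theta)]])

print(sp.simplify(B * B.T))  # diag(1, r**2)
print(sp.simplify(B.T * B))  # not diag(1, r**2)
```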
 
  • #5
I haven't really examined your posts, so I'll just say that these definitions (or whatever they are) seem pretty awkward and non-standard. I'd start with the standard definitions of "tensor" and "tensor field":
Fredrik said:
Tensors are much more difficult to understand than vectors, because to really understand them you need to understand manifolds. One thing you need to know is that there's a vector space associated with each point p in a manifold M. This vector space is called the tangent space at p. It's usually written as [itex]T_pM[/itex], but I'll just write V here.

If V is a vector space over the real numbers, you can define the (algebraic) dual space V* as the set of all linear functions from V into the real numbers. When V is the tangent space at p, V* is called the cotangent space at p. The members of V are called "tangent vectors" or just "vectors", and the members of V* are called "cotangent vectors" or just "covectors".

A tensor (at the point p) of type (n,m) is a multilinear (linear in all variables) function

[tex]T:\underbrace{V^*\times\cdots\times V^*}_{\mbox{n factors}}\times\underbrace{V\times\cdots\times V}_{\mbox{m factors}}\rightarrow\mathbb R[/tex]

A tensor field can be defined as a function that assigns a tensor at p to each point p. Vector fields are a special case of this. They are tensor fields of type (1,0). Note that a tensor of type (1,0) is a member of the dual space of V*, which I'll write as V**. To see why this assigns a tangent vector to each point in M, you need to understand that V** is isomorphic to V.

Recall that members of V** take members of V* to real numbers. We want to associate a v** in V** with each v in V, and to do that we must specify how v** acts on an arbitrary u* in V*:

v**(u*)=u*(v)

The map v [itex]\mapsto[/itex] v** is an isomorphism from V into V**. So each tensor field of type (1,0) defines a tangent vector at each point in M, via this isomorphism.
Then you define the metric, as a symmetric non-degenerate tensor field of type (0,2). Symmetric means g(u,v)=g(v,u).

The components in a coordinate system x, at a point p in the manifold, are defined here. See also post #5 in that thread.
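As a minimal numerical sketch of what this definition buys you (numpy assumed; the numbers are made up): at a single point, a metric is just a symmetric bilinear map acting through its component matrix.

```python
# A sketch (numpy assumed, values hypothetical): a (0,2) tensor at a point
# acts on pairs of tangent vectors via its components, g(u, v) = g_ij u^i v^j.
import numpy as np

def g_polar(r):
    """Component matrix of the plane polar metric at radial coordinate r."""
    return np.diag([1.0, r**2])

def apply_metric(g, u, v):
    # g(u, v) = g_ij u^i v^j
    return float(u @ g @ v)

g = g_polar(2.0)
u = np.array([1.0, 0.5])
v = np.array([0.0, 1.0])
print(apply_metric(g, u, v))  # 2.0
print(apply_metric(g, v, u))  # 2.0: symmetric, g(u, v) = g(v, u)
```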

Rasalhague said:
[tex]g_{\alpha \beta} = \frac{\partial x^{\mu'}}{\partial x^\alpha} \frac{\partial x^{\nu'}}{\partial x^\beta} \eta_{\mu' \nu'}[/tex]
This is only valid if you're transforming from a coordinate system in which the components are the same as the components of the matrix [itex]\eta[/itex], to some other coordinate system. For the general transformation rule (which you should be able to find using what I said in the thread I linked to), you should replace the [itex]\eta[/itex] on the right with g.
 
  • #6
Fredrik said:
I haven't really examined your posts, so I'll just say that these definitions (or whatever they are) seem pretty awkward and non-standard.

Thanks, Fredrik. It occurred to me that there were four matrices related to a change of coordinates, at least two of which can go by the name of "Jacobian matrix", depending on the textbook. Here's my latest attempt at a notation to keep track of which is which. I think I'm still getting them mixed up somewhere, but at least having names for them may help me spot where I'm going wrong.

For example, in the Wikipedia article Metric tensor, I think the matrix they call the Jacobian matrix is the one that, when multiplied on the right of a 1xn matrix (a row) whose elements are the old basis vectors, gives a 1xn matrix consisting of the new basis vectors. So I'm writing it as B (for basis transformation matrix, as opposed to component transformation matrix), with a subscript R for "right":

[tex]g' = \left ( B_R \right )^T g \left ( B_R \right )[/tex]

This works for the example of Cartesian to plane polar coordinates that I was experimenting with. But I don't see how they get

[tex]\textup{d}x = B_R \; \textup{d}x'[/tex]

(where dx and dx' are column vectors). I get

[tex]\textup{d}x = \left ( B_R \right)^T \; \textup{d}x' = \left ( B_L \right) \; \textup{d}x' = \left ( C_L \right)^{-1} \; \textup{d}x'[/tex]

with [itex]C_L[/itex] standing for the transformation matrix for transforming a column of components, and [itex]B_L[/itex] its inverse, the matrix that transforms a column of the old basis vectors into the new basis vectors (also written as a column).

Fredrik said:
I'd start with the standard definitions of "tensor" and "tensor field" [...]

Ah, good. The basic definitions are starting to get familiar now! Just one question about the bit you quoted: is this definition not quite general, given that, for example

[tex]V^* \times V \neq V \times V^*[/tex]

or, in abstract index notation,

[tex]T^i{}_j \neq T_p{}^q = T^i{}_j \; g_{ip} \; g^{jq} \; ?[/tex]
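For what it's worth, that last index manipulation can be spot-checked numerically; a sketch (numpy assumed, with made-up components):

```python
# A sketch (numpy assumed, components made up): lowering the first index and
# raising the second, T_p^q = g_{pi} T^i_j g^{jq}, with the polar metric at r = 2.
import numpy as np

r = 2.0
g = np.diag([1.0, r**2])      # g_ij
g_inv = np.linalg.inv(g)      # g^ij

T = np.array([[1.0, 2.0],
              [3.0, 4.0]])    # components T^i_j

T_lowered_raised = np.einsum('pi,ij,jq->pq', g, T, g_inv)
print(T_lowered_raised)       # differs from T: the index positions matter
```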

Fredrik said:
The components in a coordinate system x, at a point p in the manifold, are defined here. See also post #5 in that thread.

I've been told that the [itex]\mathbb{R}^n[/itex] in the chart [itex]x : U \to \mathbb{R}^n[/itex] need only be a topological space (the Cartesian product of n copies of the reals, supplied with some topology). Is this how it's thought of in the case of a Riemannian manifold, with additional structure being thought of as residing elsewhere in the structure of the manifold, or is [itex]\mathbb{R}^n[/itex] then thought of as a vector space, inner product space or Euclidean point space? In general relativity, does [itex]\mathbb{R}^n[/itex] play the same role as in a Riemannian manifold, or would [itex]\mathbb{R}^n[/itex] in these definitions be replaced by Minkowski space (conceived of as a vector space, inner product space, affine space)? If that makes any sense...

Fredrik said:
This is only valid if you're transforming from a coordinate system in which the components are the same as the components of the matrix [itex]\eta[/itex], to some other coordinate system. For the general transformation rule (which you should be able to find using what I said in the thread I linked to), you should replace the [itex]\eta[/itex] on the right with g.

Okay.
 
  • #7
I still haven't examined what you're saying about those matrices in detail, but I see something I recognize from a task I assigned myself a few years ago to get some practice. The task was to show that if [itex]\phi:\mathbb R^4\rightarrow\mathbb R^4[/itex] is an isometry of the Minkowski metric, we must have [itex]\phi(x)=\Lambda x+a[/itex], where [itex]\Lambda[/itex] is a linear operator that satisfies [itex]\Lambda^T\eta\Lambda=\eta[/itex]. (I assume you know what [itex]\eta[/itex] is). The assumption that [itex]\phi[/itex] is an isometry means that [itex](\phi^*g)_x(u,v)=g_x(u,v)[/itex], for all x, where [itex]\phi^*[/itex] is the pullback function corresponding to [itex]\phi[/itex].

What I found was that [itex]{\phi^\mu},_\rho(x) \eta_{\mu\nu} {\phi^\nu},_\sigma(x)=\eta_{\rho\sigma} [/itex], which can be interpreted as the component form of the matrix equation [itex]J_\phi(x)^T \eta J_\phi(x) = \eta[/itex], where [itex]J_\phi(x)[/itex] is the Jacobian matrix of [itex]\phi[/itex] at x. This implies that [itex]\phi[/itex] must be a first-order polynomial, [itex]\phi(x)=\Lambda x+a[/itex], and the Jacobian matrix of this function is [itex]\Lambda[/itex] (for any x). This looks like the sort of result you're talking about. Maybe things would clear up for you if you tried to do this proof yourself.
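As a concrete numerical sketch of the condition on [itex]\Lambda[/itex] (numpy assumed; the boost speed is an arbitrary choice), a standard boost along x satisfies [itex]\Lambda^T\eta\Lambda=\eta[/itex]:

```python
# A sketch (numpy assumed): checking Lambda^T eta Lambda = eta for an x-boost,
# with the sign convention eta = diag(-1, 1, 1, 1).
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

v = 0.6                           # boost speed in units of c (arbitrary)
gamma = 1.0 / np.sqrt(1.0 - v**2)

Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = gamma
Lam[0, 1] = Lam[1, 0] = -gamma * v

print(np.allclose(Lam.T @ eta @ Lam, eta))  # True
```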

You're right that e.g. [itex]T^{ab}{}_c \neq T^a{}_b{}^c[/itex], but the vector space of tensors of type (2,1) is obviously isomorphic to the vector space of tensors of type "(1,1,1)" (or whatever I should call them). That's probably why most texts don't bother to define them properly. That, and the fact that the notation in the definition would be long and awkward if we try to cover all cases.

I think you're right that the [itex]\mathbb R^n[/itex] in [itex]x:U\rightarrow\mathbb R^n[/itex] only needs to be equipped with the standard topology (the condition that must be met is that the manifold is locally homeomorphic to [itex]\mathbb R^n[/itex]), but it can be convenient to give it a vector space structure as well. For example, one of many definitions of the tangent space at a point in the manifold defines the tangent vectors as equivalence classes of curves through that point. To give that set a vector space structure, we have to define the meaning of linear combinations of equivalence classes of curves. In particular we have to define the sum of two classes. We can do this by mapping the curves to curves in [itex]\mathbb R^n[/itex], then add those curves (which is only possible if we have assumed a vector space structure), and then map the result back to the manifold. If you're interested in this definition, see Isham's book on differential geometry for physicists.
 
  • #8
Thanks for the hints. Yes, I think I know what eta is (a matrix representing the components, in a Lorentz frame, of the metric tensor for flat spacetime), but I'm going to have to look up "pullback function" and the comma notation, which I think means some kind of a derivative - perhaps a partial derivative wrt the [itex]\rho[/itex]th, etc., coordinate curve?

I appreciate the notational difficulty of the more general tensor definition. Here's an attempt I made a while ago, using product signs for Cartesian products (I wonder if there's a more standard notation), K being the underlying field, and the curly brace denoting two options (ending with V or with V*).

[tex]T:\left( \prod_{i=1}^{m_{1}} V_{i}^{*} \right) \times \left (\prod_{i=1}^{n_{1}} V_{i} \right ) \times \ldots \times \left\{ \begin{array}{l} \left ( \prod_{i=1}^{m_{r}} V_{i}^{*} \right ) \times \left (\prod_{i=1}^{n_{s}} V_{i} \right ) \\ \left (\prod_{i=1}^{n_{s}} V_{i} \right ) \times \left ( \prod_{i=1}^{m_{r}} V_{i}^{*} \right ) \end{array} \right\} \rightarrow \mathbb{K}[/tex]
 
  • #9
If [itex]\phi:M\rightarrow N[/itex] is a [itex]C^\infty[/itex] function from a manifold M into a manifold N, then for each scalar field f on N, there's a scalar field [itex]f\circ\phi[/itex] on M. We often write [itex]\phi^*f=f\circ\phi[/itex], and call [itex]\phi^*f[/itex] the pullback of f to M, but this isn't the pullback I'm talking about.

We can also use [itex]\phi[/itex] to define a function [itex]\phi_*:T_pM\rightarrow T_{\phi(p)}N[/itex] by

[tex]\phi_*v(f)=v(\phi^*f)=v(f\circ\phi)[/tex]

This is a pushforward of v to a tangent space of N. Now we can define a pullback for cotangent vectors [itex]\phi^*:T^*_{\phi(p)}N\rightarrow T^*_pM[/itex] by

[tex]\phi^*\omega(v)=\omega(\phi_*v)[/tex]

We can generalize this to tensor fields in a pretty obvious way. For example, if [itex]\chi[/itex] is a covector field (tensor field of type (0,1)), we define [itex](\phi^*\chi)_p=\phi^*\chi_{\phi(p)}[/itex]. Since [itex]\chi_{\phi(p)}[/itex] is a covector at [itex]\phi(p)[/itex], the right-hand side is defined by the previous definition.

Now you should be able to see the obvious generalization to tensor fields of type (0,2) (like the metric). That's the pullback I'm talking about. I really recommend that you go through this over and over until you find it as familiar as adding integers. This stuff is used a lot. These are a couple of good exercises:

Edit: Exercise 1 deleted, because I asked you to prove something that isn't true. I don't have time right now to think about what I should have asked you instead, but I might edit this again later.

2. Suppose [itex]\phi:\mathbb R^n\rightarrow\mathbb R^n[/itex], and that v is a tangent vector at x. Let I be the identity map on [itex]\mathbb R^n[/itex]. (Note that this function satisfies the definition of a coordinate system). Show that the components of [itex]\phi_*v[/itex] in I are the same as the components of the matrix [itex]J_\phi (x)v'[/itex], where [itex]J_\phi(x)[/itex] is the Jacobian matrix of [itex]\phi[/itex] at x, and v' is the matrix of components of v in I.

Yes, the comma notation is just a partial derivative. For example, if [itex]f(t,x)=t+x^2[/itex], we have [itex]f_{,1}(t,x)=2x[/itex]. (Note that in relativity the numbers go from 0 to n-1, where n is the number of variables, not from 1 to n).
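Exercise 2 can also be sanity-checked numerically with finite differences; this is only a sketch (numpy assumed; [itex]\phi[/itex] and f are made up):

```python
# A sketch (numpy assumed; phi and f are hypothetical): checking that
# (phi_* v)(f) at x equals the derivative of f along J_phi(x) v at phi(x).
import numpy as np

def phi(p):                    # a made-up smooth map R^2 -> R^2
    x, y = p
    return np.array([x**2 + y, np.sin(x) * y])

def f(q):                      # a made-up scalar field on the codomain
    u, w = q
    return u * w + u**2

def jacobian(fun, p, h=1e-6):  # forward-difference Jacobian matrix
    p = np.asarray(p, dtype=float)
    f0 = np.atleast_1d(fun(p))
    J = np.zeros((len(f0), len(p)))
    for j in range(len(p)):
        dp = np.zeros_like(p)
        dp[j] = h
        J[:, j] = (np.atleast_1d(fun(p + dp)) - f0) / h
    return J

x = np.array([0.5, 1.0])
v = np.array([1.0, -2.0])      # components of a tangent vector at x, identity chart

# v(f o phi): derivative of f o phi along v, at x
lhs = jacobian(lambda p: f(phi(p)), x)[0] @ v
# (J_phi(x) v)(f): derivative of f along J_phi(x) v, at phi(x)
rhs = jacobian(f, phi(x))[0] @ (jacobian(phi, x) @ v)

print(np.isclose(lhs, rhs, atol=1e-4))  # True
```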
 
  • #10
Fredrik said:
Suppose [itex]\phi:\mathbb R^n\rightarrow\mathbb R^n[/itex], and that v is a tangent vector at x. Let I be the identity map on [itex]\mathbb R^n[/itex]. (Note that this function satisfies the definition of a coordinate system). Show that the components of [itex]\phi_*v[/itex] in I are the same as the components of the matrix [itex]J_\phi (x)v'[/itex], where [itex]J_\phi(x)[/itex] is the Jacobian matrix of [itex]\phi[/itex] at x, and v' is the matrix of components of v in I.

Let M stand for the domain of [itex]\phi[/itex], and N the codomain. I'll call the (in this case global) coordinates of M with the identity map [itex]m^i[/itex], and the (in this case global) coordinates of N with the identity map [itex]n^i[/itex]. The components of v, in the tangent space of some point in M, with the identity map, will be some linear combination of basis tangent vectors, which have been defined as partial derivative operators, thus:

[tex]v = v^i \frac{\partial }{\partial m^i}.[/tex]

In this case, the Jacobian for components is the matrix of partial derivatives:

[tex]\left [ \frac{\partial n^j}{\partial m^i} \right ]_{ji}.[/tex]

So

[tex]J v' = J v = \frac{\partial n^j}{\partial m^i} v^i \frac{\partial}{\partial n^j}.[/tex]

And

[tex]\phi_* v\left ( f \right ) = v\left ( f \circ \phi \right ),[/tex]

which, in component form, becomes

[tex]v^i \frac{\partial }{\partial m^i} \left ( f \circ \phi \right ) = v^i \frac{\partial f}{\partial n^j} \frac{\partial n^j}{\partial m^i} = Jv'\left ( f \right )[/tex]

by the chain rule.
 
  • #11
Here's how I do it:

[tex](\phi_*v)^\mu=(\phi_*v)(n^\mu)=v(n^\mu\circ\phi)=v(\phi^\mu)=v^\nu\frac{\partial}{\partial m^\nu}\bigg|_x(\phi^\mu)[/tex]

[tex]=v^\nu(\phi^\mu\circ m^{-1})_{,\nu}(m(x))=v^\nu\phi^\mu{}_{,\nu}(x)=(J_\phi(x)v')^\mu[/tex]

Note that n=m, so the Jacobian you're considering is just the identity matrix.
 

1. What is the metric tensor and why is it important in science?

The metric tensor is a mathematical tool used in differential geometry to describe the geometric properties of a space. It is important in science because it allows us to measure distances and angles in a space, which is crucial for understanding the physical laws that govern our universe.

2. How does the metric tensor relate to the concept of curvature?

The metric tensor is used to calculate the curvature of a space. Its components encode how distances and angles are measured in a given coordinate system, and the curvature of the space is computed from the metric and its derivatives.

3. Can the metric tensor be used in any number of dimensions?

Yes, the metric tensor can be used in any number of dimensions. In fact, it is a fundamental concept in multi-dimensional geometry and is essential for understanding higher-dimensional spaces.

4. How is the metric tensor used in Einstein's theory of general relativity?

In Einstein's theory of general relativity, the metric tensor is used to describe the curvature of spacetime caused by massive objects. The presence of mass and energy warps the fabric of spacetime, and the metric tensor is used to calculate the curvature and predict the motion of objects in this curved space.

5. Are there any practical applications of the metric tensor?

Yes, the metric tensor has many practical applications in science and engineering. It is used in fields such as physics, astronomy, and computer graphics to model and understand the behavior of physical systems. It also enters the design of technologies such as GPS, which must account for the relativistic effects described by the spacetime metric.
