That's fair. I'm just trying to visualize what expanding a vector into "more dimensions" really 'looks' like. A tangent vector being represented as something like an arrow from one point on a surface outward onto a tangent plane (or higher-dimensional equivalent) makes sense. A covector being an...
First off, thanks for fixing my formatting. I knew about the curly brackets; I just missed them when I was typing. I keep getting tripped up on the inline formatting with #'s and $'s, though. As for the question itself, I think I get it. If a (0,2) tensor is a covector of covectors, and you sum...
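As a sanity check on that picture, here's a tiny numpy sketch I put together (the tensor components and the vectors are made-up sample values): feeding a (0,2) tensor one vector leaves behind a covector, which then eats a second vector to produce a scalar.

[CODE=python]
import numpy as np

# A (0,2) tensor on a 2D space: a 2x2 array of components T_{ij} (made-up values).
T = np.array([[1.0, 2.0],
              [3.0, 4.0]])
u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

# Feed T one vector: the result has components T_{ij} u^i -- a covector.
T_u = np.einsum('ij,i->j', T, u)

# Feed that covector the second vector to get the scalar T(u, v) = T_{ij} u^i v^j.
print(T_u @ v)                         # 2.0
print(np.einsum('ij,i,j->', T, u, v))  # same scalar in one contraction: 2.0
[/CODE]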
I've been reviewing some introductory tensor stuff, and I've come to the realization that some of the things tensors do confuse me. For example, the notes I'm reading say that the invariant interval is both ##S=\eta_{\mu\nu} x^\mu x^\nu## and ##S=x^T \eta x##. Both of which are totally fine on...
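For what it's worth, a quick numpy check (with a made-up sample four-vector, using the (-,+,+,+) convention) shows the two notations are literally the same double sum:

[CODE=python]
import numpy as np

# Minkowski metric, (-,+,+,+) signature (sign convention varies by text).
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# An arbitrary sample four-vector x^mu.
x = np.array([2.0, 1.0, 0.5, -1.0])

# Index notation: S = eta_{mu nu} x^mu x^nu, summing over both indices.
S_index = np.einsum('mn,m,n->', eta, x, x)

# Matrix notation: S = x^T eta x, the same sum packaged as matrix algebra.
S_matrix = x @ eta @ x

print(S_index, S_matrix)  # both -1.75
[/CODE]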
Alright. In that case, if local flatness just means that a metric's Christoffel symbols are 0 to first order (on a small enough scale), wouldn't that mean they'd be just as Minkowskian as any other totally flat metric? For example, ##\delta_{\mu\nu}## has 0 for all of its derivatives, but is...
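(Writing both out for comparison, in the (-,+,+,+) convention: $$\eta_{\mu\nu}=\mathrm{diag}(-1,1,1,1),\qquad \delta_{\mu\nu}=\mathrm{diag}(1,1,1,1).$$ Both are constant, so every derivative vanishes, but as I understand it no real change of coordinates can alter the signature, so a flat Euclidean metric still isn't Minkowskian.)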
Fair enough. In that case, though, what exactly *does* locally flat/Minkowskian mean? It seems to me like the Minkowski metric can be anything (that satisfies the requirements of a metric, like being symmetric and such) at a single point (i.e. rather than indices being dependent on coordinates like...
Thank you, but I think I found them myself. When you get a chance (seriously, no rush), could you check whether this is a copy of the right one: https://arxiv.org/pdf/gr-qc/9712019
Chapter 2 is the only one that discusses local flatness by name (thanks, Ctrl+F!), and it shows the proof that expands the metric...
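(If I'm reading it right, the expansion that section builds up to, in Riemann normal coordinates and with conventions that vary between texts, is $$g_{\mu\nu}(x)=\eta_{\mu\nu}-\tfrac{1}{3}R_{\mu\alpha\nu\beta}\big|_p\,x^\alpha x^\beta+O(x^3),$$ i.e. exactly Minkowski at the point ##p##, vanishing first derivatives, and curvature only showing up at second order.)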
I'm having trouble understanding the local flatness of GR. So far, my interpretation was that it meant the metric tensor in an infinitesimally small region of spacetime will be equal to some multiple of the Minkowski metric, since that's the metric that preserves the speed of light/spacetime...
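(For reference, the statement I keep running into, if I have it right: at any point ##p## one can choose coordinates so that $$g_{\mu\nu}(p)=\eta_{\mu\nu},\qquad \partial_\alpha g_{\mu\nu}(p)=0,$$ i.e. exactly the Minkowski components at ##p## rather than some multiple of them, with the second derivatives generally not removable.)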
I see. It makes more sense when I look at it like this: the columns of a matrix transformation represent where the standard basis vectors "go". If a different matrix were to transform the same vectors to the same place, each of its columns would be identical to those of the first matrix, and so the transformation...
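A quick numpy illustration of the columns-are-images idea (the matrix is just a made-up example):

[CODE=python]
import numpy as np

# Any matrix works; this 2x2 is just an example.
A = np.array([[1.0, 4.0],
              [1.0, 1.0]])

e0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])

# Each standard basis vector lands exactly on the corresponding column of A.
print(A @ e0, A[:, 0])  # [1. 1.] [1. 1.]
print(A @ e1, A[:, 1])  # [4. 1.] [4. 1.]
[/CODE]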
Alright, I think I'm starting to get how this works (also, thanks to pasmith, who explained it like this). In short: yes, if you're given a complete set of eigenvalues and their corresponding (linearly independent) eigenvectors, you can uniquely determine the matrix that has them.
The intuitive reason is that when you express...
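Here's a small numpy sketch of that reconstruction with made-up eigenpairs (the caveat being that the eigenvectors have to form a basis, so that ##P## is invertible):

[CODE=python]
import numpy as np

# Made-up eigenpairs: eigenvalue 2 with eigenvector (1, 1),
# eigenvalue 5 with eigenvector (1, -2).
P = np.array([[1.0,  1.0],
              [1.0, -2.0]])  # eigenvectors as columns; they span R^2, so P is invertible
D = np.diag([2.0, 5.0])      # eigenvalues in the matching order

# With a full set of independent eigenvectors, A = P D P^{-1} is the unique
# matrix having exactly those eigenpairs.
A = P @ D @ np.linalg.inv(P)

# Sanity check: A sends each eigenvector to eigenvalue * eigenvector.
print(A @ P[:, 0], 2 * P[:, 0])  # [2. 2.] [2. 2.]
print(A @ P[:, 1], 5 * P[:, 1])  # [ 5. -10.] [ 5. -10.]
[/CODE]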
I went back and checked, and they definitely do share eigenvalues. I double-checked the math (and also did the reverse calculation with an eigenvector calculator, just in case my algebra was wrong), and the LHS *does* equal the RHS; the diagonal matrix has the same eigenvalues as the one on...
This is very helpful. One thought I had about the diagonal matrix is this: since the eigenvectors in the matrix ##P## are aligned with the order of the eigenvalues in the diagonal matrix ##D##, changing the order of the eigenvalues in ##D## would also require rearranging the columns of ##P##, which in...
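That's exactly what a quick numpy check shows: permuting ##D## and the columns of ##P## together rebuilds the same matrix (numbers reused from the example earlier in the thread):

[CODE=python]
import numpy as np

P = np.array([[2.0, -2.0],
              [1.0,  1.0]])
D = np.diag([3.0, -1.0])

# Swap the eigenvalue order in D *and* the column order in P together...
P2 = P[:, [1, 0]]
D2 = np.diag([-1.0, 3.0])

# ...and both orderings reconstruct the same matrix.
A1 = P @ D @ np.linalg.inv(P)
A2 = P2 @ D2 @ np.linalg.inv(P2)
print(np.allclose(A1, A2))  # True
[/CODE]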
Just to check: if a matrix has an inverse, that means its transformation is necessarily one-to-one, right? Meaning, if we transform some vector from the standard basis -> eigenvector basis -> standard basis, each transformation can only result in one possible vector.
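A round-trip check in numpy (with an arbitrary test vector; ##P## is the eigenvector matrix from this example):

[CODE=python]
import numpy as np

P = np.array([[2.0, -2.0],
              [1.0,  1.0]])  # change-of-basis matrix; invertible since det = 4
P_inv = np.linalg.inv(P)

v = np.array([5.0, -3.0])    # arbitrary vector in the standard basis

# standard basis -> eigenvector basis -> standard basis recovers v exactly,
# because an invertible map sends distinct inputs to distinct outputs.
v_back = P @ (P_inv @ v)
print(np.allclose(v_back, v))  # True
[/CODE]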
I think I may be miscommunicating a little. For this specific example, what I'm trying to check for is whether $$\begin{bmatrix}1&4\\1&1\end{bmatrix}=\begin{bmatrix}2&-2\\1&1\end{bmatrix}\begin{bmatrix}3&0\\0&-1\end{bmatrix}\frac{1}{4}\begin{bmatrix}1&2\\-1&2\end{bmatrix}$$ is sufficient to show...
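Multiplying out the right-hand side numerically does reproduce the left-hand side, e.g. with numpy:

[CODE=python]
import numpy as np

A = np.array([[1.0, 4.0],
              [1.0, 1.0]])
P = np.array([[2.0, -2.0],
              [1.0,  1.0]])
D = np.diag([3.0, -1.0])
P_inv = 0.25 * np.array([[ 1.0, 2.0],
                         [-1.0, 2.0]])

# The stated inverse really is P^{-1}, and P D P^{-1} reproduces A.
print(np.allclose(P_inv, np.linalg.inv(P)))  # True
print(np.allclose(P @ D @ P_inv, A))         # True
[/CODE]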
What I meant by 'general' is that we use scalar parameters like ##a## and ##b## in place of concrete entries in the final matrix, eigenvalues, etc. I would imagine that's just as valid as doing diagonalization with specific numbers for the eigenvectors/values, since you can set the parameters to any such number.
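sympy can carry out that parametric version directly, e.g. a minimal sketch with a symmetric matrix in the symbols ##a## and ##b## (my choice of example):

[CODE=python]
import sympy as sp

a, b = sp.symbols('a b', real=True)

# A matrix with symbolic parameters in place of concrete entries.
M = sp.Matrix([[a, b],
               [b, a]])

# diagonalize() returns P and D with M = P * D * P**-1, valid for any a, b.
P, D = M.diagonalize()
print(D)  # eigenvalues a - b and a + b on the diagonal
print(sp.simplify(P * D * P.inv() - M))  # zero matrix, confirming the identity
[/CODE]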
Are you sure? I'm pretty sure that ##\begin{bmatrix}1&4\\1&1\end{bmatrix}## has the eigenvectors ##\begin{bmatrix}2\\1\end{bmatrix}## and ##\begin{bmatrix}-2\\1\end{bmatrix}##, which would be the ##c=2## case, even though those two vectors aren't orthogonal to one another.
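numpy agrees, for what it's worth; the computed eigenvectors come out normalized but proportional to ##(2,1)## and ##(-2,1)##, and their dot product is nonzero:

[CODE=python]
import numpy as np

A = np.array([[1.0, 4.0],
              [1.0, 1.0]])

vals, vecs = np.linalg.eig(A)
print(vals)  # 3 and -1 (order may vary)
print(vecs)  # columns proportional to (2, 1) and (-2, 1)

# The eigenvectors are linearly independent but not orthogonal:
print(vecs[:, 0] @ vecs[:, 1])  # -0.6, not zero
[/CODE]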