You can split matrices into blocks (the blocks must be appropriately sized so that the multiplication is defined) and multiply them. This is quite helpful in some proofs and eases notational issues.
See http://en.wikipedia.org/wiki/Block_matrix#Block_matrix_multiplication.
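As a quick sanity check (my own toy example, not from the linked article), here's a NumPy sketch showing that multiplying 2x2 blocks as if they were scalar entries reproduces the ordinary matrix product:

```python
import numpy as np

# Partition two 4x4 matrices into 2x2 blocks and check that blockwise
# multiplication agrees with the ordinary product A @ B.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
B11, B12 = B[:2, :2], B[:2, 2:]
B21, B22 = B[2:, :2], B[2:, 2:]

# Multiply blocks exactly as if they were scalar entries.
C = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])

assert np.allclose(C, A @ B)
```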
And yes, look...
If you compute what you think are an eigenvector and eigenvalue pair, stick them back in! They had better satisfy Ax = \lambda x since that is, after all, the equation whose solutions you were looking for in the first place.
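To illustrate the "stick them back in" check with a toy matrix of my own choosing (not from the original problem):

```python
import numpy as np

# A diagonal matrix with obvious eigenpairs: eigenvalues 2 and 3.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
lam, x = 3.0, np.array([0.0, 1.0])

# "Stick them back in": a genuine eigenpair must satisfy A x = lambda x.
assert np.allclose(A @ x, lam * x)

# A wrong guess fails the same check.
y = np.array([1.0, 1.0])
assert not np.allclose(A @ y, 3.0 * y)
```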
The variable x_2 should equal t, if that's what you're asking.
It's hard to know what exactly to recommend when I don't know your background.
There is a subforum over in Academic Guidance called Science Book Discussion where they will recommend books. Maybe you should post over there. Or I'm sure some results will pop up if you...
Hmmm, perhaps HallsofIvy has something else in mind that is simpler than my plan of attack. I'm not sure how to solve this problem by just looking at the spectral radius though, so I can't help with that.
I'll give you a quick rundown on what I was getting at, and I'll provide a few...
Thank you, but I can't take credit for it. I got that out of a GRE study guide. It was the same guide that started with things like basic analytic geometry, trig identities, and logarithms...and ended with stuff like Lebesgue measure, point-set topology, and group theory. Evidently I remember...
It's not meant to be an ironclad argument. Of course you can poke holes in it. But you're missing the point. If a Calc 1 student points at the definition and asks, "Why?," you could show them this. If they ask a lot of questions, then it's a good segue into discussing their future enrollment in...
If all we are after is some intuition (nothing rigorous) behind the definition
e = \lim_{x\to \infty} \left( 1 + \frac{1}{x}\right)^x,
here's a nice little argument...
Our goal and motivation: find a base a such that d/dx (a^x) = a^x.
From the definition, we have
a^x =...
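Purely as a numerical illustration of the two facts this argument connects (the limit defining e, and e^x being its own derivative), here's a quick check; the step sizes are my own arbitrary choices:

```python
import math

# Illustrate e = lim_{x -> infinity} (1 + 1/x)^x numerically.
approx = (1 + 1 / 1_000_000) ** 1_000_000
assert abs(approx - math.e) < 1e-5

# Central-difference derivative of e^x at x = 1 should be e^1 = e,
# consistent with d/dx (e^x) = e^x.
h = 1e-6
deriv = (math.exp(1 + h) - math.exp(1 - h)) / (2 * h)
assert abs(deriv - math.e) < 1e-6
```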
Suppose X \subseteq A - \left\{a,c\right\} and X \cup \left\{a\right\} \in F.
Assume b \in X. Then ( X \cup \left\{a\right\}) - \left\{b\right\} \subseteq A - \left\{b,c\right\}. Also, ( X \cup \left\{a\right\}) - \left\{b\right\} \cup \left\{ b\right\} = X \cup \left\{a\right\} \in F...
That term is not, in fact, equal to \epsilon/2; it's merely less than it.
Try verifying the inequality
|y_0| \cdot \frac{ \epsilon}{2(|y_0|+1)} < \frac{\epsilon}{2}.
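The verification is just a matter of noticing that \epsilon/2 is being multiplied by a factor strictly less than 1:

|y_0| \cdot \frac{\epsilon}{2(|y_0|+1)} = \frac{|y_0|}{|y_0|+1} \cdot \frac{\epsilon}{2} < \frac{\epsilon}{2},

since 0 \le |y_0| < |y_0| + 1 gives \frac{|y_0|}{|y_0|+1} < 1.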
Spivak's book was my first introduction to rigorous calculus too. In these proofs I remember trying to equate everything...
Another way to look at this: Are you familiar with the fact that every complex square matrix is unitarily equivalent to an upper triangular matrix and the properties that are invariant under such an equivalence?
For (a), perhaps you should look into the square root of a matrix and under what conditions it is unique.
For (b), you can solve this by partitioning A into appropriately sized blocks and carrying out block multiplication. And remember that A is symmetric! You'll need that fact to finish the...
There are a variety of ways to show that a matrix is invertible: show its determinant is nonzero, show that it has full rank, show that the equation Ax = 0 has only the trivial solution, and so on. However, we're going to construct a matrix that will send I-BA to the identity, and...
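One standard candidate for such a matrix (I'm supplying it here as an assumption; the truncated post may construct something else) is C = I + B(I - AB)^{-1}A, which works whenever I - AB is invertible. A quick numerical sanity check, with matrix sizes chosen arbitrarily for illustration:

```python
import numpy as np

# Check the identity (I - BA)^(-1) = I + B (I - AB)^(-1) A numerically,
# assuming I - AB is invertible. Note A is 3x5 and B is 5x3, so
# I - AB is 3x3 while I - BA is 5x5.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 3))

I3 = np.eye(3)
I5 = np.eye(5)

# Candidate inverse of I - BA.
C = I5 + B @ np.linalg.inv(I3 - A @ B) @ A

assert np.allclose(C @ (I5 - B @ A), I5)
assert np.allclose((I5 - B @ A) @ C, I5)
```

The algebra behind the check: (I + B(I-AB)^{-1}A)(I - BA) = I - BA + B(I-AB)^{-1}(I - AB)A = I - BA + BA = I.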
For future reference, start a new thread when you have a new question. You'll get a lot more traffic and lots more help. :smile:
EDIT: I wrote a big long post, but then forgot to ask the most important question first! Can you show me what you've tried so far?