Linear Algebra: Can't make sense of it

  • #1
Hi,

I'm currently studying some basic linear algebra as part of my A-Level Math curriculum, and I feel like many of the topics are rushed through. A lot of our syllabus objectives read something like "Use this result to do X or Y", which is confusing because it leaves me with no idea of what's actually going on.

For instance, when studying complex numbers, we are told that "it can be shown that":
Code:
cosθ + i sinθ = e^(iθ)
so,
r[cosθ + i sinθ] = r·e^(iθ)


And there's no explanation at all of how the initial result is even obtained! I want to know why it's true. It's annoying to have to take the result as given and just do stuff with it. It's because of things like this that I have a hard time understanding vectors. :(

Likewise with partial fractions. We're told to just express different kinds of functions in a given form, without being told how the result is obtained (say, A/z + (Bx + C)/y or something).

I'd rather spend more time learning a few proofs or reading more into the specific parts that confuse me. I've found that doing this with other topics helped me greatly in building some intuition for the subject.
If needed, I can link the syllabus.

With that in mind, which linear algebra textbook would you recommend?
 
  • #2

In terms of your exponential, use the Taylor series expansions for cos(x), sin(x) and exp(x).
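For example, here's a quick numerical sanity check (a minimal Python sketch of my own, not anything from your syllabus) comparing a partial sum of the exponential series at iθ with cosθ + i sinθ:

Code:
import math

def exp_series(z, terms=30):
    # Partial sum of the Taylor series for e^z about 0: sum of z^n / n!
    total, term = 0, 1
    for n in range(terms):
        total += term
        term *= z / (n + 1)  # turn z^n/n! into z^(n+1)/(n+1)!
    return total

theta = 0.7
lhs = exp_series(1j * theta)                  # series for e^(i*theta)
rhs = math.cos(theta) + 1j * math.sin(theta)  # cos(theta) + i*sin(theta)
print(abs(lhs - rhs))                         # ~1e-16: the two sides agree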
 
  • #3
In terms of your exponential, use the Taylor series expansions for cos(x), sin(x) and exp(x).

Okay, thanks. I'll go learn about that.

For the rest, can I just pick up any book on linear algebra then?
 
  • #4
the equation cos(t) + i sin(t) = e^(it) is true because both sides have the same derivatives and the same initial values.

This is visible from the Taylor series, or from noticing that they both solve the ODE

y'' + y = 0, and y(0) = 1, y'(0) = i.

In general, in order to prove something about a function, you need to know the definition of that function. What is your definition of cos, sin and e^t?
 
  • #5
Mepris, I recommend either of these books: Linear Algebra by Friedberg, Insel and Spence, or Linear Algebra by Hoffman and Kunze. Both good reads. The latter is slightly advanced but it seems like that's what you're interested in.

As for partial fractions ... the coolest proof I've seen is one using the idea of a "basis" from linear algebra. Check out http://www.math.uwaterloo.ca/~dgwagner/MATH249/ParFrax.pdf from my combinatorics prof's website. It's a proof of the existence of the partial fractions expansion. The idea is that the constants "on top" come from the fact that every rational function can be expressed as a linear combination of elements which form a basis for a certain vector space. I suppose it's slightly non-elementary.
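If you just want to see a decomposition pop out without grinding through the algebra by hand, here's a minimal sketch using sympy's apart (my own illustration, not from the notes above):

Code:
from sympy import symbols, apart

x = symbols('x')
# A rational function with a linear factor and an irreducible quadratic factor
f = (3*x + 5) / ((x - 1) * (x**2 + 1))
print(apart(f, x))
# -> 4/(x - 1) - (4*x + 1)/(x**2 + 1), i.e. the A/(x-a) + (Bx+C)/(x^2+1) form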
 
  • #6
the equation cos(t) + i sin(t) = e^(it) is true because both sides have the same derivatives and the same initial values.

This is visible from the Taylor series, or from noticing that they both solve the ODE

y'' + y = 0, and y(0) = 1, y'(0) = i.

In general, in order to prove something about a function, you need to know the definition of that function. What is your definition of cos, sin and e^t?

I have no clue what you just did up there. :|

I know how to compute y'(x) (and with that, y'(0)) if I have y(x) but I don't know what you just did. I've only been exposed to ODEs when dealing with rates of change and such.

To be honest, when I was first taught basic trig, back in years 9-12, I did think of this but then I shrugged it off and/or my teacher told me I shouldn't be worrying about it or something. Anyway, due to that and my laziness, I never bothered.

Anyway, I just looked them up on wikipedia and the definitions returned make a lot of sense. I had an intuition of what they were but I don't think I'd have been able to formulate a coherent definition unless I'd thought very hard. Maybe that's where I'm going wrong - I'm not making enough effort? Ah, I don't know!

Wikipedia said:
Trigonometric functions are commonly defined as ratios of two sides of a right triangle containing the angle.

The exponential function is the function e^x, where e is the number (approximately 2.718281828) such that the function e^x is its own derivative. The exponential function is used to model a relationship in which a constant change in the independent variable gives the same proportional change (i.e. percentage increase or decrease) in the dependent variable.

And this just shows how large the gaps in my knowledge are. At this rate, I don't think I care about my exam anymore. I'd rather just learn everything all over again the right way.
 
  • #7
I always recommend Schaum's, because of the many solved problems; more to complement your study than as a stand-alone text.
 
  • #8
strictly speaking, the equation:

[tex]e^{i\theta} = \cos(\theta) + i\sin(\theta)[/tex]

isn't something to do with linear algebra, but with complex numbers. and the easiest justification of this (known as Euler's formula) is by using complex Taylor series.

note that if you take the derivative of both sides (with respect to θ), they are still equal (as long as you buy into the first equation), which isn't a proof per se, but it does make it seem plausible. if we call the LHS f, and the RHS g, then:

f"(θ) + f(θ) = g"(θ) + g(θ) = 0.

"trig" functions are the only (continuous and periodic) ones for which g"(θ) + g(θ) = 0 (the proof of this is too advanced to go into, here), so it "makes sense" that e should be a "trig function".

if all this talk about differential equations and taylor series seems a bit abstract to you, then just accept:

[tex]e^{i\theta} = \cos(\theta) + i\sin(\theta)[/tex] as a DEFINITION, that is, a convenient way to write points in the complex plane, that lie on the unit circle.
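and you can at least check numerically that these points really do sit on the unit circle (a throwaway Python sketch, my own addition):

Code:
import cmath

for theta in (0.0, 1.0, 2.5, 3.14159):
    print(abs(cmath.exp(1j * theta)))  # always 1.0: e^(i*theta) lies on the unit circle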
 
  • #9
I have no clue what you just did up there. :|
He's using an existence and uniqueness theorem for differential equations. The same theorem that's used in classical mechanics to ensure that Newton's second law F(x'(t),x(t),t)=mx''(t) has a unique solution x for each pair of equations [itex]x(t_0)=x_0[/itex] and [itex] x'(t_0)=v_0[/itex]. (If you know the position and velocity at one time, you can find the position at all times). If you don't want to just take our word for it that such a theorem exists, then I would recommend that you focus on the explanation using Taylor series instead. Here it is at Wikipedia: Link.

To be more specific about what he did, he observed that if you define f to be the function such that [itex]f(\theta)=\cos\theta+i\sin\theta[/itex] for all [itex]\theta[/itex], then f satisfies f''=-f, f(0)=1 and f'(0)=i. If you define g to be the function such that [itex]g(\theta)=e^{i\theta}[/itex] for all [itex]\theta[/itex], then g satisfies...exactly the same equation (g''=-g) and exactly the same initial condition (g(0)=1 and g'(0)=i). The theorem says that there is exactly one such function, so f=g.
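If you want to see the uniqueness at work numerically, here is a rough Python sketch (my own illustration; a crude fixed-step integration, so a plausibility check rather than a proof). It integrates y''=-y from y(0)=1, y'(0)=i and compares the result with [itex]e^{i\theta}[/itex]:

Code:
import cmath

# Integrate y'' = -y with y(0) = 1, y'(0) = i using explicit Euler steps
y, v = 1 + 0j, 1j          # y(0) = 1, y'(0) = i
h, steps = 1e-5, 100_000   # integrate out to theta = 1
for _ in range(steps):
    y, v = y + h * v, v + h * (-y)
theta = h * steps
print(abs(y - cmath.exp(1j * theta)))  # ~1e-5: agrees up to the integration error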
 
  • #10
moreover, since the coefficients of the Taylor series are the values of the derivatives at 0, this is also why they have the same Taylor series.

i.e. to prove these functions have the same Taylor series, note that they have the same value at 0, so they have the same constant term. then their derivatives also have the same value at 0, so they have the same linear term.

then the differential equation y'' = -y says they also have the same second derivative at 0, ...etc...
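here's a little sympy check of the first few derivatives at 0, in that spirit (my own sketch):

Code:
from sympy import symbols, I, exp, cos, sin, diff

t = symbols('t')
f = cos(t) + I*sin(t)
g = exp(I*t)
for n in range(6):
    # the nth derivative at 0 gives the nth taylor coefficient (times n!)
    print(n, diff(f, t, n).subs(t, 0), diff(g, t, n).subs(t, 0))
# both columns read 1, I, -1, -I, 1, I, ...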
 
  • #11
Mepris, I recommend either of these books: Linear Algebra by Friedberg, Insel and Spence, or Linear Algebra by Hoffman and Kunze. Both good reads. The latter is slightly advanced but it seems like that's what you're interested in.

I second those books. They're extremely good! Hoffman & Kunze is pretty advanced, but it's worth it. I would recommend Friedberg for now.

Most linear algebra books are really bad, so be careful what you buy. The two books above are fine books.
 
  • #12
strictly speaking, the equation:

[tex]e^{i\theta} = \cos(\theta) + i\sin(\theta)[/tex]

isn't something to do with linear algebra, but with complex numbers. and the easiest justification of this (known as Euler's formula) is by using complex Taylor series.

note that if you take the derivative of both sides (with respect to θ), they are still equal (as long as you buy into the first equation), which isn't a proof per se, but it does make it seem plausible.

I just did that. What I computed was:

i·e^(iθ) = -sinθ + i cosθ

After some simplification, I got this:

e^(iθ) = cosθ - i sinθ, which, correct me if I'm wrong, does not equate to the initial formula.

Maybe I did something wrong when differentiating or simplifying.

if we call the LHS f, and the RHS g, then:

f"(θ) + f(θ) = g"(θ) + g(θ) = 0.

"trig" functions are the only (continuous and periodic) ones for which g"(θ) + g(θ) = 0 (the proof of this is too advanced to go into, here), so it "makes sense" that e should be a "trig function".

if all this talk about differential equations and taylor series seems a bit abstract to you, then just accept:

[tex]e^{i\theta} = \cos(\theta) + i\sin(\theta)[/tex] as a DEFINITION, that is, a convenient way to write points in the complex plane, that lie on the unit circle.

If I've got this right, f"(x) is the second derivative of f(x), correct? I tried "f"(θ) + f(θ) = g"(θ) + g(θ)" and it did get me to zero. So I'm guessing f"(x) is indeed the 2nd derivative. :rofl:
 
  • #13
I just did that. What I computed was:

i·e^(iθ) = -sinθ + i cosθ

After some simplification, I got this:

e^(iθ) = cosθ - i sinθ, which, correct me if I'm wrong, does not equate to the initial formula.

Maybe I did something wrong when differentiating or simplifying.
Multiply both sides of the first equation by -i. This gives you [itex]e^{i\theta}[/itex] on the left, and [itex]-i(-\sin\theta)+(-i)(i\cos\theta)=\cos\theta+i\sin\theta[/itex] on the right.
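A quick numeric check of that algebra (a throwaway Python sketch):

Code:
import cmath, math

t = 0.7
lhs = -1j * (1j * cmath.exp(1j * t))           # -i times d/dθ of e^(iθ)
rhs = -1j * (-math.sin(t) + 1j * math.cos(t))  # -i times (-sinθ + i·cosθ)
print(lhs, rhs)  # both come out as cosθ + i·sinθ, i.e. e^(iθ)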

f"(x) is the second derivative of f(x), correct?
Yes.
 
  • #14
He's using an existence and uniqueness theorem for differential equations. The same theorem that's used in classical mechanics to ensure that Newton's second law F(x'(t),x(t),t)=mx''(t) has a unique solution x for each pair of equations [itex]x(t_0)=x_0[/itex] and [itex] x'(t_0)=v_0[/itex]. (If you know the position and velocity at one time, you can find the position at all times). If you don't want to just take our word for it that such a theorem exists, then I would recommend that you focus on the explanation using Taylor series instead. Here it is at Wikipedia: Link.

To be more specific about what he did, he observed that if you define f to be the function such that [itex]f(\theta)=\cos\theta+i\sin\theta[/itex] for all [itex]\theta[/itex], then f satisfies f''=-f, f(0)=1 and f'(0)=i. If you define g to be the function such that [itex]g(\theta)=e^{i\theta}[/itex] for all [itex]\theta[/itex], then g satisfies...exactly the same equation (g''=-g) and exactly the same initial condition (g(0)=1 and g'(0)=i). The theorem says that there is exactly one such function, so f=g.

moreover, since the coefficients of the Taylor series are the values of the derivatives at 0, this is also why they have the same Taylor series.

i.e. to prove these functions have the same Taylor series, note that they have the same value at 0, so they have the same constant term. then their derivatives also have the same value at 0, so they have the same linear term.

then the differential equation y'' = -y says they also have the same second derivative at 0, ...etc...

I think I got it wrong the first time round...
Ok, so I did the derivatives of f(θ) and g(θ) separately this time and yeah, now I get it!

I'll read up on Taylor series and Euler's formula and try to make some more sense of things soon. I really have to shower now - I've been at this ever since I got up, and I can almost tell which part of my body that stench is coming from! I also heard my stomach making a funny noise - I think I'll have breakfast first.

Thanks a lot guys.
 
  • #15
Yet another way of deriving the identity:

e^z, the complex exponential map, assigns to a number z = x + iy the point in the plane with polar coordinates (e^x, y), where x is the real part of z. So e^(iθ) gets the polar coordinates (e^0, θ) = (1, θ): the point on the circle of radius 1 at angle θ, which is exactly cosθ + i sinθ.


* Given a choice of branch, etc., but forget that for now. The complex log does the opposite, assigning to a number (one of) its polar-coordinate representations.
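A quick numerical check of that polar picture (my own Python sketch):

Code:
import cmath, math

z = 0.5 + 1.2j                    # z = x + iy
w = cmath.exp(z)
print(abs(w), math.exp(z.real))   # modulus of e^z equals e^x
print(cmath.phase(w), z.imag)     # argument of e^z equals y (mod 2*pi)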
 
