
Using Lie Groups to Solve & Understand First Order ODE's

  1. Jun 14, 2013 #1
    Hey guys, I'm really interested in finding out how to deal with differential equations from the point of view of Lie theory, just sticking to first order, first degree equations to get the hang of what you're doing.

    What do I know as regards Lie groups?

    Solving separable equations somehow exploits the fact that the constant of integration [tex] C \ = \ y \ - \ \int f(x) dx[/tex] is a one-parameter group mapping solutions into solutions, & further the method of change of variables is (apparently?) nothing more than a method of finding a coordinate system in which a one-parameter group of translations/rotations/...??? is admitted so that separation of variables is possible (not sure if that's all SoV is good for, that just seems to be the implication!).
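    To pin the first point down, here's a minimal example of my own: for [itex] y' \ = \ f(x) [/itex] the solutions form the one-parameter family

    [tex] y \ = \ \int f(x) dx \ + \ C [/tex]

    & the translations [itex] T_{\epsilon}(x,y) \ = \ (x,y+\epsilon) [/itex] send the solution with constant C to the solution with constant C + ε, i.e. the group really does map solutions into solutions.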

    Solving Euler-Homogeneous equations somehow exploits the fact that the differential equation y' = f(y/x) admits a group of scalings, T(x,y) = (ax,ay), as in this link (bottom of page 23), thus because of this one can use Lie theory to solve these equations as well.
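    Explicitly (my own quick check): under T(x,y) = (ax,ay) both ingredients of the equation are untouched,

    [tex] \frac{d(ay)}{d(ax)} \ = \ \frac{dy}{dx}, \ \ \ \ \frac{ay}{ax} \ = \ \frac{y}{x} [/tex]

    so y' = f(y/x) is invariant under the scalings, & the substitution v = y/x (constant along the orbits of the group) is exactly what renders the equation separable.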


    What am I asking for?:

    I tried to teach myself this material a while ago & failed, built up a bit of a mental block when trying again & failed again, went off asking grad students & professors who hadn't come across much of this material, & so am now here with another attempt. Basically all I need is for someone to explain what's going on with Lie groups in general, in light of what I've said I know about them, & to kind of give the intuition behind what the general process is, how powerful it is, etc... I was thinking maybe along the lines of the first chapter of this book, but whatever you think really, it would just be good to have someone to ask questions of who knows this stuff!
     
  3. Jun 15, 2013 #2
    I'm not an expert but I do have a handful of books on symmetry analysis. They should be available in the (university) library, and the first two mentioned below I found suitable for self-study.
    Peter Hydon (1) has written a very nice introduction to symmetry analysis. It is a very compact book and only covers some aspects, but especially the first couple of chapters are a good read. The book of Hans Stephani (2) is also very good and he treats some subjects in more detail. I like the way they explain things. The book of Bluman and Kumei (3) is a very important classic but I find it harder to read, especially for self-study. It's also a more proof-based mathematics book, which makes it more difficult if you want to understand all the proofs. I highly recommend studying these books, and in this order: 1->2->3, or 2->3, or maybe just 2.

    The idea is that a (local point) symmetry of an ODE will let you reduce its order, and for first order the ODE can be reduced to quadrature. Unfortunately, for first order ODE's there is no systematic way of finding a symmetry, even though you can prove that infinitely many exist.
    But you can work backwards: you can find out what the most general ODE is that has a certain symmetry.
    So you classify first order ODE's and use a transformation based on the known symmetry of that ODE class, like what you would do for a Bernoulli ODE for instance.
    It gets more complicated for the Riccati ODE, but you've probably noticed that already (to find the symmetry you have to solve the Riccati ODE first).
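    To make "reduced to quadrature" concrete, here is the key result as I remember it from Hydon and Stephani (so check the ξ, η conventions against the books): if y' = f(x,y) admits a symmetry whose infinitesimal generator is [itex] X \ = \ \xi(x,y)\partial_x \ + \ \eta(x,y)\partial_y [/itex], then

    [tex] \mu(x,y) \ = \ \frac{1}{\eta(x,y) \ - \ \xi(x,y)f(x,y)} [/tex]

    is an integrating factor for [itex] f(x,y)dx \ - \ dy \ = \ 0 [/itex] (wherever the denominator doesn't vanish), so a single known symmetry hands you the solution by quadrature.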

    Edgardo Cheb-Terrab has written a number of papers, some of them are online on arxiv. He has written a large part of the ODE solvers for the Maple software package, based on symmetry analysis. They are very good papers and he explains very well how you can use symmetry analysis to systematically solve ODE's.

    I was also shocked to learn that 1. there is such a thing as symmetry analysis and 2. that nobody seems to know about it. It is the most powerful tool for solving nonlinear ODE's (for linear ODE's we have differential Galois theory) and it is the tool that connects all ODE solving 'tricks'. One ring to bring them all and in the darkness bind them. But maybe Tolkien wasn't talking about Lie rings... hmmm...
     
  4. Jun 15, 2013 #3

    Stephen Tashi

    Science Advisor

  5. Jun 15, 2013 #4
    Ages ago I read the first few chapters of Emanuel & that is the reason I'm creating this thread - it's the one that gave me the mental block!

    It was a long time ago & I really wasn't ready for it [always a good time to learn something new!] but now I bet I'd speed through it, I just created this thread as a means to get back into the subject with a bit of intuition. I was probably reading that book on my own about the same time your thread was created & am surprised I never found it, as I would have killed to find someone reading Emanuel back then! I see you've at least read to chapter 5 of Emanuel but I wager the book was also a bit over your head back then, have you finished it or are you still up for going through it? We could use this thread to sum each chapter of the book up in our own words & vent any frustrations etc...

    Thanks for the links, I'd checked all these books (& more) after reading Emanuel but the mental block was too much for me back then & I had so much other stuff to do that this side-project was eventually put on hold. Maybe if we go ahead with this idea of reading Emanuel & you see any little things we post that can be added to by referencing one of the books you've mentioned, that would be great (but knowing me I'll need to refer to one or another of them soon enough :p)
     
  6. Jun 15, 2013 #5

    Stephen Tashi

    Science Advisor

    I haven't read Emanuel's book - being a retired guy, I'm very busy and I only attend to those parts of the World where it's willing to supply me with adequate motivation. Yes, I'd like to go through the book and post in a thread about it. I don't know if we should use this thread to do it. I think there is a section of the forum dedicated to particular books - however, I don't usually visit it.

    The only mental block I have about concrete treatments of Lie groups is that they all use the same time honored notation for the 2D case and I don't like the letters they use. I'd prefer to see a notation that uses subscripts to show whether a thing applies to the x or y coordinate.
     
  7. Jun 15, 2013 #6
    Great stuff! While I think the textbook section is more just general discussion about the books, if a mod wants to transfer our posts from here into a thread on Emanuel's book that's cool with me - we can't do it ourselves as we can't create threads in that part of the forum.

    I think the best way to do this is to use the Feynman method, i.e. act as if you were teaching someone else the theory. One way we could do this is by writing up our thoughts on a chapter, chapter by chapter, & add our own thoughts, ideas, questions etc... Another way we could do it is if one of us writes something up & posts it, the other can just add their comments on it, & we take turns or whatever. Another way is to use two different but similar books & write up our thoughts on each (i.e. Emanuel & Cohen, since Emanuel says he follows Cohen closely, but Cohen is so old that it's bound to contain clarity!). Whatever you think really, I'm open to none, all or more suggestions :tongue:
     
  8. Jun 16, 2013 #7

    Stephen Tashi

    Science Advisor

    Let's begin without a plan. I'll find where I put the book tomorrow and post something about it. I think I have the old Cohen book somewhere too. It may take longer to find.

    Tonight, I'll just post some uninformed speculation. Maybe some other forum member will offer to reform me.

    I gather that the high class way to think of physics is to think of a "phase space" (if that's the right term.)

    A low class way to think of the 1-dimensional "falling body" problem is to think of it as one particular problem. In that way of thinking, we are given the mass of the body, the position of the body at some time (usually t = 0), the velocity of the body at that time, and (assuming a constant acceleration due to gravity) we solve a simple differential equation by doing integration and find the formula for the position and velocity of the body at subsequent times. Since the physics is reversible we can also find the position and velocity of the body at previous times.

    The high class way to think of the falling body problem is to think of a space (m,x,v,t) consisting of all possible falling body problems. In general, different falling body problems have different answers. But there will be sets of problems in this space that have the same answer. For example, there will be some point (m1,x1,v1,t=0) that has the same answer ("answer" = formulas for x and v) as the point (m2=m1,x2,v2,t=5), because the answer for (m1,x1,v1,t=0) will predict that the state of the body at time t=5 will be (m1,x2,v2,t=5).

    From this abstract point of view, we can define a transformation of the space into itself that is a function of one parameter, namely time. We let U(T) be the transformation that sends (m,x1,v1,t1) to the point where the body is predicted to be at a time T later. So U(T) "acting" on (m,x1,v1,t) = (m,x2,v2,t+T), where x2 and v2 are the position and velocity predicted for t + T by the answer to the falling body problem with initial conditions (m,x1,v1,t).

    I think the transformations U(T) are a 1-parameter group. U(0) is the identity transformation. The multiplication U(T1)U(T2) is interpreted as applying U(T2) first and then applying U(T1) to its result. This amounts to the same thing as U(T1+T2). U(T) has an inverse transformation of U(-T). (Some part of this argument must depend on the fact that physics tells us that U(T) is a 1-to-1 transformation on the space. Intuitively, this is because a given falling body doesn't have two different answers.) Associativity is just a matter of definition: U(T1)( U(T2) U(T3)) and (U(T1) U(T2)) U(T3) are both defined to amount to applying the transformations in order from right to left.
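    Here's a minimal numerical sketch of that (my own code, assuming constant gravitational acceleration g; the mass just gets carried along unchanged):

    Code (Python):

        g = 9.8

        def U(T):
            """The transformation sending a state (m, x, v, t) to the state
            the falling body is predicted to occupy a time T later."""
            def transform(state):
                m, x, v, t = state
                return (m, x + v*T - 0.5*g*T**2, v - g*T, t + T)
            return transform

        state = (1.0, 100.0, 0.0, 0.0)

        # The group law U(T1)U(T2) = U(T1 + T2):
        print(U(2.0)(U(3.0)(state)))   # (1.0, -22.5, -49.0, 5.0), up to rounding
        print(U(5.0)(state))           # (1.0, -22.5, -49.0, 5.0)

        # The inverse: U(-T) undoes U(T), so their product is the identity U(0):
        print(U(-5.0)(U(5.0)(state)))  # (1.0, 100.0, 0.0, 0.0)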


    So there is a Lie group that is related to differential equations.
     
  9. Jun 16, 2013 #8
    Cool, without a plan & uninformed speculation is good with me :cool:

    That's a nice way to look at a basic physics problem, & just based off it I already see a lot more clearly how Noether's theorem can be understood in terms of Lie groups, i.e. if your problem is invariant under time translations we'd have conservation of energy etc... Obviously one goal of Lie groups for me will be to prove Noether's theorem using them!
     
  10. Jun 16, 2013 #9

    strangerep

    Science Advisor

    I'm interested in this (or possibly merely related) topic in the context of finding maximal symmetry groups for dynamical equations in physics. (E.g., lurking therein is a "3rd way" to "derive" special relativity by finding all such dynamical symmetries of the equation of free motion.)

    Anyway,... the only textbook I've (partially) studied is this one:

    P. J. Olver, "Applications of Lie Groups to Differential Equations", Springer, 2nd Ed.
    https://www.amazon.com/Applications...=sr_1_1?s=books&ie=UTF8&qid=1371439527&sr=1-1

    I haven't yet looked at the other textbooks mentioned earlier in this thread, so I'd be interested if anyone who's read Olver as well as the others can tell me where Olver fits in the hierarchy. I.e., is Olver more/less difficult than the others? Different/expanded subject range? Etc?

    (BTW, I got the feeling from Olver that there's still a lot of open problems and unexplored territory here, since papers continue to be published on the subject. There are some computer programs for finding the equations that must be solved to find the Lie algebra generators -- which is the relatively easy part, imho -- but the task of solving the resulting coupled PDEs is much more tedious.)
     
  11. Jun 17, 2013 #10
    For me I would definitely think Olver is too much - over 130 pages of Lie groups, manifolds, forms etc... all leading up to 6 pages on first order ode's tells me I'll have no idea how to solve any potential type of first order ode after all that. I feel as though I'd be falling foul of my favourite mathoverflow quote:

    While I'd definitely want to know the subject from the perspective Olver takes, I couldn't do it until I knew the classical way of approaching it, akin to the way I wouldn't like to learn ode's on manifolds a la Arnol'd until I'd learned all the classical tricks I could, to be sure I could get by.

    But this could be even more interesting: if you wanted to study that book in concert with us we could all get the best of both worlds, cover all bases :cool:
     
  12. Jun 17, 2013 #11

    strangerep

    Science Advisor

    Well, ok, I'll keep an eye on this thread.

    But I don't have my own copy of Emanuel, and the price for a new copy is a bit steep: around USD 158 on Amazon. The vendors offering "used" copies at more reasonable prices don't offer international shipping. So I'll just have to follow along with whatever extracts Amazon and Google Books will let me read online.
     
  13. Jun 17, 2013 #12

    Stephen Tashi

    Science Advisor

    Solution of ODEs by Continuous Groups by George Emanuel

    Let's start simple.

    Meditation 1 "Differential Equations"

    Chapter 1, p3:

    The Leibnitz notation is an impediment to understanding things precisely. It's difficult to answer simple questions about what it means. For example:

    Functions have domains and ranges and equations have solution sets. An equation is a propositional function whose range is the set of two values {True, False}. A solution to an equation is a value in the domain of the propositional function that makes the function return True.

    If a solution to a differential equation is a function, why does this differential equation have a "function of its three arguments"? Shouldn't it be a propositional function whose domain is a single function represented by a single argument?

    Is this a propositional function whose domain is a set of functions, or is its domain a set of pairs of functions [itex] x [/itex] and [itex] y [/itex]?


    I wrote answers to a few such questions. This topic might be too elementary to be of interest, so I won't post those thoughts now.

    Meditation 2: "Separation Of Variables"

    "Separation Of Variables" is defined on page 3 as success in manipulating the differential equation into a certain form. I'm used to the context where "separation of variables" means expressing a function [itex] f(x,y) [/itex] as [itex] h(x) g(y) [/itex]. This context involves a function of two variables. So can we relate this context to the "separation of variables" method in manipulating ODEs?

    A purely mathematical digression is the question of whether there is a useful way to define a more general "separation of variables". For example, if we can write [itex] f(x,y) [/itex] as the sum of two functions [itex] r(x) + s(y) [/itex] then we have, in a manner of speaking, separated the variables. A generalized definition would be "A separation of variables of the function [itex] f(x,y) [/itex] of two variables is a binary operation [itex] B [/itex], a function [itex] g(x) [/itex] of [itex] x [/itex] alone and a function [itex] h(y) [/itex] of [itex] y [/itex] alone such that [itex] f(x,y) = B(g(x),h(y)) [/itex]". I wonder if that leads to anything interesting.
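    As a toy example of my own: [itex] f(x,y) = e^{x+y} [/itex] separates both ways,

    [tex] f(x,y) \ = \ e^x e^y \ = \ B_1(e^x,e^y), \ \ \ \ \ln f(x,y) \ = \ x \ + \ y \ = \ B_2(x,y) [/tex]

    with [itex] B_1(u,v) = uv [/itex] and [itex] B_2(u,v) = u+v [/itex], so at least these two choices of [itex] B [/itex] are related by conjugating with [itex] \exp [/itex].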

    Meditation 3: The Book In A Pea Shell

    The chapter gives a summary of the content of the book. It says that if a differential equation is "invariant" under the transformations defined by a (continuous) group then this reveals what substitutions we can make to transform the differential equation into a separable differential equation. If it teaches me that, I'll be happy.
     
  14. Jun 17, 2013 #13
    On Meditation 1: "Differential Equations"

    Since we're mainly working with functions of only one or two variables, there are three possible representations we're going to have to be fluent with in dealing with this stuff; there is no real issue of one being more fundamental than the others as far as differential equations are concerned, & I can think of situations where we're gonna need all three... Furthermore I think you're using a definition of function used in logic, whereas these definitions are actually valid if you think in terms of axiomatic set theory & not dumbed down or high-school bastardizations of concepts or anything, something I can justify if you really want to get into the nitty gritty :tongue:

    As a consequence of this perspective of functions from three viewpoints we can understand the solution of differential equations of the form

    [tex]\frac{dy}{dx} \ = \ f(x,y)[/tex]

    as finding an explicit function that acts as a solution, & solving

    [tex] M(x,y)dx \ + \ N(x,y)dy \ = 0[/tex]

    as finding an implicit (one parameter family of) function(s) that acts as a solution. The craziest implication of this, however, is given under "Lesson 7: Stay away from differentials" in this essay, which berates the Leibniz notation pretty badly yet illustrates the deep relationship of the parametric perspective of functions to the other two & offers an interpretation of what the notation actually means via trajectories & vector fields - thus all three methods have some real value! In fact, already just by thinking in terms of different representations of functions we've derived a geometric interpretation of integrating factors! Let's see if we can use this Lie theory thing to shed any light on this picture, or get a Lie-theoretic version of it.

    On Meditation 2: "Separation of Variables"

    In the context of ODE's, separation of variables is literally always defined either via the explicit representation, as stating that y' = f(x,y) is separable if f(x,y) = g(x)h(y), or via the implicit representation, as stating that M(x,y)dx + N(x,y)dy = 0 is separable if it can be written as A(x)B(y)dx + C(x)D(y)dy = 0. Emanuel acknowledges this by stating that the general first order ode f(x,y,y') = 0 is separable if it can be reduced to the form M(x,y)dx + N(x,y)dy = 0. Now it's obviously an abuse of notation to write dx & dy terms, & we should be phrasing everything in terms of differential forms if we want to be rigorous, but it's an abuse of notation that, according to the article I linked to above, actually encodes the parametric definition of a function within it, & is extremely useful when deriving integrating factors allowing us to solve ode's, thus we'll have to live with it :tongue:

    As regards separation of variables having any general kind of definition, page one of this paper & the links therein should indicate there isn't a finished definition yet, however there are books on applying Lie theory to pde's (that I'm using this thread to work towards) that approach the topic of separation of variables as best as is theoretically possible (as far as I know, more on this later!). I'd imagine your mentioning the additive separation of variables is motivated by something like the additive method used in the Hamilton-Jacobi equation, that's the only place I've ever seen it used so far, I have no idea if it would work for ode's so if you can find an example of it working I'd love it!

    I'll try & get back with something more substantive asap :cool:

    Cool, definitely do keep an eye on it. Emanuel's book is extremely similar to the book by Cohen I linked to, so that's a good option if you're interested.
     
  15. Jun 17, 2013 #14

    Stephen Tashi

    Science Advisor

    Those approaches are already too imprecise to satisfy a stickler like me. I'll post a thread in the General Math section (someday) about how to precisely define a "differential equation" instead of digressing on it here.

    Let's show the two definitions are equivalent - if they are.
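    Here is a quick attempt of my own. If [itex] M(x,y) = A(x)B(y) [/itex] and [itex] N(x,y) = C(x)D(y) [/itex] then

    [tex] \frac{dy}{dx} \ = \ -\frac{M(x,y)}{N(x,y)} \ = \ \left( -\frac{A(x)}{C(x)} \right) \left( \frac{B(y)}{D(y)} \right) \ = \ g(x)h(y) [/tex]

    and conversely [itex] y' = g(x)h(y) [/itex] can be written as [itex] g(x)h(y)dx - dy = 0 [/itex], which has the product form with [itex] A = g,\ B = h,\ C = -1,\ D = 1 [/itex]. So, away from zeros of the denominators, the two definitions agree.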

    I like Rota's paper http://www.ega-math.narod.ru/Tasks/GCRota.htm that you linked. I don't know how to view equations like [itex] M(x)dx + N(y)dy = 0 [/itex] in the context of differential forms.



    I don't understand that paper, but I do understand that my definition is an utter failure. The problem with mine is that a function f(x,y) of two variables "is" a binary operation; namely, it can be used to define the binary operation B(x,y) = f(x,y). Thus f(x,y) = B(h(x), g(y)) where h and g are both the identity function. Perhaps the search for a good generalization of "separation of variables" must focus on using "simple" binary operations, however we define those.


    Since the group invariance is going to reveal the proper substitutions to make, it would be useful to understand if using the technique of substitution just amounts to changing coordinates. Does it? Or are there some technicalities?
     
  16. Jun 19, 2013 #15

    Stephen Tashi

    Science Advisor

    Solution of ODEs by Continuous Groups by George Emanuel

    Chapter 2 Continuous One-Parameter Groups-I

    Meditation 4: Group Concept


    [ Emanuel doesn't explain many of the concepts about groups that are emphasized in a course on group theory, so apparently they aren't needed. I'll digress to cover a few of them, as a review for my own sake.]

    I memorized the chant "closed, associative, identity, inverse" when I first encountered groups.

    I prefer to think of a group as a set of 1-to-1 functions from some set (or "space") onto itself. The group operation is composition of functions. So the group operation, which we denote as if it were multiplication [itex] (f)(g) [/itex], is [itex] f(g(x)) [/itex]. (Sometimes people prefer to define it "backwards", so that [itex] (f)(g) [/itex] means [itex] g(f(x)) [/itex]. Let's not do that.)

    The mathematical definition of a group is more abstract than this way of thinking. A group has a set of elements and these can be arbitrary things - they don't have to be functions. A group has a binary operation defined on it that need not be defined using the composition of functions. "Closed, associative, identity, inverse" is a chant for remembering what properties the set and the operation must satisfy.

    Emanuel's approach is to state the abstract definition of a "group" and then focus his attention on "groups of transformations". "Transformation" is just another word for "function", so thinking of a group as a set of functions is consistent with his approach.

    Also there is a sense in which nothing is lost by thinking of groups as sets of functions. A result called "Cayley's Theorem" says that any abstract group can be exactly imitated by some group of functions that are 1-to-1 mappings of some set onto itself. Of course it doesn't actually say "exactly imitated by", it says "is isomorphic to", but I haven't defined "is isomorphic to" yet in this article.

    Thinking of the group operation as the composition of functions makes it obvious that the operation (even though it is customarily called "multiplication") need not be commutative. It's clear that [itex] f(g(x)) [/itex] and [itex] g(f(x)) [/itex] can be different functions.

    To be a group, a set of 1-to-1 functions [itex] G [/itex] can't be just any arbitrary set of 1-to-1 functions. There have to be enough of them to satisfy the properties of a group.

    "Closed": if [itex]f [/itex] and [itex] g [/itex] are any two functions in [itex] G [/itex] then [itex] f(g(x)) [/itex] also must be in it. Notice the condition that the functions map of [itex] G [/itex] map "some space onto itself" is important. If both [itex] f [/itex] and [itex] g [/itex] mapped apples to oranges then [itex] f( g(x)) [/itex] wouldn't be defined since it gives [itex] f [/itex] the job of mapping an orange to something.

    "Associative": This holds for composing functions.. If you think about what is done to evaluate [itex] ((f)(g))(h) [/itex] vs [itex] (f)((g)(h)) [/itex] , you see that the only choice in both cases is apply [itex] f,g,h [/itex], working from right to left, so to speak. It doesn't matter whether the functions are mappings of the real numbers, or points in 2-D space etc., you still apply the functions in that order. You begin by finding [itex] h(x) [/itex] then do [itex] g(h(x)) [/itex] then do [itex] f(g(h(x)) [/itex].

    "Identity": [itex] G [/itex] must contain the identity function. One important consequence of this is that if you have the thought "Lets divide the group [itex] G [/itex] into two non-overlapping smaller groups", you are out of luck. The identity is a unique element of [itex] G [/itex]. (This can be proven.) So if you divide [itex] G [/itex] into two non-overlapping sets, only one of them has the identity in it. The other one can't be a group.

    "Inverse": For any function [itex] f [/itex] in [itex] G [/itex], [itex] G[/itex] must also contain [itex] f^{-1} [/itex] the inverse function of [itex] f [/itex]. Since were assuming 1-to-1 functions, there is no problem with the existence of [itex] f^{-1} [/itex], but you must make sure [itex] G [/itex] contains [itex] f^{-1} [/itex].

    A cheap way to create a group is to pick a set [itex]\Omega [/itex] and say "Let [itex] S [/itex] be the group consisting of all 1-to-1 functions that map [itex] \Omega [/itex] onto itself". (Sophisticated people will understand that you mean the group operation to be defined as the composition of functions.)

    The cheap way makes it easy to verify that the set of functions satisfies "closed, associative, identity, inverse". For example, if [itex] f [/itex] and [itex] g [/itex] are 1-to-1 functions in [itex] S [/itex] then [itex] f(g(x)) [/itex] is defined and is also a 1-to-1 mapping from [itex] \Omega [/itex] onto itself. So [itex] f(g(x)) [/itex] must be in [itex] S[/itex], since we said [itex]S [/itex] contains all such 1-to-1 functions. Thus you have the "Closed" property handed to you.

    If there are a finite number of elements in a group we say the group is a "finite group". If we have a finite group of functions, the term "finite group" means that the group has a finite number of functions. It doesn't imply that the domain and range of the functions are finite in any respect. It also doesn't mean that the functions are bounded in some way.

    I mentioned that we lose nothing by thinking of a group as a set of 1-to-1 functions that map some space [itex] \Omega [/itex] onto itself. For finite groups there is a more specific result:

    Any finite group [itex] G [/itex] is exactly imitated by some group of functions that are 1-to-1 mappings of a finite set [itex] \Omega [/itex] onto itself.

    Note: this doesn't say you must use the group of all possible 1-to-1 mappings of [itex] \Omega [/itex] onto itself. It's possible to have a group of 1-to-1 functions mapping [itex]\Omega [/itex] onto itself that has fewer than "all possible" such functions.

    For example, let [itex] S [/itex] be the group of all possible 1-to-1 mappings of the set [itex] \{\ 1,2,3,4 \ \} [/itex] onto itself. ( [itex] S [/itex] is called the "symmetric group" on that set.) Let [itex] H [/itex] be the set of 1-to-1 functions of the set [itex]\{\ 1,2,3,4 \ \}[/itex] onto itself that map the element [itex] 4 [/itex] to itself. It turns out that the functions in [itex] H [/itex] also form a group. [itex] H [/itex] has fewer functions in it than [itex] S [/itex].
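    If you want to check that example by brute force, here is a little sketch of mine in Python (the names compose and inverse are my own); each 1-to-1 function of [itex] \{\ 1,2,3,4 \ \} [/itex] is represented as the tuple of its values:

    Code (Python):

        from itertools import permutations

        # The tuple p represents the function sending i to p[i-1].
        S = set(permutations((1, 2, 3, 4)))   # all 24 one-to-one maps
        H = {p for p in S if p[3] == 4}       # the ones mapping 4 to itself

        def compose(f, g):
            """The group operation (f)(g), i.e. the map x -> f(g(x))."""
            return tuple(f[g[x - 1] - 1] for x in (1, 2, 3, 4))

        def inverse(f):
            """The inverse function of f."""
            return tuple(sorted((1, 2, 3, 4), key=lambda x: f[x - 1]))

        # Closed, identity, inverse (associativity always holds for composition):
        assert all(compose(f, g) in H for f in H for g in H)
        assert (1, 2, 3, 4) in H
        assert all(inverse(f) in H for f in H)

        print(len(S), len(H))  # 24 6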

    I'll continue this meditation in another post and define "exactly imitated". I conclude this post by telling you about some disagreeable stuff we've skipped.

    One of the painful adjustments that students must make in group theory is to learn a new definition for "permutation". A permutation (in group theory) is defined to be a 1-to-1 function of a set onto itself. So, in group theory, a permutation is no longer "an arrangement of n distinct objects". Since a permutation is a function, we can talk about multiplying permutations together, because we can compose two functions, and multiplying two permutations is defined as composing them as functions.

    The poor student who is longing for the days when a permutation was an arrangement of things gets further confused by the shorthand notation used to describe a permutation as a function. The notation somewhat looks like it gives an arrangement of things, but if you try to interpret it that way you get hopelessly lost.

    The compensation for this is that the student can rephrase the result about finite groups given above to sound more imposing. It becomes:

    Any finite group can be exactly imitated by some permutation group on a finite set [itex] \Omega [/itex].

    We just used the phrase "permutation group on a finite set [itex] \Omega [/itex]" instead of saying "group of 1-to-1 functions of a finite set [itex] \Omega [/itex] onto itself".
     
  17. Jun 19, 2013 #16
    Judging off your explanation of permutations I see that you've read this Arnold essay :wink:

    If anybody reads this thread, has read Arnold's ODE's book & is interested in contributing, it would be amazing if they could explain in a bit of detail how Arnold's exposition of Lie theory in there relates to what we're doing, or any other advanced book really :cool:
     
  18. Jun 19, 2013 #17
    Cohen Ch.1 Part a: "Transformations"

    Structure of Cohen's Book:
    Cohen's book is titled "An Introduction to the Lie Theory of One-Parameter Groups". The introduction says that a knowledge of ode's is not strictly necessary for this book, thus it should be fine for people who are only now learning ode's to read along - hopefully this thread will make it easier for them! My favourite thing about this book is that the intro says it retains Lie's original proofs & mode of presentation to a large extent! The hope is that translating this stuff to manifolds will become a simple exercise in formalism & notation. The book does some basic theory, then first order ode's, some second order ode's, linear first order pde's, then more second order ode's.

    Structure of Chapter 1:
    The chapter is broken up into 11 sections, but really I think there are only two topics discussed, the first being "transformations" & the second being "invariants", thus I'll post on transformations first.

    Chapter 1: "Lie's Theory of One-Parameter Groups"
    01.01 - Groups of Transformations
    Motivation for Lie Group of Transformations
    In this motivation section I'll go through Cohen's explanation, point out how it differs from the modern definitions & go through issues of notation etc... I think it'll be fascinating to see the history & see if we all understand it, call me on, or add, anything you can!
    Cohen says that a set of transformations constitutes a group if:
    "the product of any two transformations is equal to some transformation of the aggregate".
    In other words, according to Cohen the transformations

    [tex]x_1 \ = \phi(x_0,y_0,\alpha), \ y_1 \ = \psi(x_0,y_0,\alpha)[/tex]

    form a group if given

    [tex]x_2 \ = \phi(x_1,y_1,\beta), \ y_2 \ = \psi(x_1,y_1,\beta)[/tex]

    we have that:

    [tex]x_2 \ = \phi(x_1,y_1,\beta) \ = \phi(\phi(x_0,y_0,\alpha),\psi(x_0,y_0,\alpha),\beta) \ = \phi(x_0,y_0,\gamma(\alpha,\beta))[/tex]

    [tex]y_2 \ = \psi(x_1,y_1,\beta) \ = \psi(\phi(x_0,y_0,\alpha),\psi(x_0,y_0,\alpha),\beta) \ = \psi(x_0,y_0,\gamma(\alpha,\beta))[/tex]

    He then labels the transformations by Tα etc... & rephrases the above as TβTα = Tγ (actually he does it in the reverse order, which Stephen mentioned we wouldn't use because it sucks!). In a comment he says that φ & ψ are real-valued analytic functions of the variables & further says they are independent w.r.t. x & y, as in they're not functions of each other. Notice though that he doesn't give an actual definition of a group, it's more like he's saying that this set of functions forms a group because of reason X, or it could just be because the book is so old... This definition looks to me like it encodes closure only; he could be relying on the fact that the set of functions satisfies associativity trivially, something that makes a lot of sense since Emanuel stresses the point that we'll never need to check associativity, or he could just be using an earlier definition of a group which actually relies on the structure of the set of functions under composition... It's extremely interesting though that he gives this as his starting point, because this is basically the definition of a one-parameter Lie group of transformations that we'll actually be using, more or less, whereas it seems here that he's actually saying this is the (1911!) definition of a group! I'm not sure, in any case this is just an interesting side-note.
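    A quick concrete check of Cohen's composition law (my own example): for the scalings [itex] \phi(x_0,y_0,\alpha) = \alpha x_0 [/itex], [itex] \psi(x_0,y_0,\alpha) = \alpha y_0 [/itex] we get

    [tex] x_2 \ = \ \beta x_1 \ = \ \beta(\alpha x_0) \ = \ (\alpha\beta)x_0, \ \ \ y_2 \ = \ (\alpha\beta)y_0 [/tex]

    so γ(α,β) = αβ & the product of any two scalings is indeed "some transformation of the aggregate".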

    Then he discusses the concept of an inverse & calls a group like the above with an inverse a Lie group. Thus if the transformations
    [tex]x_1 \ = \phi(x_0,y_0,\alpha_0), \ y_1 \ = \psi(x_0,y_0,\alpha_0)[/tex]
    can be put in the form
    [tex]x_0 \ = \phi(x_1,y_1,\alpha_1(\alpha_0)), \ y_0 \ = \psi(x_1,y_1,\alpha_1(\alpha_0))[/tex]
    we're dealing with a Lie group according to these definitions. This tallies with the modern definition as far as I can see, based on the implicit assumption of analyticity & the fact everything is real here, but things can be far more general & again we'll have to be more careful with our definitions (though you have to love these classical definitions in that everything's so natural!).

    Finally, since we've allowed inverses, performing two mutually inverse transformations gives the identity. In other words there must always exist a parameter value δ such that
    [tex]x_1 \ = \phi(x_0,y_0,\delta) \ = \ x_0, \ y_1 \ = \psi(x_0,y_0,\delta) \ = \ y_0[/tex]
    He then notes in a comment that there exist groups for which this is not possible, but they won't be considered here. It could be that he's taking a lot for granted when he shows the identity axiom as if it were a trivial consequence of his construction, or else it could be that the identity axiom is actually not part of the classical definition of a group, but either way you have to love how naturally the identity axiom falls out of this, even though the modern definitions in group theory would place the identity axiom before the inverse axiom (i.e. magma ---> semi-group ---> monoid ---> group).

    Now, how do we reconcile this with the modern definitions?
    The above construction implicitly encodes three types of mathematical structure (as discussed on this page). The group structure is encoded in the entire explanation, albeit in a weird way... I don't see any mention of associativity, his definition seems like it's based on closure. However it also seems like he never even defined a group, so it could just be that he is not defining groups, he's giving an example & omitting axioms, relying on the set structure as obviously implying associativity, who knows... The topological structure is encoded in the continuity of φ & ψ & their inverses. In terms of modern group theory, this extra structure is not a trivial addition, it invites a world of complexity! The manifold structure is encoded in the analyticity of φ & ψ, another monster of complexity... Luckily the stuff that translates to manifolds for us will be just basic calculus, so no need to worry. Thus I found a great definition in Bluman that will work for us without defining manifolds or topological groups, one that suits our needs & is still perfectly rigorous.

    Definition of Lie Group of Transformations

    The set S of mappings of the form
    [tex] T \ : \ \mathbb{R^2} \ \times \ M \ \rightarrow \ \mathbb{R^2} \ | \ (\vec{x}_0,t) \ \mapsto \ T(\vec{x}_0,t) \ = \vec{x}_1 [/tex]

    form a one-parameter Lie group of transformations, with respect to the group (M ⊆ ℝ,ψ),
    under the operation
    [tex] \phi \ : \ S \ \times \ S \ \rightarrow \ S \ | \ (T_1,T_0) \ \mapsto \ \phi(T_1,T_0)[/tex]

    where the map φ(T₁,T₀) is defined by

    [tex] \phi (T_1,T_0) \ : \ \mathbb{R^2} \ \times \ M \ \rightarrow \ \mathbb{R^2} \ | \ ( \vec{x}_0,t_0) \ \mapsto \ \phi (T_1,T_0) ( \vec{x}_0,t_0) \ = T_1 (T_0 ( \vec{x}_0,t_0),t_1) \ = \ T_1 ( \vec{x}_1,t_1) \ = \vec{x}_2 [/tex]

    provided that:

    a) Topology: t varies continuously on M ⊆ ℝ such that T maps x₀ to T(x₀,t) = x₁ injectively,
    b) Group Theory: There is an identity for a certain t (= 0 or 1 when it makes sense), T(x₀,0) = x₀, & the operations φ & ψ interact as:
    [tex]\phi (T_1,T_0) ( \vec{x}_0,t_0) \ = T_1 (T_0 ( \vec{x}_0,t_0),t_1) \ = \ T_0 ( \vec{x}_0,\psi(t_0,t_1)) \ = \vec{x}_2[/tex]
    c) Manifold Theory: ψ in (M,ψ) is analytic w.r.t. both arguments & each T on ℝ²×M is analytic w.r.t. t & infinitely differentiable w.r.t. x.
    Thus in this definition we have a group (M,ψ) encoded within our "one-parameter Lie group of transformations" (S,φ). Note I included ℝ² in the definition (nice notation) but more generally it's for some subset of ℝⁿ. When you grasp what I've written I really encourage you to read page 36 of Bluman that I linked to, just to check what I've written, as he spells it out a bit more than I did. Note that my x₁ = (x₁,y₁) = (x₁(x₀,y₀,α),y₁(x₀,y₀,α)); in Emanuel he basically just says that the transformations x₁(x₀,y₀,α) & y₁(x₀,y₀,α) should form a group w.r.t. the α term & ignores a lot of the notation. This is a bit of a monster definition though, let's see how we actually use it:

    Examples of Lie Group of Transformations
    a) Translations T(x,y,ε) = (x + ε,y)
    b) Rotations T(x,y,Ө) = (xcos(Ө) - ysin(Ө),xsin(Ө) + ycos(Ө))
    c) Affine Transformations of the form T(x,y,λ) = (λx,y)
    d) Similitude Transformations T(x,y,λ) = (λx,λy)
    e) Arbitrary Examples
    T(x,y,λ) = (λx,y/λ)
    T(x,y,λ) = (λ²x,λy)
    T(x,y,λ) = (λ²x,λ²y)
    T(x,y,λ) = (x + 2λ,y + 3λ)
    T(x,y,λ) = (λx + (1 - λ)y,y)
    T(x,y,Ө) = (xcosh(Ө) + ysinh(Ө),xsinh(Ө) + ycosh(Ө))
    f) Non-Examples
    T(x,y,λ) = (λ/x,y)
    g) Re-Parametrizations
    λ = sin(Ө) in the rotation gives T(x,y,λ) = (x√(1 - λ²) - λy,λx + y√(1 - λ²)) etc...

    But how do we show that any of these are Lie groups of transformations? The quick way is to just look at what you're given & verify the λ term turns everything into a group under composition (the rotation example is a good one to work out on pen & paper to see this explicitly - or on a computer, see the sketch just after this list!). Being a bit more careful, I'd use the a), b), c)'s:
    a) Define (M,ψ) to be a group in such a way that T (in say T(x,y,λ) = (λx,y/λ) or T(x,y,ε) = (x + ε,y)) makes sense, is continuous & is injective w.r.t. t (thus (M,ψ) in T(x,y,λ) = (λx,y/λ) couldn't be (ℝ,+) here since we'd have division by zero, whereas in T(x,y,ε) = (x + ε,y) it could be ℝ!)
    b) Define your identity (T(x,y,1) = (1x,y/1) = (x,y) & T(x,y,0) = (x + 0,y) = (x,y)) & ensure the whole T(x,ψ(δ,ε)) = T(T(x,δ),ε) axiom holds:
    T((x,y),+(δ,ε)) = (x + δ + ε,y) = T((x + δ,y),ε) = T(T((x,y),δ),ε)
    c) I'm not really sure yet, I think this is just part of the construction to ensure smoothness etc... Come back to it (it's not even referred to in any of the examples I've seen but I'm sure we'll find a serious use for it).
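    Here's the computer version of the pen & paper rotation check, a sketch of my own using SymPy:

    Code (Python):

        # Composing the rotation by a with the rotation by b gives the rotation
        # by a + b, i.e. T(T(p,a),b) = T(p,a+b), and a = 0 gives the identity.
        import sympy as sp

        x, y, a, b = sp.symbols('x y a b', real=True)

        def T(px, py, t):
            return (px*sp.cos(t) - py*sp.sin(t),
                    px*sp.sin(t) + py*sp.cos(t))

        x1, y1 = T(x, y, a)          # rotate by a
        x2, y2 = T(x1, y1, b)        # then rotate by b
        x12, y12 = T(x, y, a + b)    # rotate by a + b in one go

        print(sp.simplify(x2 - x12))  # 0
        print(sp.simplify(y2 - y12))  # 0
        print(T(x, y, 0))             # (x, y), the identity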

    Theory of Lie Group of Transformations
    The main theoretical tool I can gather at this stage is the infinitesimal transformation & its consequences, which I'll explain soon; however, in Emanuel there's a nice proof of something Cohen just states with examples, like the one I gave above in g) Re-Parametrizations.
    Basically if we have an injective coordinate transformation F(x₀,y₀) = (u,v) = (u(x₀,y₀),v(x₀,y₀)) we can invert to get (x₀,y₀) = F⁻¹(u,v) = (x₀(u,v),y₀(u,v)), then

    [tex] (x_1,y_1) \ = T(x_0,y_0,\alpha) \ = \ T(x_0(u,v),y_0(u,v),\alpha) \ = \ T'(u,v,\alpha)[/tex]

    which implies that the group methods we'll be using to solve ode's will be coordinate independent!

    Interpretation of Lie Group of Transformations
    Cohen gives a geometric interpretation by talking about the transformations T as transforming points (x₀,y₀) to other points (x₁,y₁) along some curve (due to continuity w.r.t. α in T(x,y,α)!), thus spanning out a 'path-curve' of the group (M,ψ). In other words, as α varies, T transforms points along a curve to other points on that curve, hence the name "point transformation" is sometimes used in this context (e.g. Emanuel), since we're just transforming points to other points on the same curve. For an illustration of my point about this classical way of doing things being an exercise in notation when translating to the modern context - check out the Bluman link I gave, end of page 36 & the picture on page 37, to see this classical explanation re-interpreted in terms of flows... This implies that we're working with a parametric representation of some curve (!!!) & thus if we eliminate the parameter we get our original curve (that Rota essay ringing a bell?).

    I'll get to the infinitesimal transformations as soon as I can.
     
  19. Jun 23, 2013 #18

    Stephen Tashi

    Science Advisor

    Solution of ODEs by Continuous Groups by George Emanuel

    Chapter 2 Continuous One-Parameter Groups-I

    Meditation 4 continued: Group Concept

    My explanation of groups being isomorphic got so out of hand that to avoid cluttering this thread I posted it as 3 messages in another section of the forum! https://www.physicsforums.com/showthread.php?p=4424380&posted=1#post4424380
     
  20. Jun 25, 2013 #19

    Stephen Tashi

    Science Advisor

    Solution of ODEs by Continuous Groups by George Emanuel

    Chapter 2 Continuous One-Parameter Groups-I

    Meditation 5 Group Concept - continuous transformation groups - their notation


    I don't like the notation used by Emanuel. It's apparently the traditional way to do things, but as an exercise for my own benefit, I'm going to use subscripts to indicate whether things apply to the [itex] x [/itex] or [itex] y [/itex] coordinate instead of using different Greek letters for each.

    The groups considered in this chapter are sets of functions that are 1-to-1 mappings of the plane onto itself. They won't be the set of all such functions; they will be special subsets of it. To completely describe such a function [itex] T [/itex] we will need two real valued functions, one to describe how it maps the x-coordinate and one to describe how it maps the y-coordinate. For the time being, I'll represent this as [itex]T(x,y) = (\ T_x(x,y),\ T_y(x,y)\ ) [/itex].

    It's tempting to call [itex] T [/itex] a vector valued function of a vector. Technically, a pair of coordinates is not necessarily a vector, so I won't write [itex] T(x,y) [/itex] as [itex]\vec{T}(x,y) [/itex] or [itex]\vec{T}(\vec{p})[/itex]. You'll just have to remember that [itex] T [/itex] is a pair of functions, one for each coordinate.

    An example Emanuel uses is the group [itex] S [/itex] of all functions that rotate the points in the plane about the origin. (In group theory texts, this group is called SO(2), pronounced "ess-oh-two", or "the special orthogonal group in two dimensions".) The group operation is the composition of functions. The composition of two rotation functions is a rotation function. By saying that [itex]S[/itex] is the group of all rotations of the plane about the origin, we take care of "closed" and "identity" and "inverse". (We regard the identity function as a rotation of zero degrees.) "Associative" always holds for the composition of functions.

    As an example, one element [itex] T [/itex] in the group [itex] S [/itex] is the rotation of points (counterclockwise) by the angle [itex]\frac{\pi}{4} [/itex].
    [tex] T(x,y) = (\ \cos(\frac{\pi}{4}) x - \sin(\frac{\pi}{4})y,\ \sin(\frac{\pi}{4}) x + \cos(\frac{\pi}{4})y \ ) [/tex]

    or we can represent [itex] T [/itex] as the pair of functions

    [itex] T_x(x,y) = \ \cos(\frac{\pi}{4}) x \ - \ \sin(\frac{\pi}{4})y [/itex]
    [itex] T_y(x,y) = \ \sin(\frac{\pi}{4}) x \ + \ \cos(\frac{\pi}{4})y [/itex]

    When I try to deduce the formulas for a rotation from simple geometry, I get confused. I only find simple geometric diagrams useful for determining the signs and placement of the trig functions in the formulas, given that I do remember that [itex] \sin [/itex] and [itex] \cos [/itex] are involved. It's helpful to have studied the particular kind of vector valued functions of vectors that are represented by matrices and to know that rotations of a vector are given by matrices of the form:

    [tex] \begin{pmatrix} T_x \\ T_y \end{pmatrix} = \begin{pmatrix} \cos(\alpha) & -\sin(\alpha) \\ \sin(\alpha) & \cos(\alpha) \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} [/tex]

    The group [itex] S [/itex] has an uncountable infinity of elements. There is an element for each possible rotation angle. Let's look for a way to describe them without assigning a different letter to each individual function in the group. The natural way is to put the rotation angle [itex] \alpha [/itex] into the notation. There are two common approaches to accomplish this, indexes and coordinates. I think Emanuel is using coordinates.

    It's an interesting digression to compare the two approaches.

    The natural notation for indexing an element of [itex] S [/itex] would be [itex] T_\alpha [/itex] to indicate the function that does a rotation by angle [itex] \alpha [/itex]. We can ignore protests from those poor lost souls who think that indexes must be integers. High class mathematicians know that a set of real numbers can also be used to index things. The requirement is that we establish a 1-to-1 function between the set used to index and the things that are indexed. (For example, people who study continuous stochastic processes that take place in time do this - whether they know it or not. The high-class definition of a stochastic process is that it is an indexed collection of (not necessarily independent) random variables. A random process in time is a collection of random variables indexed by the set of real numbers that we use for times).

    The natural notation for assigning coordinates is just to list the coordinates in parentheses. We ignore protests from poor lost souls who think that functions cannot be points. High class mathematicians know that anything can be considered a point in some space.

    The indexing method requires that an element of the group have 1 and only 1 index. By contrast, in coordinate systems, the same "point" can have several different coordinates. (For example, for points in polar coordinates, [itex] (r,\theta) = (r, \theta + 2 \pi) = (r,\theta + 4\pi) [/itex].) In the example of the group [itex]S[/itex], Emanuel uses expressions like [itex] \alpha + \beta [/itex] when adding angles and he doesn't say anything about having to modify the result so it lies in the interval [itex] [0,2\pi) [/itex]. So I think he's using coordinates, not indexes.

    The groups in this chapter are "1-parameter groups". We will consider them as points in a 1-dimensional space, so they have 1 coordinate. (I'm going to call the "parameter" the "coordinate".) The usual way to denote a "point" with a 1-dimensional coordinate is just to write a variable representing that number. If we did that, a function in the group [itex] S [/itex] with coordinate [itex] \alpha [/itex] would be denoted by [itex]\alpha =(\alpha_x(x,y),\alpha_y(x,y) ) [/itex]. Emanuel prefers to put the coordinate of the function in the argument list with [itex] (x,y) [/itex]. So a function gets to have both a name like [itex]T [/itex] and a coordinate like [itex] \alpha [/itex].

    I'll go along with that, and the full notation for a function [itex]T [/itex] in [itex]S[/itex] will be:

    [tex] T(x,y,\alpha) = ( \ T_x(x,y,\alpha),\ T_y(x,y,\alpha) \ )[/tex].

    The fact that a function in [itex] S [/itex] is denoted by both a name and a coordinate can create some minor confusion. For example, consider the typical math-sounding phrase "Let [itex] T(x,y,\alpha)[/itex] and [itex] W(x,y,\alpha) [/itex] be two functions in the group [itex] S [/itex] ....". The two functions are actually the same function because they have the same coordinate [itex]\alpha [/itex].

    The group operation is, of course, composition of functions. If we didn't have to worry about the coordinates of functions, we'd be in familiar territory. For example if [itex] T [/itex] and [itex] W [/itex] are two functions in [itex]S [/itex] then since the group operation of "multiplication" is defined by the composition of functions:

    [itex] V = (T)(W) [/itex] (here the product notation means the group operation)

    [tex] = (\ T_x( W_x(x,y), W_y(x,y)),\ T_y(W_x(x,y),W_y(x,y))\ ) [/tex].

    That's like what you see when you compose 2-D vector-valued functions of 2-D vectors.

    But when we write all our functions with the family name [itex]"T" [/itex], they are distinguished only by their coordinate. So we must compute compositions of functions like [itex] T(x,y,\alpha) [/itex] and [itex] T(x,y,\beta) [/itex].

    One notation for a composition is

    [itex] T(x,y,\theta) = T(x,y,\alpha) T(x,y,\beta) [/itex] (indicating the group operation)

    [itex] = (\ T_x(T_x(x,y,\beta),T_y(x,y,\beta),\alpha),\ T_y(T_x(x,y,\beta),T_y(x,y,\beta),\alpha) \ ) [/itex]

    It's slightly easier on the eyes to write the info for each result as a separate equation:

    [itex] T_x(x,y,\theta) = \ T_x(\ T_x(x,y,\beta), \ T_y(x,y,\beta),\alpha) [/itex]
    [itex] T_y(x,y,\theta) = \ T_y(\ T_x(x,y,\beta), \ T_y(x,y,\beta),\alpha) [/itex]

    That notation will pass muster in a class of students who are already lost. However, suppose someone asks "How do you find [itex] \theta [/itex]?"

    In the concrete example at hand, we have a group of rotations. If you apply a rotation of angle [itex] \beta [/itex] and then apply a rotation of [itex] \alpha [/itex] this amounts to applying a rotation of [itex]\alpha + \beta [/itex]. So, in the example at hand [itex] \theta = \alpha + \beta [/itex].

    Now suppose we are in the general situation and the group of functions isn't known to be rotations. What is the "honest" math notation? The coordinate [itex] \theta [/itex] of the product is a function of the information in the factors, so we should write it as a function [itex] \theta(....)[/itex]. What should the arguments of that function be?

    The arguments of the function [itex] \theta(....) [/itex] should not include any variables that denote points on the 2-D plane. This is because [itex]\theta [/itex] is supposed to be a function that results from composing two other functions and there is nothing in that thought that says we only compose them at a particular location [itex] (x,y) [/itex]. The parameters [itex] \alpha, \beta [/itex] are the designations of two elements in the group and (by analogy to the multiplication table for a finite group) the designation of the result is only a function of the designations of the two elements of the group that are the factors. So we should write [itex] \theta(\alpha,\beta) [/itex].

    In the examples in this chapter [itex] \theta(\alpha,\beta) = \alpha + \beta [/itex].

    Another special property of the examples in this chapter is that the coordinate of the identity function is always zero, i.e. [itex] T_x (x,y,0) = x, \ T_y(x,y,0) = y [/itex].

    I haven't dug out my copy of Cohen's book. As I recall, he goes into these matters in detail. In Emanuel's presentation, a "1-parameter continuous group" is simply a group of functions that map the 2D plane onto itself. He doesn't restrict these transformations to be nice in any way. (Think about how mathematicians can invent all sorts of crazy functions to disturb people.) I think Cohen has a more restrictive definition. From that definition, he shows (as I recall) that one can always assign the coordinates for the functions in the group in such a way that [itex] \theta(\alpha,\beta) = \alpha + \beta [/itex]. I found that very counterintuitive. If he gave a proof, I got lost in the Greek letters.

    Both Cohen and Emanuel adopt the convention that [itex] T(x,y,\alpha) [/itex] will be the identity function when [itex] \alpha = 0 [/itex]. It isn't controversial that this can be arranged. If you had assigned coordinates so that [itex] T(x,y,35.2) [/itex] was the identity function, you could make a new assignment of coordinates by subtracting [itex] 35.2 [/itex] from the original coordinate assignments.

    Let's look briefly at a 2-parameter group of transformations. Why? Because I can read the mind of people who think like physicists. Perhaps people who think that way didn't read past the place where I called the parameter of the group a "coordinate". They already have in mind that the parameter of the group is "time" and that as you vary "time" [itex]\alpha[/itex], the function [itex] T(x,y,\alpha) [/itex] is just a way to generate a position vector that starts at [itex] (x,y) [/itex] at time 0 and moves elsewhere as time progresses. That view needs a slight modification.

    Let [itex] D [/itex] be the group whose elements are all functions that map the 2D plane onto itself by translating each point a given distance in a given direction. Also include the identity transformation as one of them. There are various ways to designate the elements of this group with coordinates. One could adopt a polar coordinate style scheme using the magnitude and direction of the displacement. It seems simplest to use cartesian style coordinates [itex] (\alpha_x,\alpha_y) [/itex] where each coordinate gives the displacement the function makes in the respective coordinate.

    If we must write that out, let's do it as 2 coordinate equations:

    [itex] T_x(x,y,\alpha_x,\alpha_y) = x + \alpha_x [/itex].
    [itex] T_y(x,y,\alpha_x,\alpha_y) = y + \alpha_y [/itex]

    This method of assigning coordinates makes [itex] T(x,y,0,0) [/itex] the identity function.

    The group operation is still denoted as multiplication and implemented as the composition of functions. I won't write out an example of that in detail. I will write down the shorthand notation for it where we use a symbol like [itex] T(x,y,\alpha_x,\alpha_y) [/itex] to stand for a pair of coordinate functions and multiplication to stand for the operation of composing the functions.


    [itex] T(x,y,\theta_x,\theta_y) = T(x,y,\alpha_x,\alpha_y) T(x,y,\beta_x,\beta_y) [/itex]

    Again the question arises, what are the arguments of the [itex] \theta[/itex]'s ? In this particular example [itex] \theta_x = \alpha_x + \beta_x , \ \theta_y = \alpha_y + \beta_y [/itex].

    However in the general case they must be written as:

    [itex] \theta_x(\alpha_x,\alpha_y,\beta_x,\beta_y),\ \theta_y(\alpha_x,\alpha_y,\beta_x,\beta_y) [/itex]

    This is because you can't determine the result unless you specify the particular functions involved in the group operation, and you need two coordinates per function to specify them precisely.

    I haven't peeked at anything about 2-parameter groups yet, so I'm really curious if there is a theorem that says you can always assign coordinates so the functions [itex] \theta_x, \theta_y [/itex] have a simple form.
     
  21. Jun 28, 2013 #20

    Stephen Tashi

    Science Advisor

    Solution of ODEs by Continuous Groups by George Emanuel

    Chapter 2 Continuous One-Parameter Groups-I

    Meditation 6. Context For The "symbol of the infinitesimal transformation"


    Emanuel defines an infinitesimal transformation to be an expression that has some Leibnitzian [itex] \delta [/itex]'s sitting by themselves. The only "infinitesimal" things he clearly defines are the "infinitesimal elements" and the "symbol of the infinitesimal transformation", which is a differential operator. So I'll deal with the "symbol of the infinitesimal transformation" before worrying about the infinitesimal transformation. Before dealing with the "symbol of the infinitesimal transformation" itself, I'll devote this post to establishing the context for it.

    Taking a general view of the subject of this section, a group [itex] G [/itex] that has been defined as a set of functions on one set [itex] \Omega [/itex] can often be regarded (simultaneously) as a set of functions on a completely different set [itex] \Psi [/itex]. It's useful to have a definition expressing the idea that the functions of [itex] G [/itex] have an orderly behavior as functions on another space [itex] \Psi [/itex] but don't necessarily form a group as functions on [itex] \Psi [/itex]. One definition that expresses this idea is the definition of a "group action" on a set [itex] \Psi [/itex].

    I won't try to explain the formalities of a "group action" in this post. (I myself would need to review them!) I'll only describe a simple way to regard a group of functions that are defined as mappings of the plane onto itself as also being functions that map real valued functions on the plane to other real valued functions.

    As usual, let [itex] G [/itex] be a 1-parameter group of functions that map the 2-D plane to itself. Let [itex] F(x,y) [/itex] be a real valued function on the plane. ([itex]F(x,y) [/itex] is not an element of [itex] G [/itex]. The function [itex] F [/itex] maps a pair of coordinates to a single real number, not to a pair of numbers.) You can imagine [itex] F(x,y) [/itex] displayed as a surface above the xy-plane in 3D by setting [itex] z = F(x,y) [/itex]. A function [itex] g [/itex] that is an element of the group [itex] G [/itex] maps points [itex] (x,y) [/itex] to different points. We can visualize the result as the surface of [itex] F(x,y) [/itex] being moved along with the points. So [itex] g [/itex] maps [itex] F(x,y) [/itex] to a different function.

    Let [itex] \Psi [/itex] be the set of all real valued functions on the plane. There is nothing in the definition of [itex]G [/itex] that says we must also regard an element of G as a function that maps [itex]\Psi[/itex] into itself. But, since definitions in mathematics are arbitrary, we may define a way to associate each element of [itex] G [/itex] with a function that does that.

    Since several types of functions are being discussed here, it may help if I start calling the elements of [itex] G [/itex] "transformations" instead of "functions". There is no difference in what the two words mean, but "transformations" reminds us that they are functions that map the plane to itself.

    To give a definition in precise terms, let's first use the notation for 1-parameter transformations [itex] T(x,y,\alpha) [/itex] that doesn't list both its coordinate functions. We define how [itex] T(x,y,\alpha) [/itex] maps the function [itex] F(x,y) [/itex] to another function by saying that it sends [itex] F(x,y) [/itex] to the "new" function [itex] G(x,y) = F( T(x,y,\alpha) )[/itex].

    If we want to show the details, we make the definition that the transformation
    [itex] T(x,y,\alpha) = (\ T_x(x,y,\alpha),\ T_y(x,y,\alpha) \ ) [/itex]
    "acts" to map the function [itex] F(x,y) [/itex] to the function [itex] G(x,y) [/itex] given by
    [itex] G(x,y) = F( T_x(x,y,\alpha), T_y(x,y,\alpha) )[/itex]

    To relate this to the book: on page 13, section 2.3 Global Group Equations, Emanuel considers a function denoted by [itex] f(x_1,y_1) [/itex]. This amounts to the same thing as the function [itex] F( T_x(x,y,\alpha),T_y(x,y,\alpha) ) [/itex], since the coordinates [itex] (x_1,y_1) [/itex] are understood to be the result of transforming the point [itex] (x,y) [/itex] by a 1-parameter transformation.

    Let's do some examples of transformations "acting" on functions. I'll use lowercase letters for the real valued functions. Even though I haven't defined an "action", I'll use some notation for it, which employs a period ".". The notation [itex] g(x,y) = f(x,y).T(x,y,\alpha) [/itex] indicates that the 1-parameter transformation [itex] T(x,y,\alpha) [/itex] "acts" to map the function [itex] f(x,y) [/itex] to the function [itex] g(x,y) [/itex]. (It might seem more natural to write the transformation "[itex]T[/itex]" on the left hand side of the function [itex] f[/itex]. I may explain in a later post why it's better to write it on the right side.)

    Example 6.1 [itex] f(x,y) = 3x + y^2 [/itex]. Let [itex] T(x,y,\alpha)[/itex] be an element of the rotation group [itex]S [/itex] defined in a previous post.

    [itex] g(x,y) = f(x,y).T(x,y,\alpha) = f ( T_x(x,y,\alpha),T_y(x,y,\alpha)) [/itex]
    [itex] = 3(T_x(x,y,\alpha)) + ( T_y(x,y,\alpha))^2 [/itex]
    [itex] = 3 ( x \cos(\alpha) - y \sin(\alpha)) + ( x \sin(\alpha) + y \cos(\alpha) )^2 [/itex]
    [itex] = 3x \cos(\alpha) - 3y \sin(\alpha) + x^2 \sin^2(\alpha) + y^2 \cos^2(\alpha) + 2xy \sin(\alpha)\cos(\alpha) [/itex]

    In the above example, it may seem that a "simple" 2 variable polynomial function [itex] f [/itex] has been mapped to a complicated trig function. However, keep in mind that [itex] \alpha [/itex] is a constant because we are looking at what a particular element of the group [itex] S [/itex] does. So the messy looking [itex] g(x,y) [/itex] is also a polynomial function because the terms involving [itex] \alpha [/itex] are constants.


    Two simple, yet important examples:

    Example 6.2 [itex] f(x,y) = x [/itex]

    [itex] f(x,y).T(x,y,\alpha) = f ( T_x(x,y,\alpha),T_y(x,y,\alpha)) = T_x(x,y,\alpha) [/itex]


    Example 6.3 [itex] f(x,y) = y [/itex]

    [itex] f(x,y).T(x,y,\alpha) = f ( T_x(x,y,\alpha),T_y(x,y,\alpha)) = T_y(x,y,\alpha) [/itex]


    Example 6.4: Let [itex] T [/itex] be an element of the rotation group [itex] S [/itex]. The elements of [itex] S [/itex] move a point [itex] (x,y) [/itex] to another point that is the same distance from the origin. So we would expect that a real valued function [itex] f(x,y) [/itex] that takes constant values on circles about the origin will be "transformed into itself" by [itex] T [/itex].

    [itex] f(x,y ) = x^2 + y^2 [/itex] is such a function.

    [itex] g(x,y) = f(x,y).T(x,y,\alpha) = f ( T_x(x,y,\alpha),T_y(x,y,\alpha)) [/itex]
    [itex] = (T_x(x,y,\alpha))^2 + (T_y(x,y,\alpha))^2 [/itex]
    [itex] = ( x \cos(\alpha) - y \sin(\alpha) )^2 + ( x \sin(\alpha) + y \cos(\alpha) )^2 [/itex]
    [itex] = x^2 \cos^2(\alpha) + y^2 \sin^2(\alpha) -2xy \cos(\alpha)\sin(\alpha)[/itex]
    [itex] \ \ + x^2\sin^2(\alpha) + y^2 \cos^2(\alpha) + 2xy \sin(\alpha) \cos(\alpha) [/itex]
    [itex] = x^2 \cos^2(\alpha) + x^2\sin^2(\alpha) + y^2 \sin^2(\alpha) + y^2 \cos^2(\alpha) [/itex]
    [itex] = x^2 (\cos^2(\alpha) + \sin^2(\alpha)) + y^2 ( \sin^2(\alpha) + \cos^2(\alpha)) [/itex]
    [itex] = x^2(1) + y^2(1) = x^2 + y^2[/itex]
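
    Here's a quick symbolic check of that computation, a sketch of my own with SymPy:

    Code (Python):

        # x**2 + y**2 is unchanged by composition with a rotation through
        # an arbitrary angle a.
        import sympy as sp

        x, y, a = sp.symbols('x y a', real=True)
        Tx = x*sp.cos(a) - y*sp.sin(a)
        Ty = x*sp.sin(a) + y*sp.cos(a)

        f = x**2 + y**2
        g = f.subs({x: Tx, y: Ty}, simultaneous=True)
        print(sp.simplify(g))  # x**2 + y**2, independent of a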


    The general idea of "invariants" is important in mathematics and physics, so I suspect functions that are "invariant" under all transformations of a particular group (as in the above example) are important.
     