What Are Examples of Non-Linear Operators in Mathematics?

mikeeey
Hello everyone.
If the derivative is a linear operator (a linear map), then what is an example of a non-linear operator?

Thanks.
 
##f(x) = x^2## for ##x\in \mathbb{R}##.
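A quick check (not part of the original reply) of why ##f(x)=x^2## is not linear: both additivity and homogeneity fail in general,
$$ f(x+y) = (x+y)^2 = x^2 + 2xy + y^2 \neq f(x) + f(y) \quad \text{whenever } xy \neq 0, $$
$$ f(cx) = c^2 x^2 \neq c\, f(x) \quad \text{whenever } x \neq 0 \text{ and } c \notin \{0,1\}. $$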
 
My point is: if linear algebra deals with vector spaces and linear maps, then what does nonlinear algebra deal with? Only nonlinear functions and nonlinear equations, i.e. nonlinear maps (transformations)?

Thanks
 
All affine maps from ##\mathbb R## to itself are of the form ##f(x)=ax+b## (the linear ones are exactly those with ##b=0##), so any map that does not look like this is neither linear nor affine. But that is a good question. I also think it is more accurate to say that the _differential_ is a linear map. And there is such a thing as linear maps defined on modules, and perhaps on other objects as well.
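To make the distinction concrete (standard facts, not claims made in this thread): the derivative satisfies the linearity identity, while an affine map with ##b \neq 0## already fails at the origin,
$$ \frac{d}{dx}\bigl(\alpha f + \beta g\bigr) = \alpha f' + \beta g', \qquad f(x) = ax + b \;\Rightarrow\; f(0) = b \neq 0, $$
and a linear map must send ##0## to ##0##.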
 
mikeeey said:
My point is: if linear algebra deals with vector spaces and linear maps, then what does nonlinear algebra deal with? Only nonlinear functions and nonlinear equations, i.e. nonlinear maps (transformations)?
I don't think there is a branch of mathematics called non-linear algebra. Linear algebra can be considered a subset of abstract algebra, but this isn't apparent from books on abstract algebra, because they focus on those parts of abstract algebra that aren't linear algebra.

Things that involve vector spaces but require techniques from topology are usually considered functional analysis rather than linear algebra.

I'm not sure what label is appropriate for the topic of non-linear maps between vector spaces. Some non-linear maps, like affine maps ##T(ax+by)=aT(x)+bT(y)-(a+b-1)T(0)##, antilinear maps ##T(ax+by)=\bar a\,T(x)+\bar b\,T(y)## (the bar denoting complex conjugation) and multilinear maps, e.g. the bilinear identity ##T(aw+bx,cy+dz)=ac\,T(w,y)+ad\,T(w,z)+bc\,T(x,y)+bd\,T(x,z)##, are similar enough to linear maps that I wouldn't hesitate to consider them part of linear algebra.

Actually, now that I think about it, I think I would consider arbitrary maps between vector spaces a part of linear algebra. The "linear" in linear algebra refers to the "linear structure" of the vector space, i.e. the addition and scalar multiplication operations, not to linear operators. I would expect that there isn't a whole lot we can say about arbitrary maps between vector spaces. We need to consider a smaller subset of maps (like linear, affine, antilinear or multilinear maps) to be able to say something interesting.
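As a side note (not from the thread), here is a minimal numerical sketch of how one might test the defining identities quoted above on sample vectors in ##\mathbb R^n##; the function names `looks_linear` and `looks_affine` are made up for illustration:

```python
import numpy as np

def looks_linear(T, n, trials=100, tol=1e-9, rng=np.random.default_rng(0)):
    """Heuristically check T(a*x + b*y) == a*T(x) + b*T(y) on random samples."""
    for _ in range(trials):
        a, b = rng.normal(size=2)
        x, y = rng.normal(size=n), rng.normal(size=n)
        if not np.allclose(T(a * x + b * y), a * T(x) + b * T(y), atol=tol):
            return False
    return True

def looks_affine(T, n, **kwargs):
    """T is affine iff x -> T(x) - T(0) is linear."""
    zero = np.zeros(n)
    return looks_linear(lambda x: T(x) - T(zero), n, **kwargs)

A = np.array([[2.0, 1.0], [0.0, 3.0]])
print(looks_linear(lambda x: A @ x, 2))        # True: a matrix map is linear
print(looks_linear(lambda x: A @ x + 1.0, 2))  # False: the translation breaks linearity
print(looks_affine(lambda x: A @ x + 1.0, 2))  # True: it is affine, though
print(looks_linear(lambda x: x**2, 2))         # False: the componentwise square
```

Passing the random test does not prove linearity, of course; it only fails to find a counterexample.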
 
In my opinion, linear algebra is about linear maps; the vector spaces are merely where they take place.

As to non-linear algebra, to me that would be non-commutative group theory, and "commutative algebra" (polynomial maps).

There is also a linear side to "commutative algebra". I.e. although polynomial maps are not linear, the ring of all polynomials is commutative and can be profitably considered as the coefficient ring for a "module", i.e. a commutative group with an action by that ring. Then there are linear maps of those modules for which the ring of polynomials behaves as scalars do for vector space maps.
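A standard example of this setup (my illustration, not spelled out in the post): a vector space ##V## over a field ##k##, together with a fixed linear operator ##T##, becomes a module over the polynomial ring ##k[x]## via
$$ p(x)\cdot v = p(T)\,v, \qquad p \in k[x],\ v \in V, $$
and the module homomorphisms between two such modules are exactly the linear maps that intertwine the chosen operators.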

So for me, basic algebra comes in two flavors: linear algebra (possibly generalized to modules over arbitrary commutative rings), where the fundamental tool is essentially the Euclidean algorithm, and non-commutative group theory, where the basic tool is the action of the group on various objects.

Matrices give rise to interesting examples of both theories, since matrices define linear maps, but groups of matrices, such as GL(n) (the invertible matrices) and SO(n) (rotations), define interesting non-commutative groups which act on vector spaces and on subspaces, as well as on tensor spaces.
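For a concrete instance of that non-commutativity (a standard example, not from the post): in ##GL(2,\mathbb R)##,
$$ A=\begin{pmatrix}1&1\\0&1\end{pmatrix},\quad B=\begin{pmatrix}1&0\\1&1\end{pmatrix},\quad AB=\begin{pmatrix}2&1\\1&1\end{pmatrix}\neq\begin{pmatrix}1&1\\1&2\end{pmatrix}=BA, $$
even though each of ##A## and ##B## still acts on ##\mathbb R^2## as a perfectly linear map.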

This point of view is spelled out in this introduction to course notes on my web page: (80006 is a typo for 8000)

http://alpha.math.uga.edu/~roy/80006a.pdf

Another answer (maybe more appropriate to the original question) is that, for some purposes, the most important operators are operators on function spaces, and there you have both linear and non-linear differential, integral (and other, as micromass illustrated) operators.
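To illustrate (standard examples, not taken from the linked notes): on a space of smooth functions, the operator ##L(f)=f''## is linear, while ##N(f)=f''+f^2## is not, since
$$ N(f+g) = f'' + g'' + (f+g)^2 \neq N(f) + N(g) \quad \text{whenever } fg \neq 0. $$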
 