Getting from complex domain to real domain

jaydnul
Hi!

I am ok with understanding Euler's formula and how it's proven. It is basic mathematical operations that are made possible by the characteristics of i, cos, sin, and exp.

What still makes me uncomfortable is the jump we make at the very beginning or end of calculations, basically ##A\cos x \Leftrightarrow Ae^{jx}##. The usual explanation is that we take the "real" part of the exponential, and Euler's formula is used to help with this.

But for my complete understanding, taking the real part of something just isn't a "normal" mathematical operation, if that makes sense (it was invented for dealing with complex numbers). Is there any other explanation for the transition ##A\cos x \Leftrightarrow Ae^{jx}##, and why we can make it?
 
jaydnul said:
Is there any other explanation for the transition ##A\cos x \Leftrightarrow Ae^{jx}##, and why we can make it?
It is linear algebra. The vectors ##\vec{1}## and ##\vec{\mathrm{i}}## are linearly independent over the real numbers. That means that any equation with real coefficients
$$
\alpha \vec{1} + \beta \vec{\mathrm{i}} = \alpha' \vec{1} +\beta' \vec{\mathrm{i}}
$$
implies
$$
(\alpha-\alpha')\cdot \vec{1} + (\beta-\beta')\cdot \vec{\mathrm{i}}=\vec{0}
$$
and therefore ##\alpha=\alpha' ## and ##\beta=\beta'## by linear independence.
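To connect this back to the question: Euler's formula expresses ##Ae^{jx}## in this basis,
$$
Ae^{jx} = A\cos(x)\cdot \vec{1} + A\sin(x)\cdot \vec{\mathrm{i}},
$$
so whenever a calculation ends with ##Ae^{jx} = u\cdot \vec{1} + v\cdot \vec{\mathrm{i}}## for real ##u## and ##v##, linear independence forces ##u=A\cos(x)## and ##v=A\sin(x)##. "Taking the real part" is just reading off the coefficient of ##\vec{1}##.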
 
Another way of looking at the complex numbers is ##\mathbb{C}=\mathbb{R}[T]/\langle T^2+1 \rangle,## the quotient ring of the polynomials over the real numbers in one variable ##T.## A complex number is thus a polynomial ##\alpha+\beta\cdot \vec{\mathrm{i}} =\alpha +\beta \cdot T## where we identify ##T^2## with ##-1.## Since a polynomial of degree less than two is divisible by ##T^2+1## only if it is the zero polynomial, we can conclude from ##\alpha+\beta\cdot \vec{\mathrm{i}}=\alpha+\beta\cdot T=0## that ##\alpha = \beta=0.##
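To see the reduction rule in action, here is a minimal Python sketch (the QuotientNumber class is a hypothetical illustration, not anyone's library): multiplying residue classes ##\alpha+\beta T## while reducing ##T^2## to ##-1## reproduces ordinary complex multiplication.

```python
# Minimal sketch of R[T]/<T^2 + 1>: a residue class a + b*T,
# with every occurrence of T^2 reduced to -1.

class QuotientNumber:
    """Residue class a + b*T in R[T]/<T^2 + 1>."""
    def __init__(self, a, b):
        self.a = a  # coefficient of 1
        self.b = b  # coefficient of T

    def __add__(self, other):
        return QuotientNumber(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + bT)(c + dT) = ac + (ad + bc)T + bd*T^2; reduce T^2 to -1.
        return QuotientNumber(self.a * other.a - self.b * other.b,
                              self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"{self.a} + {self.b}*T"

# The reduction rule reproduces complex multiplication:
print(QuotientNumber(1, 2) * QuotientNumber(3, 4))  # -5 + 10*T
print((1 + 2j) * (3 + 4j))                          # (-5+10j)
```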
 
jaydnul said:
Hi!

I am ok with understanding Euler's formula and how it's proven. It is basic mathematical operations that are made possible by the characteristics of i, cos, sin, and exp.
Good. That is the hard part.
jaydnul said:
What still makes me uncomfortable is the jump we make at the very beginning or end of calculations, basically ##A\cos x \Leftrightarrow Ae^{jx}##. The usual explanation is that we take the "real" part of the exponential, and Euler's formula is used to help with this.

But for my complete understanding, taking the real part of something just isn't a "normal" mathematical operation, if that makes sense (it was invented for dealing with complex numbers).
It is very normal. If you have a point in two-dimensional space, ##(x,y) \in \mathbb{R}\times\mathbb{R}##, it is completely normal to look at its ##x## value. So looking at the real part of ##Ae^{jx} = (A\cos(x), A\sin(x))## is normal.
(The question of how and why it was invented is a historical question. It is now standard mathematics, which is all that matters for this discussion.)
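As a quick numerical sanity check of that picture, here is a minimal Python sketch (standard library only; the values of ##A## and ##x## are arbitrary):

```python
# Check numerically that the real part of A*e^{jx} is the
# x-coordinate A*cos(x), and the imaginary part is A*sin(x).
import cmath
import math

A, x = 2.0, 0.7
z = A * cmath.exp(1j * x)        # the point (A*cos(x), A*sin(x)) in the plane
print(z.real, A * math.cos(x))   # both 1.5296...
print(z.imag, A * math.sin(x))   # both 1.2884...
```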
 