Is there a theory about one-sided equations?

n_kelthuzad
Is there a theory about one-sided "equations"?

I have been working on infinity recently, trying to define the 'indirect' result of infinity as a 'range of numbers'. So it's like this: if there is a set A of infinitely many elements, then f(x) = a ∧ b ∧ c ∧ d … (a, b, c, d, … ∈ A);
However, one cannot say a=f(x) or b=f(x) and so on.
e.g. 1 = e^(2iπ)  (I know the argument, so there's no need to remind me of e^(0iπ))
and 1 = e^0.
Both equations are true, but only within the equations themselves. If you pull the exponents out:
2iπ ≠ 0  (well, that should be true if i has an 'impact' on the real plane).
And what would be the correct way to express this?

TY,
Victor Lu, 16, BHS, CHCH, NZ
 
Hello n_kelthuzad

In terms of a function: a function always has the property that it returns exactly one value for every possible input combination.

In terms of different inputs producing the same function value multiple times, or even infinitely many times (like a sine or cosine wave): if you want to find when this happens, suppose you have a function that is analytic and continuous (think of f(x) as something that can be written as a series expansion valid across the whole domain, which is usually the entire real line, i.e. the x-axis).

What you do is break the function up into 'invertible' parts. This requires calculus: you solve for where the derivative is zero. Once you have done this, the function is 'split' into pieces on each of which you can find an inverse.

In other words, if you have f(x) and one piece is where a < x < b (that is, x is between a and b), then on that piece you are given a value of f(x) and you want to recover x; you use more advanced mathematics to do this (usually taught in high school or university). If you want to know about this, I'll do my best to explain it as simply as I can.

So let's say you have a sine wave (which is basically what e^(ix) looks like in both its real and imaginary parts): you use the derivative to get the invertible pieces (for a cosine wave it's between 0 and π; for a sine wave it's between -π/2 and π/2 for the first piece, and then it keeps repeating in both directions forever), and then you use what is called a root finder, programmed into a computer, to get a solution on that piece.

Now this idea will let you figure out exactly how many solutions f(x) = a has over the whole domain of the function (all the x's you can use), and find the x values that give them. But usually, for certain functions, we know (from the nature of the derivatives) where we have to check, and using other mathematics and computers we get the answers quickly.
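To make the piecewise idea concrete, here is a minimal sketch in pure Python (the function names are my own, not from any library): split the domain at the zeros of the derivative, then run a bisection root finder on each invertible piece where a sign change occurs.

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi] by bisection (assumes a sign change)."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) == 0 or hi - lo < tol:
            return mid
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)   # root is in the upper half
        else:
            hi = mid                # root is in the lower half
    return 0.5 * (lo + hi)

# Solve cos(x) = 0.5 on [0, 4*pi].  cos is invertible on each interval
# between consecutive zeros of its derivative (-sin), i.e. on [k*pi, (k+1)*pi].
target = 0.5
g = lambda x: math.cos(x) - target
roots = []
for k in range(4):                  # pieces [0,pi], [pi,2pi], [2pi,3pi], [3pi,4pi]
    lo, hi = k * math.pi, (k + 1) * math.pi
    if g(lo) * g(hi) < 0:           # a sign change means one root in this piece
        roots.append(bisect(g, lo, hi))
print(roots)
```

Each invertible piece contributes at most one solution, which is why the derivative's zeros are the right places to cut.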

The same is also true for complex functions, but you need to use complex calculus rather than real calculus (that is, calculus that works on complex numbers); the idea, though, is the same.

So to express this (if the above is the right idea) we do this:

Let f(x) = a for some known f(x), where a is a constant, and assume the equation is valid (in other words, there is at least one value of x where f(x) = a).

Then what we do is find x0, x1, x2, ..., xn, ... such that

f(xi) = a for every xi (where i = 0 gives x0, i = 1 gives x1, and so on).

The above is how we say that f(x) = a has the solutions given by the values x0, x1, x2, all the way up to the number of solutions (which might be infinite!). Just to clarify, this means:

f(x0) = a
f(x1) = a
f(x2) = a
f(x3) = a

and so on and we always assume that every x0, x1, and so on is always different from the rest.
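As a concrete sketch of such a solution list (my own illustration, not standard library code): for f(x) = sin(x), every solution of sin(x) = a comes from asin(a) together with the wave's symmetry and periodicity, so x0, x1, x2, … can be generated directly.

```python
import math

def sin_solutions(a, n):
    """First n nonnegative solutions of sin(x) = a (assumes |a| < 1),
    in increasing order.  Every solution has the form
    asin(a) + 2*pi*k  or  pi - asin(a) + 2*pi*k  for an integer k."""
    base = math.asin(a)
    sols = []
    k = 0
    while len(sols) < n + 2:        # overshoot, then trim after sorting
        for x in (base + 2 * math.pi * k, math.pi - base + 2 * math.pi * k):
            if x >= 0:
                sols.append(x)
        k += 1
    return sorted(sols)[:n]

xs = sin_solutions(0.5, 4)          # x0, x1, x2, x3 with sin(xi) = 0.5
print(xs)
```

Here the distinct xi really are all different, and there are infinitely many of them, exactly as described above.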
 


Yes, thank you Chiro. However, I already know this, and it is not what I meant to find out.

What I am trying to do is to define the 'undefined' for things like 0/0 and ∞/∞ in the real projective plane.
I call this 'undefined' A, and A has these properties:
A ∩ ℝ, and every number in ℝ suits A; however, selecting a particular number in ℝ does not make A true.
e.g. (I use '=' here to mean only that the left side equals the right side)
0/0 '=' 6
6 '≠' 0/0
I think this has not much to do with calculus or complex analysis.
 


Well, if you wanted to do the kind of thing you are describing, you would need to come up with not only a representation but also an arithmetic that lets you do what you are trying to do.

In terms of complex analysis, what is done on the Riemann sphere is that we have a special point at the north pole of the sphere which basically corresponds to 'infinity'. You might want to look into how infinity is factored in this way:

http://en.wikipedia.org/wiki/Riemann_sphere
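As a rough sketch of the Riemann-sphere convention (the names here are my own, not from any library): division is extended so that z/0 = ∞ for z ≠ 0 and z/∞ = 0, while 0/0 and ∞/∞ are deliberately left undefined.

```python
# One single point at infinity, as on the Riemann sphere (hypothetical
# representation chosen for this sketch).
INF = "inf"

def ext_div(z, w):
    """Division on the extended complex plane C ∪ {∞}.
    Raises ValueError for the genuinely undefined cases."""
    if z == INF and w == INF:
        raise ValueError("inf/inf is undefined")
    if z == INF:
        return INF            # inf / finite = inf
    if w == INF:
        return 0              # finite / inf = 0
    if w == 0:
        if z == 0:
            raise ValueError("0/0 is undefined")
        return INF            # nonzero / 0 = inf
    return z / w
```

Note that even in this extended arithmetic, 0/0 and ∞/∞ stay undefined: the structure adds one point, it does not resolve the indeterminate forms.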

But ultimately, the reason we can't define 0/0 goes back to the ordinary algebra we use, which shows that if you do define 0/0 you get all kinds of weird things.
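One concrete example of such a 'weird thing' (a standard argument, sketched in my own notation): if 0/0 is assigned any single value k and the usual rules for fractions are kept, that value is forced and yet nothing distinguishes it.

```latex
\text{Suppose } \tfrac{0}{0} = k \text{ and the rule } \tfrac{a+b}{c} = \tfrac{a}{c} + \tfrac{b}{c} \text{ still holds. Then}
\[
k = \frac{0}{0} = \frac{0+0}{0} = \frac{0}{0} + \frac{0}{0} = 2k
\quad\Rightarrow\quad k = 0.
\]
\text{But division is defined by } \tfrac{a}{b} = x \iff a = b\,x,
\text{ and } 0 = 0 \cdot x \text{ holds for every } x,
\text{ so nothing singles out } 0 \text{ (or } 6\text{) as the value of } \tfrac{0}{0}.
```

This is exactly the one-sided behaviour described above: every number 'suits' 0/0, yet no particular number equals it.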

What you will want to do is come up with a new structure and a new algebra that has your properties. This means that you will need to rethink what your algebra actually is so that it has your property.

I should caution you, however, that at some point you will have to 'make sense' of your algebra and show that you can do useful things with it, like we do with normal algebra; if you can't, then you will need to think about why you are doing this.

Are you just trying to get a mathematical system that allows this behaviour in terms of algebra or is there another motive at hand?
 