Dr Anderson and nullity

  • Thread starter yuiop

When Dr Anderson declared his discovery of 0/0 = nullity, he was almost universally derided.

Wikipedia article: http://en.wikipedia.org/wiki/James_Anderson_(computer_scientist)#cite_note-perplex9-10

Paper by Dr Anderson: http://www.bookofparagon.com/Mathematics/PerspexMachineIX.pdf

The Wikipedia article states that he discovered nothing more than the symbol "NaN" used in IEEE computer arithmetic to represent a "not a number" result for operations such as 0/0. This is true of Dr Anderson's ideas in their raw form: he does not really specify what to do with nullity once he has obtained it, so it is still essentially undefined.

I would like to present some arguments that Dr Anderson might be on to something, but it requires modifying his ideas a little. I hope there are interesting ideas here and that the ideas I present will not be instantly dismissed without some thought given to the details I present.

In his paper Anderson clearly states that 0/0 is not equal to 1.

I would like to modify that so that x takes on the value 1 when x := 0/0, where a := b means "a is assigned the value of b", rather than the usual a = b, which means a is equivalent to b in ordinary algebra. Most computer languages make this distinction between "assign" and "equals".
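The assign/equals distinction is visible in any imperative language; a minimal Python illustration (my example, not from the paper):

```python
x = 0         # assignment: the post's  x := 0
x = x + 1     # legal re-assignment, though "x = x + 1" is false as algebra
print(x == 1) # '==' is the equality test, the algebraic '='
```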

Now here is an example (and I will follow with a counter proof).

Let's say a computer programmer has written this pseudocode:

(1) [tex] x := a . b \left({b+c \over b . d}\right) [/tex]

A computer would immediately stop at this point and issue a "division by zero" or NaN error when b is zero.

Now if expression (1) had been written as the equivalent expression

(2) [tex] x:= {a . b +a . c \over d} [/tex]

then the computer would have got the correct answer x := ac/d when b = 0.

Expression (1) is sloppy, but humans are not perfect, and b might even be the result of two other variables, such as b = (e - f), so the obvious division-by-zero error might not have been picked up during testing.

Now if the computer were programmed to fully evaluate all terms within expression (1), it might come up with something like this:

(3) [tex] x := {a . 0^{2} \over d . 0} + {a . c . 0 \over d . 0} := {a . 0^{1} \over d} + {a . c . 0^{0} \over d} [/tex]

Now if the computer is programmed to evaluate 0^0 as 1 and 0^n as 0 for any non-zero n, it would not have "crashed" and would have got the correct answer x := ac/d when b = 0.
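To make this concrete, here is a toy Python sketch of the idea (my own formalisation, not anything from Anderson's paper; the names TrackedZero, collapse and _wrap are invented for illustration). Every value carries a coefficient and an integer power of zero, so multiplying or dividing by zero just adjusts that power instead of crashing:

```python
class TrackedZero:
    """Represents coeff * 0**zpow, with the convention that 0**0 == 1."""

    def __init__(self, coeff, zpow=0):
        self.coeff = coeff
        self.zpow = zpow

    def __mul__(self, other):
        other = _wrap(other)
        return TrackedZero(self.coeff * other.coeff, self.zpow + other.zpow)

    __rmul__ = __mul__

    def __truediv__(self, other):
        other = _wrap(other)
        return TrackedZero(self.coeff / other.coeff, self.zpow - other.zpow)

    def collapse(self):
        """Turn the tracked value back into an ordinary number."""
        if self.zpow == 0:       # 0**0 is taken as 1 here
            return self.coeff
        if self.zpow > 0:        # a genuine surviving factor of zero
            return 0
        raise ZeroDivisionError("net power of zero is negative")


def _wrap(x):
    return x if isinstance(x, TrackedZero) else TrackedZero(x, 0)


# Expression (3) term by term, with b = 0 stored as 0**1 rather than 0.0:
# x = a.b.b/(b.d) + a.b.c/(b.d)
a, c, d = 5.0, 3.0, 2.0
b = TrackedZero(1, 1)

term1 = (a * b * b / (b * _wrap(d))).collapse()   # a.0^1/d   -> 0
term2 = (a * b * c / (b * _wrap(d))).collapse()   # a.c.0^0/d -> a*c/d
x = term1 + term2
print(x)  # 7.5, i.e. a*c/d -- no crash even though b is zero
```

Each term is collapsed to an ordinary number before the terms are added, which matches the precedence rule the post describes: zeros are cancelled within a term, and only then are terms summed.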


Now for the counter proof:

Let's say we have the expression 0 = x . 0

Now let's say we know for a fact that x has the value 42. When we divide both sides of the expression 0 = 42 . 0 by zero we get

0/0 = 42 . 0/0

so 1 = 42 if we assume 0/0 = 1 which is obviously NOT TRUE!

The difference from the earlier argument is that here we are using the algebraic expression a = b, where "=" means "is equivalent to", rather than the computer expression a := b, where ":=" means "assign the value of the expression on the right to the variable on the left".

In normal algebra, using "=" rather than ":=", division by zero is still forbidden.

For example, if we ask "what is the value of x?" in the equation 0 = x . 0, then x is an unknown value, and we would express this as x := 0/0 in computer language to get x := 1.

This is made clearer if we express the equation entirely in terms of variables: it becomes y = x . y, and then x := y/y, which gives x := 1 for all y.

Now if we "know" the value of x is 42, the expression becomes z := 42 . y, which means "assign the value of 42 . y to z"; when y is zero, this gives z := 0.

-----------------------------------------------------------------------------

In a calculator or computer there are precedence rules, such that multiplication is evaluated before addition in an expression. The computer program would have an additional precedence rule: multiplication and division by zero are evaluated after multiplication by ordinary numbers within each term, and before terms are added. Care has to be taken to sum the powers of zero because, as I hope was made clear above, 0 . 0 or 0^2 is not always the same as 0.

Within these rules [itex]n^1[/itex] is always n and [itex]n^0[/itex] is always 1 for any n, including n = 0, which is not how 0^0 is treated in ordinary algebra, where it is usually left undefined.

In IEEE arithmetic, x := n/0 := infinity for n > 0 and x := n/0 := NaN for n = 0.
Here, x := n/0 := infinity for n > 0 and x := n/0 := 1 for n = 0.

In IEEE arithmetic, x := (n/0) . 0 := NaN for any n.
Here, x := (n/0) . 0 := n for any n.
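The IEEE side of this comparison can be checked directly in Python, whose floats follow IEEE 754 once an infinity is in play (plain 1.0/0.0 raises ZeroDivisionError in Python rather than returning inf):

```python
import math

inf = math.inf                # what IEEE gives for n/0 when n > 0
print(inf * 0.0)              # nan -- the factor n is unrecoverable
print(math.isnan(inf * 0.0))  # True
```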

If (n/0) . 0 is evaluated in two stages on different lines of the program, there is a problem: at the first stage it might get infinity, and then get infinity . 0 = 1 at the second stage, the factor n having been lost. A possible workaround might be to store the intermediate result internally as n/0 (flagging a non-fatal "potential division by zero" warning) and proceed until the complete expression is evaluated, clearing the warning.

This is not meant to be a complete, final, or formal "proof", just food for thought.

Further sophistications might include applying L'Hôpital's rule when indeterminate results are discovered, to provide a more robust system that is not crashed by a simple expression like (1) above.
 

Answers and Replies

interesting...

Well, I have always felt uneasy with saying "you just can't divide zero by zero, it doesn't mean anything" or whatever. I think what is interesting here is, if I understand you correctly, that you are making a distinction between simply dividing by the number zero and dividing by a variable whose value happens to "become" zero: it works when other values are put into the variable, so you define it at zero so as to make it work. It reminds me of using a limiting sequence of rationals approaching Pi in 2^x to define 2^Pi. I'm thinking that this way of redefining x/x where x is zero might depend on the continuity of the rest of the equation that x/x is embedded in, or maybe this is all a crazy idea and I'm just rambling.

Well, good luck with the theory, I'm going to go work on my own theory now.....

approx
 
arildno
When Dr Anderson declared his discovery of 0/0 = nullity, he was almost universally derided.
As he should be. There is nothing worthy to be discussed here.
 
cristo
This has also been discussed to death here before (probably when he was first in the news!). You should do a search and have a read of that thread.
 
CRGreathouse
Let's say a computer programmer has written this pseudocode:

(1) [tex] x := a . b \left({b+c \over b . d}\right) [/tex]

A computer would immediately stop at this point and issue a "division by zero" or NaN error when b is zero.

Now if expression (1) had been written as the equivalent expression

(2) [tex] x:= {a . b +a . c \over d} [/tex]

then the computer would have got the correct answer x := ac/d when b = 0.

Expression (1) is sloppy, but humans are not perfect, and b might even be the result of two other variables, such as b = (e - f), so the obvious division-by-zero error might not have been picked up during testing.

Now if the computer were programmed to fully evaluate all terms within expression (1), it might come up with something like this:

(3) [tex] x := {a . 0^{2} \over d . 0} + {a . c . 0 \over d . 0} := {a . 0^{1} \over d} + {a . c . 0^{0} \over d} [/tex]

Now if the computer is programmed to evaluate 0^0 as 1 and 0^n as 0 for any non-zero n, it would not have "crashed" and would have got the correct answer x := ac/d when b = 0.
Your suggestion, then, is to have computers symbolically evaluate expressions? This seems to have little in common with Dr. Anderson's oddball suggestion of 'nullity'.

An expression like
[tex] x := a . b \left({b+c \over b . d}\right) [/tex]
would conventionally be calculated as
[tex]t_0 := b+c[/tex]
[tex]t_1 := bd[/tex]
[tex]t_2 := t_0/t_1[/tex]
[tex]t_3 := ab[/tex]
[tex]x := t_3t_2[/tex]
(using juxtaposition, as usual, for multiplication). [itex]t_1[/itex] would evaluate to 0 long before there would be an opportunity for canceling it.
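That order of evaluation is easy to mimic in Python; a sketch with b = 0 and arbitrary values for the other variables (my example):

```python
a, b, c, d = 5.0, 0.0, 3.0, 2.0

t0 = b + c        # 3.0
t1 = b * d        # 0.0
t3 = a * b        # 0.0
try:
    t2 = t0 / t1  # aborts here, before any cancellation is possible
    x = t3 * t2
except ZeroDivisionError:
    x = None      # on IEEE float hardware this would be inf, then nan
print(x)  # None
```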
 
Hurkyl
When Dr Anderson declared his discovery of 0/0 = nullity, he was almost universally derided.
He made a fool of himself through the claims he made and the hype he stirred up -- the derision was well-earned.
 
To take a less mathematical look at it: I have a program that aids my poker playing. It shows me, for example, how often a player raises preflop; typically a very aggressive player might raise preflop 20% of the time, while a passive player might raise less than 5% of the time. I get this value as (number of preflop raises)/(number of games played). Now if a player has not played before, this would come out as 100% -- extremely aggressive -- if 0/0 = 1. So NaN is appropriate, but "undefined" would make more sense (to me) in my example. So in my example 0/0 = "no idea how aggressive the player is", which is not very helpful when he has just pushed all his chips into the pot!!
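The two conventions for a brand-new player can be put side by side in a short Python sketch (variable names invented):

```python
raises_pre, games = 0, 0     # a player we have never seen before

# IEEE/Python behaviour: 0/0 is an error (NaN on float hardware)
try:
    freq = raises_pre / games
except ZeroDivisionError:
    freq = None              # "no data yet"

# Under the 0/0 = 1 convention the same player reads as 100% aggressive:
naive_freq = 1 if (raises_pre, games) == (0, 0) else raises_pre / games

print(freq, naive_freq)  # None 1
```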
 
Your suggestion, then, is to have computers symbolically evaluate expressions? This seems to have little in common with Dr. Anderson's oddball suggestion of 'nullity'.

An expression like
[tex] x := a . b \left({b+c \over b . d}\right) [/tex]
would conventionally be calculated as
[tex]t_0 := b+c[/tex]
[tex]t_1 := bd[/tex]
[tex]t_2 := t_0/t_1[/tex]
[tex]t_3 := ab[/tex]
[tex]x := t_3t_2[/tex]
(using juxtaposition, as usual, for multiplication). [itex]t_1[/itex] would evaluate to 0 long before there would be an opportunity for canceling it.
Well, as mentioned before, the rules of precedence would have to be modified slightly. I picture the evaluation being carried out more like this:

[tex] x := a . b \left({b+c \over b . d}\right) [/tex]
would (un)conventionally be calculated with b=0 as
[tex]t_0 := b+c := 0+c := c[/tex]
[tex]t_1 := bd := 0 . d [/tex] (the zero is maintained, like the imaginary part of a complex number)
[tex]t_2 := t_0/t_1 := c/(0.d) := e/0[/tex] where e = c/d
[tex]t_3 := ab := a.0[/tex]
[tex]x := t_3.t_2 := a.0.e/0 := a.e.0^{0} := ae[/tex]

This is a slightly improved version of the method I showed in the first post, which is more consistent with new examples I have tested it on.
 
Hurkyl
As others have said, your proposal doesn't look anything like Dr. Anderson's.

How would your proposal attempt to evaluate

(b + b) / b

or

(b - b) / b

when b = 0? And what about

(b - b) / (b - 1)

when b = 1?
 
CRGreathouse
kev, don't get me wrong -- I think it's at least interesting to consider such a system. I've been trying to make it work in my head: having 'shadow values' representing what something was before it turned zero, plus an index of zero-ness (number of times you can divide out zero before getting the 'shadow value'). But I'm not convinced that they work as intended, nor that they're useful in general.

But there are some concerns:
1. You shouldn't compare your system to Dr. Anderson's; they're not related, mechanically or philosophically.
2. You need to show how your system works in general, and that your system works as intended on interesting examples.
3. You should show what properties your system has. IEEE floating-point is commutative but not associative, for example; is yours either? (You can exclude, for a moment, zero-overflow.)
4. It would be interesting to comment on software and hardware efficiency. I could imagine this running 2-4 times slower than normal math with software emulation... is that how you see it?
 
