Is nullity a valid concept in mathematics?


Discussion Overview

The discussion centers around the concept of "nullity" as proposed by Dr. Anderson, particularly in the context of dividing zero by zero (0/0). Participants explore the implications of this concept in mathematics and computer programming, examining both theoretical and practical aspects, including potential modifications to existing mathematical frameworks and programming practices.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • Dr. Anderson's assertion that 0/0 equals "nullity" has been met with skepticism, with some participants questioning the validity of this claim.
  • One participant suggests modifying Dr. Anderson's ideas to allow for the assignment of values in programming contexts, proposing that 0/0 could be treated differently under certain conditions.
  • Another participant expresses discomfort with the conventional view that dividing by zero is meaningless, suggesting that it might depend on the context in which the division occurs.
  • Concerns are raised about the implications of redefining division by zero in programming, particularly regarding how computers handle such operations and the potential for errors.
  • A counterproof is presented to illustrate the complications that arise when assuming 0/0 equals 1, emphasizing the distinction between assignment and equivalence in mathematical expressions.
  • Some participants argue that the discussion has been previously exhausted and question the relevance of revisiting the topic.

Areas of Agreement / Disagreement

Participants exhibit a range of views, with some supporting the exploration of nullity and its implications, while others dismiss the concept as unworthy of discussion. There is no consensus on the validity of Dr. Anderson's ideas or the proposed modifications.

Contextual Notes

The discussion reveals limitations in the assumptions about division by zero, particularly in the context of programming versus traditional mathematics. The implications of redefining operations involving zero remain unresolved, and the potential for errors in computational contexts is highlighted.

yuiop
Dr Anderson and "nullity"

When Dr Anderson declared his discovery of 0/0 = nullity, he was almost universally derided.

Wikipedia article: http://en.wikipedia.org/wiki/James_Anderson_(computer_scientist)#cite_note-perplex9-10

Paper by Dr Anderson: http://www.bookofparagon.com/Mathematics/PerspexMachineIX.pdf

The Wikipedia article states that he discovered nothing more than the symbol "NaN" used in IEEE computer arithmetic to represent a "not a number" error for division by zero. This is true of Dr Anderson's ideas in their raw form, and he does not really specify what to do with nullity once he has obtained it, so it is still essentially undefined.

I would like to present some arguments that Dr Anderson might be on to something, though it requires modifying his ideas a little. I hope there is something interesting here and that it will not be instantly dismissed without some thought given to the details I present.

In his paper Anderson clearly states that 0/0 is not equal to 1.

I would like to modify that so that x takes on the value 1 when x := 0/0, where a := b means "a is assigned the value of b", as opposed to the usual a = b, which means "a is equivalent to b" in ordinary algebra. Most computer languages make this distinction between "assign" and "equals".

Now here is an example (and I will follow with a counter-proof).

Let's say a computer programmer has written this pseudocode:

(1) x := a·b·((b + c)/(b·d))

A computer would immediately stop at this point and issue a "division by zero" or NaN error when b is zero.

Now if expression (1) had been written as the equivalent expression

(2) x := (a·b + a·c)/d

then the computer would have got the correct answer x := a·c/d when b = 0.

Now expression (1) is sloppy, but humans are not perfect, and b might even be the result of two other variables, such as b = (e − f), so the obvious division-by-zero error might not have been picked up during testing.

Now if the computer was programmed to fully evaluate all terms within expression (1), it might come up with something like this:

(3) x := a·0²/(d·0) + a·c·0/(d·0) := a·0¹/d + a·c·0⁰/d

Now if the computer is programmed to evaluate 0⁰ as 1 and 0ⁿ as 0 for any non-zero value of n, then it would not have "crashed" and would have got the correct answer x := a·c/d when b = 0.
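This bookkeeping can be sketched in code. The following is a minimal Python sketch of my reading of the scheme (not Dr Anderson's formalism): every value carries a count of explicit zero factors, multiplication adds the counts, division subtracts them, and a final collapse applies the rule 0⁰ = 1, 0ⁿ = 0 for n > 0. The `TrackedZero` class and all names in it are invented for illustration.

```python
class TrackedZero:
    """A value stored as coeff * 0**zpow, where zpow counts explicit zero factors."""

    def __init__(self, coeff, zpow=0):
        self.coeff = coeff   # the nonzero "shadow value"
        self.zpow = zpow     # how many factors of zero are attached

    @classmethod
    def of(cls, x):
        # An exact zero becomes coeff = 1 carrying one factor of zero.
        return cls(1, 1) if x == 0 else cls(x, 0)

    def __mul__(self, other):
        # Multiplication adds the zero powers.
        return TrackedZero(self.coeff * other.coeff, self.zpow + other.zpow)

    def __truediv__(self, other):
        # Division subtracts the zero powers instead of failing outright.
        return TrackedZero(self.coeff / other.coeff, self.zpow - other.zpow)

    def collapse(self):
        # Final evaluation: 0**0 = 1 by the proposed rule, 0**n = 0 for n > 0.
        if self.zpow == 0:
            return self.coeff
        if self.zpow > 0:
            return 0
        raise ZeroDivisionError("uncancelled division by zero")

# Expression (1): x := a*b*((b+c)/(b*d)) with b = 0.
a, b, c, d = 3.0, 0.0, 4.0, 2.0
A, B, C, D = (TrackedZero.of(v) for v in (a, b, c, d))
# Addition is folded by hand in this sketch (b + c = c, since b is exactly 0).
num = TrackedZero.of(b + c)
x = A * B * (num / (B * D))
print(x.collapse())   # 6.0, i.e. a*c/d
```

The zero powers cancel exactly once, so the collapse lands on the same a·c/d that expression (2) gives directly.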


Now for the counter proof:

Let's say we have the equation 0 = x·0.

Now let's say we know for a fact that x has the value 42. When we divide both sides of 0 = 42·0 by zero we get

0/0 = 42·0/0

so 1 = 42 if we assume 0/0 = 1, which is obviously NOT TRUE!

The difference from the earlier argument is that here we are using the algebraic expression a = b, where the symbol "=" means "is equivalent to", rather than the computer expression a := b, where the symbol ":=" means "assign the value of the expression on the right to the variable on the left".

In normal algebraic equations, using "=" rather than ":=", division by zero is still forbidden.

For example, if we ask "what is the value of x?" in the equation 0 = x·0, then x is an unknown value, and we would express this as x := 0/0 in computer language to get x := 1.

This is made clearer if we express the equation completely in terms of variables: it becomes y = x·y, and then x := y/y, which gives x := 1 for all y.

Now if we "know" the value of x is 42, then the expression becomes z := 42·y, which means "assign the value of 42·y to z"; when y is zero, this means z := 0.

-----------------------------------------------------------------------------

In a calculator or computer there are precedence rules such that multiplication is evaluated before addition in an expression. The computer program would have an additional precedence rule: multiplications and divisions by zero are evaluated after multiplication by ordinary numbers within each term, and before terms are added. Care has to be taken to sum the powers of zero, because, as I hope was made clear above, 0·0 (i.e. 0²) is not always the same as 0.

Within these rules n^1 is always n and n^0 is always 1 for any n including n = 0, which is not true in IEEE arithmetic or ordinary algebra.

In IEEE arithmetic, x := n/0 := ∞ for n > 0 and x := n/0 := NaN for n = 0.
Here, x := n/0 := ∞ for n > 0 and 0/0 := 1.

In IEEE arithmetic, x := (n/0)·0 := NaN for any n.
Here, x := (n/0)·0 := n for any n.
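The IEEE side of this contrast can be checked in pure Python, with one caveat: Python floats follow IEEE 754 for multiplication, but float division by zero raises an exception rather than returning ∞ or NaN, so the ∞ is written in directly.

```python
import math

# IEEE behaviour for the cases above. Python floats are IEEE 754 doubles,
# except that dividing by zero raises ZeroDivisionError instead of
# returning inf/NaN, so math.inf stands in for n/0 with n > 0.
pos = math.inf            # IEEE result of n/0 for n > 0
prod = pos * 0.0          # IEEE: (n/0)*0 = inf*0 = NaN, not n

print(math.isnan(prod))   # True

try:
    ind = 0.0 / 0.0       # IEEE would give NaN; Python raises instead
except ZeroDivisionError:
    print("0/0 raises in Python")
```

The key point survives the caveat: once n/0 has been collapsed to ∞, multiplying by 0 yields NaN, and the value of n is unrecoverable.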

If (n/0)·0 is evaluated in two stages on different lines of the program, then there is a problem, because the first stage might give ∞ (discarding n) and the second stage then gets ∞·0 = 1 rather than n. A possible workaround might be to store the intermediate result internally as n/0 (flagging a non-fatal "potential division by zero" warning) and proceed until the complete expression is evaluated, clearing the error.

This is not meant to be a complete, final, or formal "proof", but just food for thought.

Further sophistications might include applying l'Hôpital's rule when indeterminate results are discovered, to provide a more robust operating system that is not crashed by a simple equation like expression (1) of this discussion.
 
interesting...

Well, I have always felt uneasy with saying "you just can't divide zero by zero, it doesn't mean anything" or whatever. I think what is interesting here is that, if I understand you correctly, you are making a distinction between simply dividing by the number zero and dividing by a variable whose value happens to "become" zero, i.e. it works when other values are put into the variable, so you define it at zero so as to make it work. It kind of reminds me of using a limiting sequence of rationals x approaching Pi, and the values 2^x, to define 2^Pi. I'm thinking that this way of redefining x/x where x is zero might depend on the continuity of the rest of the equation that x/x is embedded in, or maybe this is all a crazy idea and I'm just rambling...

Well, good luck with the theory, I'm going to go work on my own theory now...

approx
 
kev said:
When Dr Anderson declared his discovery of 0/0 = nullity, he was almost universally derided.

As he should be. There is nothing worthy to be discussed here.
 
This has also been discussed to death here before (probably when he was first in the news!). You should do a search and have a read of that thread.
 
kev said:
Let's say a computer programmer has written this pseudocode:

(1) x := a·b·((b + c)/(b·d))

A computer would immediately stop at this point and issue a "division by zero" or NaN error when b is zero.

Now if expression (1) had been written as the equivalent expression

(2) x := (a·b + a·c)/d

then the computer would have got the correct answer x := a·c/d when b = 0.

Now expression (1) is sloppy, but humans are not perfect, and b might even be the result of two other variables, such as b = (e − f), so the obvious division-by-zero error might not have been picked up during testing.

Now if the computer was programmed to fully evaluate all terms within expression (1), it might come up with something like this:

(3) x := a·0²/(d·0) + a·c·0/(d·0) := a·0¹/d + a·c·0⁰/d

Now if the computer is programmed to evaluate 0⁰ as 1 and 0ⁿ as 0 for any non-zero value of n, then it would not have "crashed" and would have got the correct answer x := a·c/d when b = 0.

Your suggestion, then, is to have computers symbolically evaluate expressions? This seems to have little in common with Dr. Anderson's oddball suggestion of 'nullity'.

An expression like
x := a·b·((b + c)/(b·d))
would conventionally be calculated as
t_0 := b+c
t_1 := bd
t_2 := t_0/t_1
t_3 := ab
x := t_3t_2
(using juxtaposition, as usual, for multiplication). t_1 would evaluate to 0 long before there would be an opportunity for canceling it.
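The staged evaluation CRGreathouse describes can be seen directly by writing the temporaries out as ordinary statements. A short Python illustration (the values of a, b, c, d are chosen arbitrarily): each temporary is fully evaluated before the next, so the division fails at t2 and nothing downstream ever gets the chance to cancel the zero.

```python
# Conventional three-address lowering of x := a*b*((b+c)/(b*d)), with b = 0.
a, b, c, d = 3.0, 0.0, 4.0, 2.0

t0 = b + c            # 4.0
t1 = b * d            # 0.0 -- already an ordinary zero, history lost
try:
    t2 = t0 / t1      # division by zero happens here
    t3 = a * b
    x = t3 * t2
except ZeroDivisionError:
    print("division by zero at t2")
```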
 
kev said:
When Dr Anderson declared his discovery of 0/0 = nullity, he was almost universally derided.
He made a fool of himself through the claims he made and the hype he stirred up -- the derision was well-earned.
 
To take a less mathematical look at it: I have a program that aids my poker playing. It shows me, for example, how often a player raises preflop. Typically a very aggressive player might raise preflop 20% of the time, while a passive player might raise less than 5% of the time. I get this value from (number of preflop raises)/(number of games played). Now, if a player has not played before, this would come out as 100%, i.e. extremely aggressive, if 0/0 = 1. So NaN is appropriate, but "undefined" would make more sense (to me) in my example. So in my example 0/0 = "no idea how aggressive the player is", which is not very helpful when he has just pushed all his chips into the pot!
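The statistic described above could be coded along these lines. This is a small Python sketch (the function name is made up for illustration) in which the no-data case returns NaN, so downstream code can detect "unknown player" instead of mistaking him for a maximally aggressive one:

```python
import math

def raise_frequency(raises, games):
    """Fraction of games in which the player raised preflop."""
    if games == 0:
        return math.nan   # no games observed: no information at all
    return raises / games

print(raise_frequency(12, 60))            # 0.2  (aggressive: raises 20% of the time)
print(raise_frequency(0, 0))              # nan
print(math.isnan(raise_frequency(0, 0)))  # True
```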
 
CRGreathouse said:
Your suggestion, then, is to have computers symbolically evaluate expressions? This seems to have little in common with Dr. Anderson's oddball suggestion of 'nullity'.

An expression like
x := a·b·((b + c)/(b·d))
would conventionally be calculated as
t_0 := b+c
t_1 := bd
t_2 := t_0/t_1
t_3 := ab
x := t_3t_2
(using juxtaposition, as usual, for multiplication). t_1 would evaluate to 0 long before there would be an opportunity for canceling it.

Well, as mentioned before, the rules of precedence would have to be modified slightly. I picture the evaluation being carried out more like this:

x := a·b·((b + c)/(b·d))
would (un)conventionally be calculated with b = 0 as
t_0 := b + c := 0 + c := c
t_1 := b·d := 0·d (the zero is maintained, like the imaginary part of a complex number)
t_2 := t_0/t_1 := c/(0·d) := e/0, where e = c/d
t_3 := a·b := a·0
x := t_3·t_2 := a·e·0⁰ := a·e

This is a slightly improved version of the method I showed in the first post, and it is more consistent with the new examples I have tested it on.
 
As others have said, your proposal doesn't look anything like Dr. Anderson's.

How would your proposal attempt to evaluate

(b + b) / b

or

(b - b) / b

when b = 0? And what about

(b - b) / (b - 1)

when b = 1?
 
kev, don't get me wrong -- I think it's at least interesting to consider such a system. I've been trying to make it work in my head: having 'shadow values' representing what something was before it turned zero, plus an index of zero-ness (number of times you can divide out zero before getting the 'shadow value'). But I'm not convinced that they work as intended, nor that they're useful in general.

But there are some concerns:
1. You shouldn't compare your system to Dr. Anderson's; they're not related, mechanically or philosophically.
2. You need to show how your system works in general, and that your system works as intended on interesting examples.
3. You should show what properties your system has. IEEE floating-point is commutative but not associative, for example; is yours either? (You can exclude, for a moment, zero-overflow.)
4. It would be interesting to comment on software and hardware efficiency. I could imagine this running 2-4 times slower than normal math with software emulation... is that how you see it?
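The example in point 3 can be verified in a couple of lines, since Python floats are IEEE 754 doubles:

```python
# IEEE floating-point addition: commutative, but not associative.
a, b, c = 0.1, 0.2, 0.3

print(a + b == b + a)               # True:  commutative
print((a + b) + c == a + (b + c))   # False: (0.1+0.2)+0.3 rounds differently
```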
 
