He makes this claim in a pair of papers: Perspex Machine VIII: Axioms of Transreal Arithmetic and Perspex Machine IX: Transreal Analysis on his personal web site. The papers were or will be published in the Proceedings of the Society of Photo-Optical Instrumentation Engineers (SPIE), which, I must admit, is not one of the mathematical journals with which I am familiar.

The first paper introduces a formal system which is vaguely related to the standard axioms of arithmetic with which everyone is familiar. However, it would need to be studied as a totally separate system, since it has a number of significant differences which I think make its usefulness doubtful. For an excellent overview of the major problems, read the Open Letter to James Anderson.

There are, IMHO, some major problems even without delving deeply into the mathematical structures which Dr Anderson's formal system is **not**.

The first, and most striking, problem is that the 'what you do to one side you do to the other' rule taught to school children does not work. This is caused by a special non-number that Dr Anderson introduces, called *nullity* and written `Φ`.

For example, it is not the case that if `a + b = a + c` then `b = c`. Any school child will be familiar with the idea that you could subtract `a` from both sides of that equation to reveal that `b = c`.

Unfortunately, the introduction of `Φ` means that if `a = Φ`, then `b` and `c` can be absolutely anything at all. This occurs because one of the axioms of Dr Anderson's system is that `Φ + a = Φ` (and addition is commutative).
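A minimal sketch of that axiom in Python (my own code and names, not Dr Anderson's), using a sentinel object to stand in for nullity:

```python
# A sketch of the axiom Φ + a = Φ; NULLITY plays the role of Φ.
NULLITY = object()

def tr_add(a, b):
    # nullity absorbs everything under addition (in either position,
    # since addition is commutative)
    if a is NULLITY or b is NULLITY:
        return NULLITY
    return a + b

# With a = Φ, "a + b = a + c" holds no matter what b and c are,
# so you cannot cancel a to conclude b = c:
print(tr_add(NULLITY, 5) is tr_add(NULLITY, 99))  # True, yet 5 != 99
```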

So if you are going to subtract something from both sides of an equation, you need to make sure that it's not `Φ`. That's a little like the following case in regular arithmetic, where you have to ensure that `a` is not `0`: if `b / a = c / a` then `b = c`. Under Dr Anderson's system things are even more complex: `a` must not be `0` or `Φ` or positive or negative infinity (also non-numbers that Dr Anderson introduces in the first paper, with related axioms).
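Ordinary IEEE 754 floats already show the same hazard with infinity, as a quick Python check (my example, not from the papers) demonstrates:

```python
# With a = infinity, b / a == c / a holds for ANY finite b and c
# (both sides are 0.0), so "cancelling" a would wrongly give b == c.
a = float('inf')
b, c = 1.0, 2.0
print(b / a == c / a)  # True -- both sides are 0.0
print(b == c)          # False
```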

So just regular work with equations gets a little tricky.

The "problem" that Dr Anderson wishes to "solve", it appears, is that nobody (though he seems particularly worried about computers) can divide by 0. This is a well-known fact; you might think of it as an axiom: you can't divide by 0, or, put another way, the result of dividing by 0 is undefined.

People "solve" this problem when they see a divide by zero by saying "that doesn't work, then"; computers "solve" it with an exception, or error, or by assigning the result of a divide by zero the special name `NaN` (which stands for Not a Number). A computer program has to be designed either to never divide by zero (many programs specifically check whether they are about to, and signal an error), or to deal with the exception; sometimes they'll just crash.
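Both behaviours can be seen from Python (strictly, IEEE 754 yields infinity for a nonzero number divided by zero and reserves NaN for cases like 0/0, but the absorbing behaviour is the same):

```python
# Python's own division raises an exception on divide by zero:
try:
    result = 1.0 / 0.0
except ZeroDivisionError:
    result = None
print(result)      # None -- the division never produced a value

# IEEE 754 NaN can be constructed directly, and it absorbs arithmetic:
nan = float('nan')
print(nan + 1.0)   # nan
print(nan == nan)  # False -- NaN isn't even equal to itself
```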

Dr Anderson's solution is that a computer should be allowed to divide by zero, and instead of an exception, a crash, etc. you'll get the result `Φ`. In the BBC article he says:

"Imagine you're landing on an aeroplane and the automatic pilot's working," he suggests. "If it divides by zero and the computer stops working - you're in big trouble. If your heart pacemaker divides by zero, you're dead."

OK, Dr A., so how does `Φ` solve this?

I can give you the answer right here: it doesn't. And that's because Dr A's `Φ` is *cancerous*. As soon as a variable becomes `Φ`, everything it touches becomes `Φ`. It's the numerical equivalent of King Midas: everything it touches turns to `Φ`.

That's because of two axioms in the first paper: `Φ + a = Φ` and `Φ × a = Φ`.
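Here is a toy sketch (my code and names, not Dr Anderson's) of how those two axioms let a single `Φ` poison an entire calculation:

```python
NULLITY = object()  # stands in for Φ

def tr_add(a, b):
    # Φ + a = Φ (either operand)
    return NULLITY if (a is NULLITY or b is NULLITY) else a + b

def tr_mul(a, b):
    # Φ × a = Φ (either operand)
    return NULLITY if (a is NULLITY or b is NULLITY) else a * b

# A made-up autopilot-style calculation: one Φ anywhere in the inputs
# and the final answer is Φ, whatever the other values were.
airspeed = NULLITY  # suppose an earlier divide by zero produced Φ
required_speed = tr_add(tr_mul(airspeed, 1.2), 150.0)
print(required_speed is NULLITY)  # True -- the Φ spread to the result
```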

So, basically, instead of getting an exception or error, Dr Anderson's arithmetic gets rid of the problem and replaces it with `Φ`. From a programming perspective it's irrelevant: if your auto-pilot suddenly computes that the required speed is `Φ`, or your pacemaker wants `Φ` beats per minute, it's useless.
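The defensive code looks the same either way. A hypothetical check (my function name and values, shown for the NaN case):

```python
import math

# Whether the poison value is called NaN or Φ, the program still has
# to detect it and fall back to something safe -- renaming the value
# solves nothing. The speeds and fallback here are made up.
def safe_required_speed(computed, fallback=250.0):
    if math.isnan(computed):  # the check you need either way
        return fallback
    return computed

print(safe_required_speed(float('nan')))  # 250.0
print(safe_required_speed(300.0))         # 300.0
```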

The answer is simple: don't divide by zero. It's undefined!