Why can't a computer subtract? Or can it, and it's just easier to do it by adding?
Thanks to 2's complement, computers actually perform subtraction by doing addition. See http://en.wikipedia.org/wiki/2's_complement
From a mathematical point of view, addition and subtraction are the same operation, i.e.,
A - B = A + (-B)
That is one reason why computer science cares about signed versus unsigned numbers. :rofl:
It's not that it is any more difficult to subtract in binary than to add; it's simply that we don't need to build two sets of circuits when one set can do both operations by using the complement. By the way, the same trick works with decimal numbers, it's just not as easy as with binary.
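To make the idea concrete, here is a minimal Python sketch of the trick, assuming a fixed 8-bit word (the function names `twos_complement` and `subtract` are illustrative, not from any real hardware API): negating B by inverting its bits and adding 1, then adding, gives A - B using the adder alone.

```python
BITS = 8
MASK = (1 << BITS) - 1  # 0xFF: keeps results within an 8-bit word

def twos_complement(x):
    """Negate x: invert all bits, then add 1 (the complement step)."""
    return (~x + 1) & MASK

def subtract(a, b):
    """Compute a - b using only addition: a + (-b)."""
    return (a + twos_complement(b)) & MASK

print(subtract(9, 4))  # 5
print(subtract(4, 9))  # 251, which is -5 in 8-bit two's complement (256 - 5)
```

Note that when the result would be negative, the same bit pattern comes out; it is only the signed *interpretation* of the bits (251 vs. -5) that differs, which is why the signed/unsigned distinction matters.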