
Patriot Missile system failures & the metric system

  1. Apr 24, 2006 #1

    Integral

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    Discovery Science did it to me again. I am now officially back on the anti-metric bandwagon.

    I learned that during the first Gulf War our Patriot missile systems were failing due to an inability to keep time accurately. When the errors were finally officially recognized, the word came down that the systems could not be run for long periods of time. What was not known was just what a “long period” was. They spoke of an incident where a Patriot missed badly after 100 hours of continuous operation.

    They finally identified the culprit as round-off error. They were running a counter in steps of .1 second, which is an infinitely repeating pattern in binary (0.000110011001100…) and cannot be precisely represented in a binary computer. The round-off accumulated over time, rendering the system useless.

    This is a very real-world example of why you should avoid using .1 as a basic step in any computation. An excellent alternative is to use a power-of-2 step like 1/16 or 1/32. While this size of step may seem a bit strange to the human mind, the CPU loves it.
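
    Here is a minimal sketch in C (just an illustration, not the actual Patriot code) of what the two kinds of step do when you accumulate them in single precision for 100 hours of simulated time:

    [code]
    #include <stdio.h>

    int main(void)
    {
        const long target_s = 100L * 3600L;      /* 100 hours, in seconds          */
        float t_tenth = 0.0f, t_sixteenth = 0.0f;

        for (long i = 0; i < target_s * 10; i++)
            t_tenth += 0.1f;                     /* 0.1 has no exact binary form   */

        for (long i = 0; i < target_s * 16; i++)
            t_sixteenth += 0.0625f;              /* 1/16 = 2^-4 is exact in binary */

        printf("0.1  s steps: %.3f s (should be %ld)\n", t_tenth, target_s);
        printf("1/16 s steps: %.3f s (should be %ld)\n", t_sixteenth, target_s);
        return 0;
    }
    [/code]

    The 1/16 sums come out exactly right because every intermediate value is a small multiple of a power of 2; the 0.1 sums drift because every single addition gets rounded.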

    Unfortunately, the metric system encourages the use of .1 as a fundamental step. While .1 is a nice multiple when you are doing arithmetic in your head, it is of no benefit, and indeed a source of errors, when the computer is doing the number crunching.

    It is interesting that the subdivisions of the inch are typically powers of 2, so the American system is inherently friendly to binary computers. Why in the world should we switch to the metric system and its impossible step?

    DOWN WITH THE METRIC SYSTEM!
     
  3. Apr 24, 2006 #2

    Ivan Seeking

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    What strikes me most about this? Even for industrial applications, I normally use 0.001 second increments at most. It's too bad that the US missile program can't keep up with me. :biggrin:
     
  4. Apr 24, 2006 #3

    -Job-

    User Avatar
    Science Advisor

    That's an excellent example of how round off errors and loss of precision can have some very undesirable effects. I don't think the answer is to abandon the metric system, since it is simpler for humans to deal with, in my opinion. :smile:
    No matter what unit system you use, there will always be tricky values.
     
  5. Apr 25, 2006 #4

    Integral

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    As long as you try to force a base 10 number system into a digital computer, you will be facing round-off errors. As long as precision is not a concern, this is fine. However, if you need precision in your computations, you need to consider how a computer handles numbers.

    It would be far and away better simply to abandon base 10 systems in favor of base 2 systems. This can all be completely invisible to the user of your software, so people do not have to give up decimal, just computers.
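
    Something along these lines (a toy C example, not anyone's production code): the counter steps by an exact binary fraction, and only the printout is decimal.

    [code]
    #include <stdio.h>

    int main(void)
    {
        const double tick = 1.0 / 1024.0;        /* 2^-10 s, exactly representable */
        double t = 0.0;

        for (long i = 0; i < 3600L * 1024L; i++) /* one hour of ticks              */
            t += tick;                           /* every sum here is exact        */

        printf("elapsed time: %.3f s\n", t);     /* the user still sees decimal    */
        return 0;
    }
    [/code]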

    The biggest problem with the metric system is that it gets everyone thinking base 10 is the best and only way to think. As long as you are doing computations it is a very bad way to think.

    This is one of the reasons I am against the metric system: it sets a bad precedent.
     
  6. Apr 25, 2006 #5

    Integral

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    This was over 10 years ago, and computers have evolved a lot in that time. However, they still cannot deal with .1 exactly.

    You would do well to take steps of [itex] 2 ^ {-10} [/itex] :approve:
     
  7. Apr 25, 2006 #6

    Ivan Seeking

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    Come on, I did one-millisecond timing on my PC in 1990...in Quick Basic no less! Same with PIC chips; they have been processing that fast for at least ten years. In fact, the circuit that I showed you was using 1 ms timers, and that was 7400-series stuff.
     
    Last edited: Apr 25, 2006
  8. Apr 25, 2006 #7

    Ivan Seeking

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    Anyway, don't mean to derail the thread, but that smacks of cheesy engineering to me.
     
  9. Apr 25, 2006 #8

    -Job-

    User Avatar
    Science Advisor

    About an hour ago I used a 0.05 step in a program in the Lagrange interpolation thread and got round-off error in under 20 iterations; you can see it there in the posted output. I'm guilty as well.
     
    Last edited: Apr 25, 2006
  10. Apr 25, 2006 #9

    Integral

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    I tend to agree with your assessment. But then, if they had tried to use a millisecond step, the errors would have shown up even faster!
     
  11. Apr 25, 2006 #10
    I'm not familiar with the specifics of the programming related to those missile systems, but whenever I have to count with decimals I change my representation so that I'm using integers. So instead of calling a millisecond 0.001 s, I call it just what it is: 1 millisecond. Then if I need to show something in seconds I just divide the milliseconds by a thousand for my output, but I keep counting in milliseconds so that my data is not off.
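
    Roughly like this (a toy C sketch of that approach, not real control code): count in whole milliseconds as an integer, and only convert to seconds for display.

    [code]
    #include <stdio.h>

    int main(void)
    {
        long long ms = 0;                                /* integer count: no round off */

        for (long i = 0; i < 100L * 3600L * 1000L; i++)  /* 100 hours of 1 ms ticks     */
            ms += 1;

        /* Convert only for output; the stored count stays exact. */
        printf("elapsed: %lld ms = %.3f s\n", ms, ms / 1000.0);
        return 0;
    }
    [/code]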

    Computers have trouble with any number that is not an integer, and they would have just as much trouble with reciprocals of powers of two. Computers would not represent 1/32 as 1/32, but as 0.0009765, which is just another decimal number like 0.1. The downfall of the program wasn't the metric system, but a problem inherent with trying to accurately represent numbers that are not integers. That's something I learned in an introductory Pascal book in its coverage of loops.

    As a Canadian I learned on metric, and I love it. How many feet are in a mile? I don't know, and I don't know anyone who does off the top of their head. How many meters in a kilometer? Now that I can do. Metric is just like a written representation of scientific notation, so you only need to know one fundamental unit for each measure and then it scales up and down effortlessly... 1 km = 1e3 m, 1 mm = 1e-3 m, and so on.
     
  12. Apr 25, 2006 #11

    Integral

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    This is simply incorrect. Computers store binary representations, not decimal. So 1/32 = 2^(-5) = 0.00001 (binary); this can be represented precisely in a computer. 0.1 cannot be.

    Once again, the metric system is great for humans but sucks for computers, because 0.1 and most of its multiples are rounded off by the computer. There is no way around this, as I pointed out in the initial post. You can minimize the round-off error by doing a multiplication instead of an addition, but the result is still an approximation.
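
    For example, here is a small C sketch (an illustration only, assuming ordinary IEEE-754 doubles) of summing 0.1 a few million times versus multiplying once:

    [code]
    #include <stdio.h>

    int main(void)
    {
        const long n = 100L * 3600L * 10L;   /* number of 0.1 s ticks in 100 hours */

        double summed = 0.0;
        for (long i = 0; i < n; i++)
            summed += 0.1;                   /* a rounding error on every addition */

        double multiplied = n * 0.1;         /* a single rounding, not millions    */

        printf("summed     : %.9f\n", summed);
        printf("multiplied : %.9f\n", multiplied);
        printf("exact      : %ld\n", n / 10);
        return 0;
    }
    [/code]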
     
  13. Apr 25, 2006 #12

    -Job-

    User Avatar
    Science Advisor

    1/32 = 0.03125 in decimal, which is 0.00001 in binary.
    The problem is not with computers having trouble representing any non-integer numbers. Floating point numbers are represented in computer hardware in "binary scientific notation", which is of the form A*2^B, where A is the significand (or mantissa) and B is the exponent. Because processors handle only a finite number of bits, like 32 or 64, part of these bits are used for the exponent and part for the mantissa. Therefore, a computer's precision depends on how many bits are used for the mantissa. No matter how good the precision is, there will always be numbers like 0.1 (decimal) or 0.333… (decimal) which, in binary, equate to non-terminating binary strings. For these numbers, the binary representation is truncated or rounded to fit the number of bits allowed for the significand, which results in round-off error. The error is very small, but over time it can accumulate.
    But there are plenty of non-integer numbers that can be represented in a computer without round-off error.
    The only problem with the metric system is that it encourages increments/decrements by powers of 10, 0.1 being one such step.
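
    You can see this directly by printing the stored values with more digits than the constants were written with (a quick C check, assuming an ordinary IEEE-754 double):

    [code]
    #include <stdio.h>

    int main(void)
    {
        printf("0.1     -> %.20f\n", 0.1);      /* non-terminating in binary, so rounded */
        printf("0.5     -> %.20f\n", 0.5);      /* 2^-1, stored exactly                  */
        printf("0.03125 -> %.20f\n", 0.03125);  /* 1/32 = 2^-5, stored exactly           */
        return 0;
    }
    [/code]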
     
    Last edited: Apr 25, 2006
  14. Apr 25, 2006 #13

    NateTG

    User Avatar
    Science Advisor
    Homework Helper

    Actually there are quite a number of ways that computers can be made to readily deal with decimal numbers. It's quite feasible to build decimal computers, even if our modern technology is not well-suited to it. Moreover, programming techniques such as fixed point and arbitrary precision arithmetic are quite well established for some applications.

    http://www.siam.org/siamnews/general/patriot.htm

    Indicates that the problem was not actually rounding errors per se.
    Fundamentally, the problem is that computers don't deal with numbers in any particularly sensible fashion. This isn't an isolated incident.
    http://ta.twi.tudelft.nl/users/vuik/wi211/disasters.html
     
  15. Apr 25, 2006 #14

    -Job-

    User Avatar
    Science Advisor

    You can build computers based on any numerical base, but I imagine they will be harder to create and manage (circuit-wise). A better alternative is to build an emulator that performs base 10 arithmetic and runs on a binary computer, without using the hardware floating point operations at all (i.e., using only the available integer and logical operations). It would be slower, but cheaper.
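
    Something in that spirit (a toy C sketch, not a real decimal library): keep every decimal digit as a small integer and do the arithmetic digit by digit, so a sum like 0.100 + 0.300 comes out exact with no binary round off involved.

    [code]
    #include <stdio.h>
    #include <string.h>

    /* Toy decimal emulator: add two fractions given as equal-length strings of
       decimal digits after the point, right to left. Every digit is an integer,
       so no binary floating point is involved at all. */
    static void add_fractions(const char *a, const char *b, char *out)
    {
        int n = (int)strlen(a);
        int carry = 0;
        out[n] = '\0';
        for (int i = n - 1; i >= 0; i--) {
            int s = (a[i] - '0') + (b[i] - '0') + carry;
            out[i] = (char)('0' + s % 10);
            carry = s / 10;
        }
        /* Any final carry would go into the integer part (ignored in this toy). */
    }

    int main(void)
    {
        char result[16];
        add_fractions("100", "300", result);        /* 0.100 + 0.300        */
        printf("0.100 + 0.300 = 0.%s\n", result);   /* prints 0.400 exactly */
        return 0;
    }
    [/code]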
     
    Last edited: Apr 25, 2006
  16. Apr 25, 2006 #15

    Integral

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    Nate,
    Thanks for the link, it provides a good description of the problem. It still came down to round-off errors not being dealt with correctly.

    Another point mentioned in the program was that Iraqi modifications to the Scuds to get greater range may have rendered them unstable, sort of like a tumbling bullet, so they would not have followed a predictable course, making them pretty much impossible to hit.
     
  17. Apr 25, 2006 #16

    chroot

    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    Trust me, even if humans used base-2 units, people would find other ways to screw up software. What's really scary is that the people chosen to develop software to control missiles were so incompetent in the first place. What ever happened to design verification?

    - Warren
     
  18. Apr 25, 2006 #17
    Apparently I misunderstood the way numbers are stored in a computer, but I see how that works now and why some decimal numbers have no exact binary representation. That article cleared the issue up nicely for me.
     