
Why is x86 called the "otherwise disastrous x86 architecture", and does x64 improve?

  1. Nov 30, 2007 #1
    why is x86 called the "otherwise disastrous x86 architecture", and does x64 improve on it?

    for example, http://www.theinquirer.net/gb/inquirer/news/2007/12/01/floating-point-bugs-explode

    but I have heard many claims that x86 sucks, yet it seems to work fine with Windows.


    does x64 improve on x86?

    The one thing I know of is the limited number of registers in x86, which x64 doubles.
     
  2. Nov 30, 2007 #2

    jim mcnamara


    That article is about IEEE 754, the floating point standard, more than anything else.
    The software had a bug - it ignored integer overflow.
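
    To make that failure mode concrete, here is a minimal C sketch of an ignored integer overflow (the names are invented; signed overflow is formally undefined behavior in C, but on typical x86 builds it simply wraps around with no warning):

    ```c
    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        int count = INT_MAX;  /* 2147483647, the largest 32-bit signed value */
        count = count + 1;    /* the overflow: on a typical x86 build this
                                 silently wraps to -2147483648, no warning */
        printf("%d\n", count);
        return 0;
    }
    ```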

    About 95% of what you see on unmoderated sites is not even worth reading. That particular article was okay, but I think you took the wrong idea from it. Its points were:
    1. there are FP standards (IEEE 754), and they are changing;
    2. people like to blame problems on something other than themselves.

    In terms of "sucks," the same thing applies. "Experts" on the internet and elsewhere proclaim that Intel or AMD CPUs are bad. I don't think so. It's the same deal as your friend who says "never buy a car made by Ford" (you can change Ford to any car manufacturer). People have bad experiences with computers as well as cars. Maybe they blame their trouble on Intel or AMD, I dunno. But bad drivers and uninformed computer users are easy to find, and so are their complaints.

    If those x86 chips were truly useless, why would they be sold in the hundreds of millions?
    All it would take is for some person to make a 'non-suck' chip and sell it. :)

    64-bit has a problem (not with the chip). A lot of software runs on it but does not use it fully. It's like buying a race car and then driving it only in stop-and-go traffic at 5 miles per hour (or 8 km/h, if you like).
     
  3. Nov 30, 2007 #3

    ranger


    Context!

    The term "x86 sucks" has no meaning if it's used without context. A particular architecture will perform better at certain tasks than it can at others. Consider a couple of examples. Take the TI89 and the HP49 graphing calculators: the former uses the Motorola 68000 processor, and the HP49g an ARM processor. The HP49g has a very handy way of entering data into the calculator: Reverse Polish Notation (RPN). The load-store architecture of the ARM processor is ideal for working with RPN -- data is pushed onto and popped off the stack as needed, keeping instruction sequences short. The Motorola 68000 takes a different approach to handling data. It isn't a load-store architecture like the ARM; instead, its operands are found in registers and in memory, so it's not the best chip for implementing RPN. It's true that there are RPN programs for the 89, but they do not use the 68000 architecture efficiently. The different ways in which these architectures handle data make them more suitable for certain tasks.
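
    To picture the push/pop model, here is a minimal C sketch of RPN evaluation -- just an illustration with an invented stack and helpers, not actual calculator code:

    ```c
    #include <stdio.h>

    /* Evaluate "3 4 + 2 *" (RPN for (3 + 4) * 2) with an explicit stack,
       the way a load-store machine would naturally handle it. */

    static int stack[32];
    static int top = 0;

    static void push(int v) { stack[top++] = v; }
    static int  pop(void)   { return stack[--top]; }

    int main(void) {
        push(3);
        push(4);
        push(pop() + pop());   /* '+' pops two operands, pushes the sum */
        push(2);
        push(pop() * pop());   /* '*' pops two operands, pushes the product */
        printf("%d\n", pop()); /* prints 14 */
        return 0;
    }
    ```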

    Another example is byte ordering. Intel uses little endian; Macs use big endian. Has someone ever told you that Macs are better in the computer graphics world? It's true to a certain extent, but have you ever wondered why? When we write data to or read data from a memory location, the data must be in the byte order that the architecture expects. If a program represents data in big-endian byte order, but a little-endian architecture wishes to make use of that data, then the architecture has no option but to reverse the byte order to make sense of it. This is inefficient because an intermediate step must always be performed. Many popular graphics programs use big-endian byte ordering -- Adobe Photoshop, for example. Since Macs are also big endian, it is only natural for that architecture to be better at graphics design with Photoshop than an Intel processor.
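
    As a rough illustration of that intermediate step, a small C sketch (the value and the swap code are just for demonstration):

    ```c
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t value = 0x11223344;
        unsigned char *bytes = (unsigned char *)&value;

        /* On little-endian x86 the first byte in memory is 0x44;
           on a big-endian machine it would be 0x11. */
        printf("first byte in memory: 0x%02x\n", bytes[0]);

        /* The "intermediate step": reversing byte order when the data's
           endianness doesn't match the host's. */
        uint32_t swapped = ((value & 0x000000ffu) << 24) |
                           ((value & 0x0000ff00u) << 8)  |
                           ((value & 0x00ff0000u) >> 8)  |
                           ((value & 0xff000000u) >> 24);
        printf("byte-swapped:         0x%08x\n", (unsigned)swapped); /* 0x44332211 */
        return 0;
    }
    ```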
     
  4. Dec 21, 2007 #4
    Chances are, esabah, the computer you are typing on is an x86. Now ask yourself: does it suck? For that matter, how many PCs that you have used have really sucked? Anyway, most modern PCs are x86.
     
  5. Dec 28, 2007 #5
    Ranger, Macintoshes have been using Intel x86 chips, with Intel byte ordering, for several years now.

    All of this is off the top of my head, so please take it with a grain of salt, but:

    The thing to understand here is basically that, to the extent the x86 architecture is disastrous, it's that way because it's old. The instruction set of x86 started its life as something fairly simple for the 8086 chip in the 80s, and has been accumulating more and more complexity and bizarreness ever since. Through every modern era of microprocessors, the x86 has endured, and in each era it has both kept a fairly straight line of backward compatibility and picked up new baggage. On top of this you can add the multiple coprocessor extensions, things like MMX/SSE, and the fact that AMD is now adding their own mutations (including the 64-bit extensions that later became x64). With all of this baggage the x86 instruction set is now a beast, a monster that requires a decent amount of effort just to tame before you can even get to processing. Every new x86 chip must live with all the sins of every past x86 chip.

    x64 if anything kind of makes things worse-- it simplifies some things, kicking out some of the older cruft and weirder addressing modes (but, frustratingly, not all of them). But in another way x64 has just been about adding yet another layer of strange exceptions-- since of course x64 chips must retain backward compatibility with x86, even though x64 is in many (but not all) ways different! (Although x64 does substantially address some of the non-instruction-set related, more fundamental complaints against the x86 architecture-- for example it adds some registers.)
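
    For the register point specifically: 32-bit x86 exposes 8 general-purpose registers (EAX, EBX, ECX, EDX, ESI, EDI, EBP, ESP), and x86-64 doubles that to 16 by adding R8 through R15. A minimal C sketch to check which mode a build targets, using the usual GCC/Clang and MSVC predefined macros:

    ```c
    #include <stdio.h>

    int main(void) {
        /* Pointer width is a quick way to tell an x86 build from an x64 one. */
        printf("pointer size: %zu bytes\n", sizeof(void *));

    #if defined(__x86_64__) || defined(_M_X64)
        /* 64-bit mode: 16 general-purpose registers (RAX..RSP plus R8..R15) */
        puts("built for x86-64: 16 general-purpose registers");
    #else
        /* 32-bit mode: only 8 (EAX, EBX, ECX, EDX, ESI, EDI, EBP, ESP) */
        puts("built for 32-bit x86: 8 general-purpose registers");
    #endif
        return 0;
    }
    ```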

    This burgeoning complexity happened while much of the movement in processor design -- mostly during the 90s, and just cementing in this decade -- was toward greater simplicity. The successful new chips have all been about moving away from Intel-style instruction sets, where the CPU instructions are very expressive and almost like a programming language, and toward "RISC", a philosophy where it is considered better to issue three simple instructions than one instruction that does three things.

    ...but, despite all this, the x86 has overwhelmingly won in the market, completely taking over the consumer and server spaces while the RISCs have been exiled to embedded applications and video game systems. What gives?

    The thing is that the x86 architecture's disastrousness, all things considered, isn't really all that disastrous. Yeah, the instruction set is a pain for anyone who has to deal with it. Yeah, compiler authors would possibly be happier with something else. But who cares? The complexity forced by x86's checkered past really only affects a few people writing compilers, and some poor group of people buried somewhere in Intel's design labs whose job it is to work around all of it. The baggage of x86's past doesn't really impact how effectively the microchip in your computer runs.

    The reason for this is that the "architecture", the baggage of x86's past, is really only skin deep. At some point Intel figured out that just because they have a complex instruction set doesn't mean that their chips have to be complex -- they could get all the advantages of the RISC chips, or whatever they felt like, just by making the internal core of the chip streamlined, simple, and elegant, and having the complex x86 instructions be broken down by a decode layer in front that passes micro-instructions on to the core. Once you get sufficiently good at this translation-layer thing, your "architecture" doesn't matter! Other than architecture-imposed limitations like register count -- though again, x64 addresses that point to some extent -- you can pretty much implement the chip however you want and then treat the x86 "architecture"/instruction set as something just to plug into. It would still maybe be nicer to have the ISA be simple to begin with and not have to worry about a complex translation layer, but this vague advantage is no longer enough to overpower the inertia that the x86 chipmakers have.
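
    As a toy C sketch of that decode-and-dispatch idea (everything below is invented for illustration -- real decoders are enormously more involved): a "complex" memory-to-memory add is broken into simple load/add/store micro-ops that a streamlined core executes one at a time, which is also exactly the RISC point from a couple of paragraphs up.

    ```c
    #include <stdio.h>

    /* Invented micro-op set: the simple "core" only ever sees these. */
    typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;
    typedef struct { uop_kind kind; int addr; int value; } uop;

    static int memory[16]; /* toy memory */
    static int temp_reg;   /* toy internal register */

    static void execute(uop u) { /* the streamlined core */
        switch (u.kind) {
        case UOP_LOAD:  temp_reg = memory[u.addr]; break;
        case UOP_ADD:   temp_reg += u.value;       break;
        case UOP_STORE: memory[u.addr] = temp_reg; break;
        }
    }

    int main(void) {
        memory[3] = 40;

        /* Front end: decode one CISC-style "add [3], 2" -- an instruction
           that reads memory, adds, and writes memory -- into three
           RISC-style micro-ops. */
        uop decoded[] = {
            { UOP_LOAD,  3, 0 },
            { UOP_ADD,   0, 2 },
            { UOP_STORE, 3, 0 },
        };

        for (size_t i = 0; i < sizeof decoded / sizeof decoded[0]; i++)
            execute(decoded[i]);

        printf("memory[3] = %d\n", memory[3]); /* prints 42 */
        return 0;
    }
    ```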

    ... I still like VMX/altivec better than SSE, though... :(

    One thing to note: the Inquirer quote you offer about the x86 architecture being "otherwise disastrous" appears to have actually been referring to the 8086 chip in the early 80s. I personally couldn't really tell you what was so disastrous about the 8086 compared to the other chips at the time...
     
  6. Dec 28, 2007 #6

    ranger


    True. I only used that example to justify the importance of context when you speak of processor performance. Since most graphics software was written for Macs, it made sense. It also seems that Photoshop has the option to save in Intel or Mac byte ordering. I still have friends in graphic design who use G5s, though.
     
  7. Dec 29, 2007 #7
    AFAIK, x64 is a Microsoft marketing name used for their 64-bit computing products on the x86 platform... [which some refer to as x86_64]
     