Ranger, Macintoshes have been using Intel x86 chips, with Intel byte ordering, for several years now.
This is all off the top of my head, so please take it with a grain of salt, but:
The thing to understand here is basically that to the extent the x86 architecture is disastrous, it's that way because it's old. The instruction set of x86 started its life as something fairly simple for the 8086 chip in the 80s, and has been accruing more and more complexity and bizarreness ever since. Through every modern era of microprocessors, the x86 has endured, and both kept a fairly straight line of backward compatibility-- and also picked up new baggage-- in each era. On top of this you can add the multitude of coprocessor extensions, things like MMX/SSE, and the fact that AMD is now adding their own mutations (including the 64-bit extensions that later became x64). With all of this baggage the x86 instruction set is now a beast, a monster that requires a decent amount of effort just to tame before you can even get to executing anything. Every new x86 chip must live with all the sins of every past x86 chip.
x64, if anything, kind of makes things worse. It does simplify some things, kicking out some of the older cruft and weirder addressing modes (but, frustratingly, not all of them); in another way, though, x64 has just been about adding yet another layer of strange exceptions-- since of course x64 chips must retain backward compatibility with x86, even though x64 is in many (but not all) ways different! (Although x64 does substantially address some of the more fundamental, non-instruction-set complaints against the x86 architecture-- for example, it doubles the number of general-purpose registers from 8 to 16.)
This burgeoning complexity accrued at the same time that processor design-- mostly during the 90s, and cementing in this decade-- was moving toward greater simplicity. The successful new chips have all been about moving away from Intel-style instruction sets, where the CPU instructions are very expressive and almost like a programming language, and toward "RISC", a philosophy which holds that it is better to issue three simple instructions than one instruction that does three things.
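To make the contrast concrete, here's a toy illustration-- the mnemonics are invented for clarity, not real x86 or RISC assembly syntax. A CISC-style instruction folds a memory read, an add, and a write-back into one step; the RISC-style version spells out the same work as three instructions that each do exactly one thing.

```python
# Toy illustration of the CISC vs. RISC philosophy.
# Mnemonics are made up for clarity; real x86/RISC syntax differs.

# CISC style: one expressive instruction does a load, an add, and a store.
cisc_program = [
    "ADD [mem], r1",    # mem = mem + r1 (read memory, add, write back)
]

# RISC style: the same work as three simple instructions,
# each touching memory or doing arithmetic, never both at once.
risc_program = [
    "LOAD  r2, [mem]",  # read memory into a register
    "ADD   r2, r1",     # register-to-register arithmetic
    "STORE [mem], r2",  # write the result back
]

print(len(cisc_program), len(risc_program))  # 1 3
```

The RISC bet is that three trivially decodable instructions are easier for the hardware to pipeline and schedule than one instruction the chip has to pick apart first.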
...but, despite all this, the x86 has overwhelmingly won in the market, completely taking over the consumer and server spaces while the RISCs have been exiled to embedded applications and video game systems. What gives?
The thing is that the x86 architecture's disastrousness, really, all things considered, isn't all that disastrous. Yeah, the instruction set is a pain for anyone who has to deal with it. Yeah, compiler authors would possibly be happier with something else. But who cares? The complexity forced by x86's checkered past really only affects a handful of people writing compilers, and some poor group of people buried somewhere in Intel's design labs whose job it is to work around all that. The baggage of x86's past doesn't really impact how effectively the microchip in your computer runs.
The reason for this is that the "architecture", the baggage of x86's past, is really only skin deep. At some point Intel figured out that just because they have a complex instruction set, doesn't mean that their chips have to be complex-- they could get all the advantages of the RISC chips or whatever they felt like, just by having the internal core of the chip be all streamlined and simple and elegant, and having the complex x86 instructions be broken down by decoder silicon at the front of the chip that just passes micro-instructions on to the core. Once you get sufficiently good at this translation-layer thing, your "architecture" doesn't matter! Other than architecture-imposed limitations like the number of registers-- though again, x64 addresses that point to some extent-- you can pretty much just implement the chip however you want and then treat the x86 "architecture"/instruction set like something just to plug into. It would maybe still be nicer to have the ISA be simple to begin with and not have to worry about a complex translation layer, but this vague advantage is no longer enough to overpower the inertia that the x86 chipmakers have.
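A minimal sketch of that translation-layer idea, assuming a made-up instruction set: the "ADDM" instruction and the micro-op format below are hypothetical, invented for illustration, and bear no resemblance to Intel's actual micro-op encoding (real x86 decoders are vastly more involved).

```python
# Sketch of a CISC-to-micro-op decoder, in the spirit described above.
# Instruction names and micro-op tuples are invented for illustration.

def decode(instruction):
    """Break one complex 'architectural' instruction into simple micro-ops
    that a streamlined internal core could execute directly."""
    op, _, operands = instruction.partition(" ")
    if op == "ADDM":  # hypothetical: add a register into memory in one step
        mem, reg = [s.strip() for s in operands.split(",")]
        return [
            ("LOAD", "tmp", mem),   # micro-op: read memory into a scratch reg
            ("ADD", "tmp", reg),    # micro-op: register-to-register add
            ("STORE", mem, "tmp"),  # micro-op: write the result back
        ]
    # Instructions that are already simple pass through as one micro-op.
    return [tuple(instruction.replace(",", " ").split())]

micro_ops = decode("ADDM [0x40], r1")
print(micro_ops)
```

The point of the sketch: everything downstream of `decode` only ever sees the simple micro-ops, so the core can be built like a RISC machine regardless of how baroque the instruction set in front of it is.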
... I still like VMX/altivec better than SSE, though... :(
One thing to note: the inquirer quote you offer about the x86 architecture being "otherwise disastrous" appears to have actually been referring to the 8086 chip in the early 80s. I personally couldn't really tell you what was so disastrous about the 8086 compared to the other chips at the time...