When can I buy a laptop with specs similar to these?

  • Thread starter: JoshHolloway
  • Tags: Laptop
AI Thread Summary
The discussion revolves around the feasibility of laptops with advanced specifications, including a 5-10 GHz processor, 4-10 GB RAM, and 1-2 TB hard drive space, being available by Q3 2008. Participants express skepticism about achieving such high clock speeds and storage capacities in laptops, suggesting that while desktops may see advancements, laptops will prioritize power efficiency over raw speed. The conversation also touches on Moore's Law, questioning its relevance as clock speeds have plateaued and performance improvements shift towards multi-core processors. There is a consensus that the gaming industry drives technological advancements, yet the need for extreme specifications, such as 10 GB of RAM, is debated. Overall, the thread highlights the challenges and expectations surrounding future laptop technology.
JoshHolloway
5-10 GHz clock speed processor (possibly IBM Cell) (possibly 2 processors)
4-10 GB RAM
1-2 terabyte hard drive space
Double-layer Blu-ray writer
High definition
TV in (coax, RCA, S-video, component, fiber audio)
Two TV outs simultaneously (same formats as video in)
Tablet
Wi-Fi (SUPER HIGH SPEED)
Voice recognition
Windows Vista
Blu-ray drive backward compatible with (SACD, CD-ROM, CD-R, CD-RW, DVD+R, DVD-R, DVD, DVD-Audio)
64-bit architecture

Do you think there will be a laptop on the market by Q3 2008 with these specs? What do you think?
 
With the rate that technology changes, I'd say it's fairly hard to speculate on a date.
 
I think a computer with roughly the specs I described will be on the market in late 2008. Do you think that sounds reasonable? That is when I plan on buying my next computer.
 
Well, standard laptop hard drive capacity is ~80 GB and processor speed ~3 GHz; DVD DL writers for laptops came out, as did 64-bit architecture, voice recognition, and 802.11g Wi-Fi. Vista "is scheduled to come out" in 2006, but it'll come out in 2007/2008. Your processor might be developed around 2007, your hard drive might be out around 2009, and the RAM around 2008. When your laptop is released around July 2009, it'll be around $8K, though it may vary from $4K to $10K.
 
I disagree, livingpool. If a laptop with those specs is out in late 2008, I would think it would be more like $3500 max. Look at the top-of-the-line computers now; they are about $3500.
 
The best way to figure it out is probably to look at the last ten years of the gaming world and how it has driven increases in technology. Without gaming we'd still be using PIIs.
 
Sorry about the grammar above, I was talking on the phone while typing.
 
I would say it could be possible with a desktop PC, but not a notebook. The trend with notebooks has been toward conserving power and battery life rather than high-speed computing.
 
dduardo said:
The trend with notebooks has been toward conserving power and battery life rather than high-speed computing.

True, but Alienware, and now Dell with the XPS line, have released high-powered laptops and will continue to do so as gamers move more to mobile solutions.
 
  • #10
You make a good point, dd. I agree with you now that I think about it. But what about Moore's law? I know it says that roughly every 18 months the number of transistors on a microprocessor doubles. Is this still happening? I mean, has this happened in the last 18 months? Because all I really know about is clock speed, and about 16 months ago there were 3.4 GHz processors on the market, and I think the fastest now is like 3.8. Why has it not doubled? If the number of transistors on the chip is not directly related to the clock speed, would you be so kind as to explain to me why the number of transistors matters in terms of performance? If you don't want to answer the question just tell me and I will look it up. I just would kind of like to have a short two-way conversation on the subject.
 
  • #11
And if you all don't mind, could you briefly explain to me what the most important factors of performance are in the specs of a CPU? Because it is obviously not just clock speed. I think clock speed is just easy to advertise, which is why everyone knows about it. Again, if I am bugging you with questions please just say so and I will shut up.
 
  • #12
JoshHolloway said:
And if you all don't mind, could you briefly explain to me what the most important factors of performance are in the specs of a CPU? Because it is obviously not just clock speed. I think clock speed is just easy to advertise, which is why everyone knows about it. Again, if I am bugging you with questions please just say so and I will shut up.

http://www.kitchentablecomputers.com/processor2.htm
 
  • #13
Thanks friend!
 
  • #14
JoshHolloway said:
5-10 GHz clock speed processor (possibly IBM Cell) (possibly 2 processors)
4-10 GB RAM
1-2 terabyte hard drive space
Double-layer Blu-ray writer
High definition
TV in (coax, RCA, S-video, component, fiber audio)
Two TV outs simultaneously (same formats as video in)
Tablet
Wi-Fi (SUPER HIGH SPEED)
Voice recognition
Windows Vista
Blu-ray drive backward compatible with (SACD, CD-ROM, CD-R, CD-RW, DVD+R, DVD-R, DVD, DVD-Audio)
64-bit architecture
Do you think there will be a laptop on the market by Q3 2008 with these specs? What do you think?


When I read the thread title I thought it said "where", so consequently I burst out laughing when reading the specs.

Forget Cell and Vista; that's not going to happen at all. I'd be surprised to ever see an MS OS run natively on the Cell. I think they're too much in bed with Intel and x86 for that.

That said, you're looking at 2009 at the soonest. You might get a desktop like that by then, but not a laptop. As for 5-10 GHz, probably not ever going to happen. We seem to have hit the limit of practical clock speed increases at about 4 GHz. It's simply much easier and cheaper to double the number of processors than to double the clock speed anymore. Look for parallel computing, not faster computing.
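To put rough numbers on that parallel-vs-faster point, here is a quick Amdahl's law sketch in Python (the 90% parallel fraction is illustrative, not a measurement):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# parallelizable fraction of the work and n is the number of processors.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even a program that is 90% parallel tops out at 10x speedup,
# no matter how many processors you throw at it.
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} processors: {amdahl_speedup(0.9, n):.2f}x speedup")
```

Doubling the clock, by contrast, speeds up the serial parts too, which is why it was so attractive while it lasted.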

As for voice recognition, why? Keyboard input is far, far faster. I can easily type faster than I can talk, so I don't see any use in voice recognition really, other than reducing carpal tunnel. Not something I'm interested in personally.
 
  • #15
Not to mention the cost of that laptop. At that price you'd be better off with a high-end gaming machine with two 30-inch Apple displays and 400-watt Logitechs.
 
Last edited:
  • #16
Well, if this laptop is not out by '08 then I will build it from scratch. Could someone give me some tips on how I can make a microprocessor with a sewing machine, my soldering iron, and a hot glue gun?
 
  • #17
JoshHolloway said:
5-10 GHz clock speed processor (possibly IBM Cell) (possibly 2 processors)
4-10 GB RAM
1-2 terabyte hard drive space
Double-layer Blu-ray writer
High definition
TV in (coax, RCA, S-video, component, fiber audio)
Two TV outs simultaneously (same formats as video in)
Tablet
Wi-Fi (SUPER HIGH SPEED)
Voice recognition
Windows Vista
Blu-ray drive backward compatible with (SACD, CD-ROM, CD-R, CD-RW, DVD+R, DVD-R, DVD, DVD-Audio)
64-bit architecture
Do you think there will be a laptop on the market by Q3 2008 with these specs? What do you think?

In a laptop form factor, I think probably never, unless you expect to carry a backpack around for a power source, battery energy density increases dramatically, or you keep it permanently plugged into wall power. HD storage density would have to increase by an order of magnitude to squeeze a TB into a 2.5" form-factor drive. CPU speeds seem to have plateaued for the moment at 3.2-3.8 GHz, with Intel and AMD starting to focus more on increasing performance per watt rather than performance per GHz. Most laptops only have room for 2 memory slots, so memory chip density would probably have to double to reach your specs.
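A back-of-the-envelope check on the hard drive point (assuming ~100 GB as today's top 2.5" capacity and a density doubling every two years, both rough assumptions):

```python
import math

# How many density doublings from a 100 GB 2.5" drive to 1 TB,
# and roughly how long at one doubling every two years?
current_gb, target_gb = 100, 1000
doublings = math.log2(target_gb / current_gb)
print(f"{doublings:.1f} doublings -> roughly {2 * doublings:.0f} years")
```

So even on optimistic assumptions, a 2.5" terabyte drive lands well past the Q3 2008 target.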

The rest of it is already doable or is probably not very far away.

Naturally, I expect my prediction to be proven in 2-3 years.
 
  • #18
Are we not getting to a point now where it is becoming increasingly harder to get higher clock speeds with the technology we use?

Anyway, why on Earth would anyone need something that powerful? I can't think of any reason on the workstation side of things to need 10 GB of memory... Of course, on servers running some enterprise DB it is already in use.

The storage space is the only thing I can see that is warranted.
 
  • #19
There's something called the "speed of light limit". If the clock is too fast, then, of two consecutive ticks, the second tick might occur before the first tick has had time to propagate across the whole chip. You can visualize this with a very large CPU chip, 300,000 km long for instance. If that chip's clock ticks more than once a second, then before the first tick reaches the outer limits of the chip, some components of the chip (the ones closer to the clock) have already received and acted on the second tick. Computers are generally designed with a synchronous model in mind (that's why we have clocks on the chips), so this isn't good. For a 6x6 cm synchronous chip, light needs about 2*10^-10 seconds to travel 6 cm, so the clock shouldn't tick more than about 5*10^9 times a second. That's roughly 5 GHz. A 10 GHz synchronous processor might not even be possible on a 6 cm chip (you'd have to make it much smaller, keep the clock in the center, or delay the components a bit depending on distance from the clock).
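The same estimate in a few lines (treating the 6 cm figure as a straight-line path at the speed of light; real on-chip signals travel slower and along longer routes, so this is an optimistic upper bound):

```python
# Speed-of-light bound on a synchronous clock for a 6 cm chip.
c = 3.0e8                    # speed of light, m/s
chip_size = 0.06             # 6 cm, worst-case distance a signal must cover
crossing_time = chip_size / c
f_max = 1.0 / crossing_time
print(f"crossing time: {crossing_time:.1e} s")           # ~2.0e-10 s
print(f"max synchronous clock: ~{f_max / 1e9:.0f} GHz")  # ~5 GHz
```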
In order to get really fast we will eventually need asynchronous designs, probably with multiple processors, or chips with multiple sub-processors.
Before we ever get to 10 GHz it would probably be much faster and cheaper to have five 2 GHz processors, or two 5 GHz processors (if the motherboard is well designed). There's always some overhead in producing a really fast processor; the architecture might need to change significantly, so in the end the CPU might be able to perform instructions really fast but, on average, the number of instructions per high-level command or the number of operations per instruction might be much higher. Of course there's also some overhead in the implementation of multiple processors. For one, storing and retrieving memory contents for use in instructions becomes more tricky, but there will also be a slowdown from a software perspective because, to prevent deadlocks or data corruption, the OS will have to restrict the amount of parallelism that is actually used.
However, there's no "speed of light limit" with asynchronous machines, and you would be able to use currently available processors; the difference would be in the motherboard and OS.
I also think that, in the future, clock speed will become less important because, as wireless networks expand and become more powerful, we may be getting close to the point where PCs will be simple machines with an Internet connection and most of the processing is actually done on very powerful servers off somewhere.
 
Last edited:
  • #20
Some quick research shows

-Intel released the 500 MHz PIII sometime in mid-1999
-Intel released the 1 GHz PIII in mid-2000 (approximately 12 months)
-Intel released the 2 GHz P4 in August 2001 (approximately 14 months)
-We're still waiting for 4 GHz (84 months and counting)

I'd say Moore's law is pretty much dead.
 
  • #21
I also think that, in the future, clock speed will become less important

It's already the case now. From manufacturer to manufacturer it's like comparing apples and pears.
 
  • #22
Wi-Fi (SUPER HIGH SPEED)

We are already at super-high-speed Wi-Fi. I covered this before, I think.

It's called OFDM (Orthogonal Frequency Division Multiplexing). It works by splitting the signal into multiple subsignals on differing frequencies...

http://en.wikipedia.org/wiki/COFDM
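Here is a toy numpy sketch of the idea, just to make it concrete (the parameters are illustrative; real 802.11a/g OFDM adds pilot carriers, channel-tuned guard intervals, and error-correction coding):

```python
import numpy as np

# Toy OFDM link: map bits onto N subcarriers with QPSK, modulate all
# carriers at once with an inverse FFT, and demodulate with an FFT.
N = 64                                   # subcarriers (802.11a/g also uses 64)
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N)

# One QPSK symbol (+/-1 +/-1j) per subcarrier.
symbols = (2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)

tx = np.fft.ifft(symbols)                # time-domain OFDM symbol
tx = np.concatenate([tx[-16:], tx])      # cyclic prefix to absorb echoes

rx = np.fft.fft(tx[16:])                 # strip prefix, back to subcarriers
recovered = np.sign(rx.real) + 1j * np.sign(rx.imag)
assert np.allclose(recovered, symbols)   # ideal channel -> exact recovery
```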
 
  • #23
Anttech said:
Anyway, why on Earth would anyone need something that powerful? I can't think of any reason on the workstation side of things to need 10 GB of memory... Of course, on servers running some enterprise DB it is already in use.

Like I said above, the gaming industry drives the advancement of computer technology.
 
  • #24
Like I said above, the gaming industry drives the advancement of computer technology.
Can these people not just learn to code properly then? :-p But yeah, you're right: the funkier the visualization, the better specs you're going to need. But still, 10 GB of memory? You'd have to be a really bad programmer to need that amount of memory.
 
Last edited:
  • #25
Anttech said:
Can these people not just learn to code properly then? :-p But yeah, you're right: the funkier the visualization, the better specs you're going to need. But still, 10 GB of memory? You'd have to be a really bad programmer to need that amount of memory.

Just wait till they develop lifelike graphics and have to load world models into your memory. You're going to need more than 10 GB. :smile: For example, how about a SimCity where you can build a city and have it be like you were walking around the real New York? It'll happen some day.
 
  • #26
russ_watters said:
Some quick research shows
-Intel released the 500 MHz PIII sometime in mid-1999
-Intel released the 1 GHz PIII in mid-2000 (approximately 12 months)
-Intel released the 2 GHz P4 in August 2001 (approximately 14 months)
-We're still waiting for 4 GHz (84 months and counting)
I'd say Moore's law is pretty much dead.


Moore's law isn't dead. We can still fit more transistors onto a chip, and you can run a P4 at 7 GHz. The problem is heat dissipation: heat grows much faster than linearly with clock speed, because dynamic power scales roughly with voltage squared times frequency, and higher clocks need higher voltage. At 4 GHz, you're looking at 130 W of heat dissipation. That's A LOT of heat to remove from the system. You basically start to need premium liquid cooling at that point.
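A rough sketch of that scaling (dynamic power goes roughly as C*V^2*f, and higher clocks generally need higher core voltage; the 10%-per-GHz voltage step below is an illustrative assumption, not a datasheet value):

```python
# Relative dynamic power P ~ V^2 * f, normalized to a 3 GHz baseline,
# assuming core voltage must rise ~10% for each extra GHz (made up,
# but it shows why heat outruns clock speed).
def relative_power(f_ghz, f0=3.0, v_step=0.10):
    v = 1.0 + v_step * (f_ghz - f0)
    return (v ** 2) * (f_ghz / f0)

for f in (3.0, 4.0, 5.0, 7.0):
    print(f"{f:.0f} GHz -> ~{relative_power(f):.1f}x the heat of 3 GHz")
```

Even with generous assumptions, the heat budget roughly quadruples well before 10 GHz.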

@Anttech:

We need more and more memory and hardware because that's the business model of Wintel. Well, that and what Greg pointed out as well, but even without that, MS and Intel would be pushing the upgrade train as hard as possible. That just gives them a semi-legitimate reason.
 
  • #27
franznietzsche said:
We need more and more memory and hardware because that's the business model of Wintel. Well, that and what Greg pointed out as well, but even without that, MS and Intel would be pushing the upgrade train as hard as possible. That just gives them a semi-legitimate reason.

I wouldn't say the requirement of more and more memory and hardware over the last few years is just the business model of Wintel. Lots of commercial vendors nowadays are requiring steeper and steeper system specifications. Oracle 9i was able to run somewhat decently with 1 GB of memory, and now it's recommended that you have 2 GB. That's just one example of the price you pay for more features in a product (and there are certainly a number of improved or additional features in Oracle 10g compared to Oracle 9i -- and you won't see me complaining about the steeper system requirements).
 
  • #28
graphic7 said:
I wouldn't say the requirement of more and more memory and hardware over the last few years is just the business model of Wintel. Lots of commercial vendors, nowadays, are requiring steeper and steeper system specifications. Oracle 9i was able to run somewhat decently with 1 GB of memory, and now it's recommended that you have 2 GB. That's just one example of the price you pay for more features in a product (and there are certainly a number of improved or additional features in Oracle 10g compared to Oracle 9i -- and you won't see me complaining about the steeper system requirements).


This much is true, but I'm talking about consumer desktops. The fact that I can run a perfectly functional desktop system on a PIII 500 MHz with 128 MB RAM -- office suite, web browsing, and email all open simultaneously -- that runs noticeably faster than a P4 1.0 GHz laptop with 256 MB RAM and WinXP with only Firefox open says something powerful.

As for higher-end servers, especially database servers, the sheer amount of data being handled does require more resources. Managing 100,000 customer accounts does take a lot of memory and processor power. So do RHD simulations, unfortunately.
 
  • #29
franznietzsche said:
This much is true, but I'm talking about consumer desktops. The fact that I can run a perfectly functional desktop system on a PIII 500 MHz with 128 MB RAM -- office suite, web browsing, and email all open simultaneously -- that runs noticeably faster than a P4 1.0 GHz laptop with 256 MB RAM and WinXP with only Firefox open says something powerful.
Absolutely! I still prefer "primitive" desktop environments, like CDE, over what's available today (GNOME, KDE, etc.), mostly because of the memory footprint and performance. Desktop environments seem keen on adding more and more features that only waste memory and processing power nowadays, and Windows' Explorer isn't the only environment that's guilty of this (I stated the others previously :smile: ).

I do think there's a minimum amount of memory you should have in a Windows workstation nowadays for it to be usable, and my ideal amount is far from 256 MB (that was actually the "usable" amount 3 or 4 years ago with Windows 2000). I can run OpenOffice, Opera, Adobe Acrobat, and the Microsoft Services for UNIX NFS client quite comfortably in 768 MB of memory on my Windows multimedia system at home. I noticed a huge increase in performance when I upgraded from 512 MB of memory to 768 MB a while back.

Edit: I actually have 2 GB of memory in my UNIX workstation at home; however, I do much more with it than I do with my Windows multimedia system. Usually I'm running at least 3 or 4 zones ("virtual instances") of Solaris 10, so I can try out software like Oracle, Sun Cluster, IBM DB2, and NIS/DNS/DHCP/LDAP setups without installing it on my "actual" system. As of now, it looks like I still have 1.4 GB of memory free, running 3 zones (each zone runs the same set of processes as the actual system, for now). Just goes to show how much more you can do with an environment that utilizes system resources properly.
 
Last edited:
  • #30
Why would anyone need such a laptop?

I'm waiting until next year for a PowerBook with an Intel Merom in it; surely that will be a fast enough portable computer for anyone?
 
  • #32
And for the record, Apple will never make OSes for x86, because one big difference between PPCs and x86es is that every piece of hardware in a PPC was made to work well with the other pieces of hardware. On x86, a company like Dell will take a graphics card, a processor, a network card, a sound card, etc. whose creators had no idea it would be going into the new Dell blah-blah-blah. When workers at ATI make a graphics card, they don't care which computer it will be in or what hardware it will work with. My Windows computer didn't boot for the first week I had it because the BIOS tried to boot from a memory stick reader. Apple would never want to make users frustrated with hardware conflicts.
 
  • #33
Livingod said:
And for the record, Apple will never make OSes for x86, because one big difference between PPCs and x86es is that every piece of hardware in a PPC was made to work well with the other pieces of hardware. On x86, a company like Dell will take a graphics card, a processor, a network card, a sound card, etc. whose creators had no idea it would be going into the new Dell blah-blah-blah. When workers at ATI make a graphics card, they don't care which computer it will be in or what hardware it will work with. My Windows computer didn't boot for the first week I had it because the BIOS tried to boot from a memory stick reader. Apple would never want to make users frustrated with hardware conflicts.

While I agree with you that x86 hardware is of lower quality than Apple's PPC hardware, Apple is pushing OS X for x86, and a number of people are already using it on their x86 systems:

http://www.osx86project.org/

And keep in mind that not all x86 vendors forget about hardware validation. You're also forgetting that some of the hardware in an Apple is commodity PC hardware, like an Nvidia or ATI graphics card, as well as the hard drive and whatnot.
 
  • #34
Livingod said:
And for the record, Apple will never make OSes for x86, because one big difference between PPCs and x86es is that every piece of hardware in a PPC was made to work well with the other pieces of hardware. On x86, a company like Dell will take a graphics card, a processor, a network card, a sound card, etc. whose creators had no idea it would be going into the new Dell blah-blah-blah. When workers at ATI make a graphics card, they don't care which computer it will be in or what hardware it will work with. My Windows computer didn't boot for the first week I had it because the BIOS tried to boot from a memory stick reader. Apple would never want to make users frustrated with hardware conflicts.


I don't know where you have been for the last six months, but Apple has dropped their PPC lines altogether. They will soon be releasing Intel x86 computers ONLY.

And your problem with the computer booting from a memory stick sounds like someone screwed with the settings because they thought it'd be funny. It's NOT the same thing as a hardware conflict.
 
  • #35
graphic7 said:
While I agree with you that x86 hardware is of lower quality than Apple's PPC hardware, Apple is pushing OS X for x86, and a number of people are already using it on their x86 systems:
http://www.osx86project.org/
And keep in mind that not all x86 vendors forget about hardware validation. You're also forgetting that some of the hardware in an Apple is commodity PC hardware, like an Nvidia or ATI graphics card, as well as the hard drive and whatnot.

How do you mean lower quality? Do IBM make better-designed processors than Intel?
 
  • #36
rho said:
How do you mean lower quality? Do IBM make better-designed processors than Intel?

The PowerPC architecture has a much more elegant design than i386 -- anyone who's ever done a bit of assembly programming on i386 and PPC will tell you this. IBM also has enough faith in PPC to use it in their low-end pSeries/RS6000 systems, where the POWER just isn't a viable option because of cost. Keep in mind, the POWER and PowerPC have similar features. The fact that the PowerPC is based on an enterprise-level processor, the POWER, should tell you something, unlike the "high-end" x86 processors, like the Opteron or EM64T, which still share too many commonalities with their predecessors -- all the way back to the 8086.
 
Last edited:
  • #37
For the last six months? I didn't know that the new iMac G5 they released was an Intel... oh, that's right, it's a PPC.

And Jobs said that those who supported the osx86project and used OS X on their x86es will, and I quote, "burn in hell".
http://osx86project.org/index.php?option=com_content&task=view&id=44&Itemid=2
 
Last edited by a moderator:
  • #38
Livingod said:
For the last six months? I didn't know that the new iMac G5 they released was an Intel... oh, that's right, it's a PPC.

And Jobs said that those who supported the osx86project and used OS X on their x86es will, and I quote, "burn in hell".
http://osx86project.org/index.php?option=com_content&task=view&id=44&Itemid=2


Your sarcasm is cute.

http://www.google.com/search?q=Appl...ient=firefox-a&rls=org.mozilla:en-US:official

Try using Google more often.
 
Last edited by a moderator:
  • #39
I think it's positive that we are moving towards a common architecture, but I'm not going to comment on which architecture should be the common one.
 
  • #40
-Job- said:
I think it's positive that we are moving towards a common architecture, but I'm not going to comment on which architecture should be the common one.

We shouldn't be moving towards a "common architecture." None of the processors on the market today are perfect for every consumer -- each consumer wants a particular feature out of each processor. Examples:

An x86 processor is perfect for the home user and the low-end enterprise -- they're cheap and they perform, but they fail to scale well and aren't reliable, and thus aren't a viable option for the enterprise.

The PPC targets a similar market as the x86 processor (however, it does scale well), but has failed to "latch on" because of the cost factor.

The more exotic processors, like SPARC and POWER, are not an option for the home user, mostly because a new UltraSPARC IV+ or POWER4 (and 5) will cost thousands or even tens of thousands of dollars for a single processor; however, with POWER you get performance and scalability, and with SPARC you get scalability and reliability.

Point is, this whole convergence to x86 is going to leave a lot out of the picture. Sure, it's cheaper, but you're sacrificing a lot just for that. For the home user, the x86 is an excellent processor, but there are people out there pushing x86 into the enterprise (via Linux-powered clusters and other nonsense that doesn't work out) -- a place it doesn't belong.
 
Last edited:
  • #41
graphic7 said:
The PowerPC architecture has a much more elegant design than i386 -- anyone who's ever done a bit of assembly programming on i386 and PPC will tell you this. IBM also has enough faith in PPC to use it in their low-end pSeries/RS6000 systems, where the POWER just isn't a viable option because of cost. Keep in mind, the POWER and PowerPC have similar features. The fact that the PowerPC is based on an enterprise-level processor, the POWER, should tell you something, unlike the "high-end" x86 processors, like the Opteron or EM64T, which still share too many commonalities with their predecessors -- all the way back to the 8086.

Thank you for the info :smile:

I've never had an Intel computer before, only PPC, and I'm going to buy a new PowerBook next year. What do you think of the Intel portable roadmap (multi-core stuff like Merom)? Is multi-core the way to go for laptops?
 
  • #42
graphic7 said:
We shouldn't be moving towards a "common architecture." None of the processors on the market today are perfect for every consumer -- each consumer wants a particular feature out of each processor. Examples:
An x86 processor is perfect for the home user and the low-end enterprise -- they're cheap and they perform, but they fail to scale well and aren't reliable, and thus aren't a viable option for the enterprise.
The PPC targets a similar market as the x86 processor (however, it does scale well), but has failed to "latch on" because of the cost factor.
The more exotic processors, like SPARC and POWER, are not an option for the home user, mostly because a new UltraSPARC IV+ or POWER4 (and 5) will cost thousands or even tens of thousands of dollars for a single processor; however, with POWER you get performance and scalability, and with SPARC you get scalability and reliability.
Point is, this whole convergence to x86 is going to leave a lot out of the picture. Sure, it's cheaper, but you're sacrificing a lot just for that. For the home user, the x86 is an excellent processor, but there are people out there pushing x86 into the enterprise (via Linux-powered clusters and other nonsense that doesn't work out) -- a place it doesn't belong.

I think I have to challenge your statement that x86 processors aren't scalable and reliable. The new 64-bit Intel Xeon processors scale quite well from what I've seen. Besides, my statement was from a software perspective: fewer architectures make life easier for software companies and provide consumers (and enterprises) with a wider range of options. In this sense we should be moving towards a common architecture.
 
  • #43
-Job- said:
I think I have to challenge your statement that x86 processors aren't scalable and reliable. The new 64-bit Intel Xeon processors scale quite well from what I've seen. Besides, my statement was from a software perspective: fewer architectures make life easier for software companies and provide consumers (and enterprises) with a wider range of options. In this sense we should be moving towards a common architecture.

With the "new Xeons" can you hot-swap CPUs, while the system is under a load? Can you reconfigure memory (yes, that means swapping DIMMs) while the system is in use? POWER and SPARC certainly can do this like it's a piece of cake.

Oh, and my definition of how well something scales is whether it can handle >= 64 processors in a system efficiently. Keep in mind, this can be done with POWER and SPARC -- take a look at the Sun Fire 25k (up to 74 UltraSPARCIV+), Fujistsu PrimePower 2500 (up to 128 SPARC64V), or the IBM pSeries p5 590 (up to 64 POWER5 processors). These also aren't nodes that use interlinks and crossbars, like the Altix does. This said, these large, monolithic systems scale well under all workloads, not just some, like the Altix.
 
Last edited:
  • #44
graphic7 said:
With the "new Xeons" can you hot-swap CPUs, while the system is under a load? Can you reconfigure memory (yes, that means swapping DIMMs) while the system is in use? POWER and SPARC certainly can do this like it's a piece of cake.


Wow. That's certainly all I can say about that. Though I assume, of course, one can't do this in a single-processor system.

Also, these systems, like the Sun 25K, are very, very expensive. I forget which model -- I think it was the 25K, but I'd have to check -- it was just shy of $1.5 million.
 
  • #45
franznietzsche said:
Wow. That's certainly all I can say about that. Though I assume, of course, one can't do this in a single-processor system.
Also, these systems, like the Sun 25K, are very, very expensive. I forget which model -- I think it was the 25K, but I'd have to check -- it was just shy of $1.5 million.

Yeah, the Sun Fire 15K and 25K run in that range, but if you need high availability this is the route to go. You can literally upgrade a Sun Fire 15K to a 25K -- this would be the equivalent of upgrading your PC's motherboard while the system is still turned on. On the other hand, Sun offers this "dynamic reconfiguration" functionality (the ability to swap processors and memory while keeping the system available) on the lower end with the V480 and E2900 -- these run in the $30k-$150k price range; however, IBM reserves dynamic reconfiguration for the high end (but you get LPAR functionality much cheaper than you do with Sun).

Edit: Nope, you can't do dynamic reconfiguration on a single-processor system. In fact, IBM and Sun both require you to buy a system with two or more processors if it has dynamic reconfiguration functionality.
 
Last edited:
  • #46
Wow, hot-swapping processors and DIMMs? What kind of companies are we talking about here? IBM? Because I don't think the typical enterprise, which isn't building massively parallel computers with node cards, will absolutely have to hot-swap processors :smile: -- I would recommend blade servers.
I think the advantages of POWER or SPARC must be visible only in very high-end applications such as research.
 
  • #47
-Job- said:
Wow, hot-swapping processors and DIMMs? What kind of companies are we talking about here? IBM? Because I don't think the typical enterprise, which isn't building massively parallel computers with node cards, will absolutely have to hot-swap processors :smile: -- I would recommend blade servers.
I think the advantages of POWER or SPARC must be visible only in very high-end applications such as research.


POWER is IBM, SPARC is Sun Microsystems.
 
  • #48
I think it's cool that soon I might be able to triple-boot Mac OS, Windows, and Linux. Of course Apple wouldn't need to abandon the PPC platform, but I can imagine it would be very costly to maintain two OS versions for two entirely different architectures. With a common architecture, consumers will have a wider field of software options, and software makers will easily be able to make their products more compatible.
 
  • #49
-Job- said:
Wow, hot-swapping processors and DIMMs? What kind of companies are we talking about here? IBM? Because I don't think the typical enterprise, which isn't building massively parallel computers with node cards, will absolutely have to hot-swap processors :smile: -- I would recommend blade servers.
I think the advantages of POWER or SPARC must be visible only in very high-end applications such as research.

I work in a hospital environment, and we're a small enterprise by definition. Needless to say, we have approximately 30 POWER systems that are capable of processor hot-swapping. Why? Some of those systems have to be available 24/7/365 -- that means zero downtime; otherwise, people's lives could hang in the balance. For something as simple as a bad processor, the system should handle the failure appropriately and let someone remove the processor and replace it with a new one -- all while keeping the system available.

If you don't see the need for this in your environment, chances are you aren't in an enterprise.

At my work, do we have "massively parallel computers"? No.
At my work, are we doing research? No, yet we still have a need for near fault-tolerant systems, like the ones Sun and IBM provide.
 
Last edited:
  • #50
I still don't see how hot-swapping of processors is an essential feature. Most likely the computers you'll want to have up and running 24/7 would be servers. You can easily have multiple servers sharing the load, and when one of them goes down the rest can easily fill in for it while you repair it, especially with blade servers, which are so efficient and so small. IMO hot-swapping is a neat feature, but not an essential one.
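To illustrate the redundancy argument, here is a minimal sketch of a round-robin balancer that skips unhealthy backends (hypothetical server names; real setups use dedicated load balancers or DNS failover):

```python
import itertools

# Rotate requests across backends, skipping any marked unhealthy.
backends = {"web1": True, "web2": True, "web3": True}
cycle = itertools.cycle(sorted(backends))

def next_backend():
    for _ in range(len(backends)):
        b = next(cycle)
        if backends[b]:
            return b
    raise RuntimeError("no healthy backends")

backends["web2"] = False                    # one server goes down for repair...
print([next_backend() for _ in range(4)])   # ...the others absorb the load
```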
 