wuliheron
All the pieces are rapidly falling into place for a new generation of computers that will only vaguely resemble the ones we have today. The single biggest innovation to look forward to in next-generation hardware is what I like to call Frankenstein chips. Just as shrinking the components on a chip reduces latencies, you can stack one chip right on top of another and/or place them side by side on a silicon "interposer" to create one massive chip stitched together from parts like Frankenstein's monster. With the parts closer together, both power requirements and latencies drop. With each chip millimeters thick and the size of a fingernail, the sky is the limit if the heat and other technical issues can be handled economically. The most frustrating thing for me about this technology is that it is completely unpredictable: the expensive Frankenstein chip you buy today could become obsolete overnight.
Stacking may sound like pie in the sky to some, but servers and smartphones already use stacked chips, HP has offered to put 2 GB of its memristors on top of any existing chip, and stacks of up to eight conventional RAM chips can be made. The only remaining issue is cost, and last year Intel demonstrated a Hybrid Memory Cube prototype with roughly 1 Tb/s of bandwidth, while pictures of its upcoming Haswell chip suggest it was designed to use an interposer. A consortium of all the major manufacturers has already formed to establish a standard for Hybrid Memory Cubes so they can replace traditional RAM sticks as soon as possible. Using about 70% less space and roughly a sevenfold reduction in power, they are ideal for portable applications. Because each cube contains its own controller chip for input and output, the memory dies themselves can eventually be replaced with anything, including nonvolatile phase-change memory.
All the evidence indicates we're about to get slammed with a variety of Frankenstein chips and, as if that were not confusing enough, some of the individual chips will have heterogeneous architectures containing several different types of processors. As best I can determine, eight CPU cores is a practical minimum for processing full-blown matrices, while roughly 80-300 simplified GPU cores are ideal for anything from transcoding to physics to AI. For a desktop PC, that means more of the load normally placed on the video card could be handled on the APU and transferred over the currently underutilized PCIe 3.0 bus. Exactly how these heterogeneous architectures will evolve is anyone's guess but, thankfully, they will incorporate hardware-accelerated transactional memory, making them easier to program.
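To give a feel for why transactional memory makes these chips easier to program, here is a toy software analogue (plain Python, my own illustration, not any shipping hardware API): a transaction buffers its writes and commits them atomically only if none of the values it read changed in the meantime, retrying otherwise. The names (`Cell`, `atomic`) are hypothetical; real hardware does the conflict detection in the cache rather than with versions and a lock.

```python
import threading

# Toy software sketch of transactional-memory semantics (illustration only).
# Each shared cell carries a version number; a transaction records what it
# read, buffers what it wrote, and commits only if nothing it read changed.

class Cell:
    def __init__(self, value):
        self.value = value
        self.version = 0

# Hardware would rely on the cache-coherence protocol instead of this lock.
_commit_lock = threading.Lock()

def atomic(transaction):
    """Run transaction(read, write) repeatedly until it commits conflict-free."""
    while True:
        read_set = {}    # cell -> version observed
        write_set = {}   # cell -> buffered new value

        def read(cell):
            if cell in write_set:          # see our own pending write
                return write_set[cell]
            read_set[cell] = cell.version
            return cell.value

        def write(cell, value):
            write_set[cell] = value

        transaction(read, write)

        with _commit_lock:
            # Validate: nothing we read was modified behind our back.
            if all(c.version == v for c, v in read_set.items()):
                for c, v in write_set.items():
                    c.value = v
                    c.version += 1
                return                     # committed atomically
        # Conflict detected: another transaction won, so retry from scratch.

# Example: move 10 units between two accounts atomically.
a, b = Cell(100), Cell(0)

def transfer(read, write):
    write(a, read(a) - 10)
    write(b, read(b) + 10)

atomic(transfer)
print(a.value, b.value)  # 90 10
```

The programmer just writes the transfer as straight-line code; the retry loop replaces the careful lock-ordering discipline that plain mutexes would require.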
The only way I know to evaluate the power of such monstrosities is by measuring raw bandwidth capacity and, apparently, a lot of that bandwidth will soon be taken up by ultra-high-definition screens. OLEDs continue to trickle onto the market, but LCD manufacturers now have a way to produce ultra-high-definition panels almost as cheaply as current high-definition ones and are retooling their assembly lines as quickly as possible. To leverage the available bandwidth even further, the first video cards capable of using system RAM as well as VRAM for displaying graphics are already on the market, perhaps indicating the shape of things to come: computers where the distinctions between individual components become increasingly blurred as the emphasis shifts to maximizing overall bandwidth by designing every component to be flexible enough to assist the others in almost any task.