Next Generation Computer Hardware Coming Soon

SUMMARY

The discussion centers on the imminent arrival of next-generation computer hardware, particularly the development of "Frankenstein chips," which use stacked chip technology to reduce latencies and power consumption. Key innovations include Intel's hybrid memory cubes, capable of 1 Tb/s transfer speeds, and the formation of a consortium to standardize these technologies. The integration of heterogeneous architectures with multiple processor types is expected to enhance processing capabilities, especially for tasks like AI and transcoding. Additionally, advancements in ultra-high-definition screens are set to leverage the increased bandwidth, blurring the lines between individual computer components.

PREREQUISITES
  • Understanding of hybrid memory technologies, specifically Intel's hybrid memory cubes.
  • Familiarity with heterogeneous architectures and their implications for processing tasks.
  • Knowledge of bandwidth capacity measurement and its relevance to computer performance.
  • Awareness of current trends in display technologies, particularly OLED and ultra high-definition screens.
NEXT STEPS
  • Research Intel's hybrid memory cubes and their impact on future computing architectures.
  • Explore the principles of heterogeneous computing and its applications in AI and graphics processing.
  • Investigate the latest advancements in ultra high-definition display technologies and their market implications.
  • Learn about the design and functionality of Frankenstein chips and their potential in reducing power consumption.
USEFUL FOR

This discussion is beneficial for hardware engineers, computer architects, and technology enthusiasts interested in the future of computing and the evolution of computer hardware technologies.

wuliheron
All the pieces are falling rapidly into place for the development of a new generation of computers that will only vaguely resemble the ones we have today. The single biggest innovation to look forward to in next-generation computer hardware is what I like to call Frankenstein chips. Just as shrinking the components on a chip reduces latencies, you can stack one chip right on top of another and/or place them side by side on a silicon "interposer" to create one massive chip sewn together from parts, like Frankenstein's monster. Because the parts sit closer together, both power requirements and latencies drop. With each chip millimeters thick and the size of a fingernail, the sky is the limit if the heat and other technical issues can be dealt with economically. The most frustrating thing for me about this technology is that it is completely unpredictable: the expensive Frankenstein chip you buy today could become obsolete overnight.

Stacking may sound like pie in the sky to some, but servers and smartphones already use stacked chips, HP has offered to put 2 GB of its memristors on top of any existing chip, and stacks of up to eight conventional RAM chips can be made. The only remaining issue is cost, and last year Intel demonstrated its new hybrid memory cubes with 1 Tb/s transfer speeds, while pictures of its upcoming Haswell chip indicate it was designed to use an interposer. A consortium of all the major manufacturers has already formed to establish a new standard for hybrid memory cubes so they can replace traditional RAM sticks as soon as possible. Using 70% less space and drawing seven times less power, they are ideal for portable applications. Because each cube contains its own controller chip for input and output, the memory chips themselves can eventually be replaced with anything, including nonvolatile phase-change memory.
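To put the quoted 1 Tb/s figure in perspective, a quick back-of-the-envelope comparison against a conventional memory channel helps. The DDR3-1600 figure below (~12.8 GB/s per 64-bit channel) is an assumption added for context, not a number from the post:

```python
# Compare the quoted 1 Tb/s hybrid memory cube bandwidth against a
# single conventional DDR3-1600 channel (assumed: ~12.8 GB/s).

HMC_BITS_PER_SEC = 1e12            # 1 Tb/s, as demonstrated
DDR3_1600_BYTES_PER_SEC = 12.8e9   # 64-bit channel at 1600 MT/s

hmc_bytes_per_sec = HMC_BITS_PER_SEC / 8   # bits -> bytes: 125 GB/s

speedup = hmc_bytes_per_sec / DDR3_1600_BYTES_PER_SEC
print(f"HMC: {hmc_bytes_per_sec / 1e9:.0f} GB/s "
      f"(~{speedup:.1f}x a single DDR3-1600 channel)")
```

Roughly a tenfold jump over a single traditional channel, which is why replacing RAM sticks outright was on the table.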

All the evidence indicates we're about to be slammed with a variety of Frankenstein chips and, as if that were not confusing enough, some of the individual chips will have heterogeneous architectures containing several different types of processor. As best I can determine, eight CPU cores is an ideal minimum for processing full-blown matrices, while roughly 80-300 simplified GPU cores are ideal for anything from transcoding to physics to AI. For a desktop PC, that means more of the load normally placed on the video card today can be handled on the APU and transferred over the currently underutilized PCIe 3.0 bus. Exactly how these heterogeneous architectures will evolve is anyone's guess but, thankfully, they will incorporate hardware-accelerated transactional memory, making them easier to program.
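For the "underutilized PCIe 3.0 bus" claim, it is worth working out what a full x16 link can actually move. This sketch uses the published PCIe 3.0 parameters (8 GT/s per lane, 128b/130b line coding):

```python
# Theoretical capacity of a PCIe 3.0 x16 link, per direction:
# 8 GT/s per lane, 128b/130b encoding, 16 lanes.

GT_PER_SEC = 8e9        # raw transfers per second, per lane
ENCODING = 128 / 130    # 128b/130b line-coding efficiency
LANES = 16

bytes_per_sec = GT_PER_SEC * ENCODING * LANES / 8
print(f"PCIe 3.0 x16: ~{bytes_per_sec / 1e9:.2f} GB/s per direction")
```

About 15.75 GB/s per direction before protocol overhead, which is far more headroom than most APU-to-GPU traffic used at the time.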

The only way I know to evaluate the power of such monstrosities is to measure raw bandwidth capacity and, apparently, a lot of that bandwidth will soon be taken up by ultra-high-definition screens. OLEDs continue to trickle onto the market, but LCD manufacturers now have a way to produce ultra-high-definition screens almost as cheaply as today's high-definition LCDs and are retooling their assembly lines as quickly as possible. To leverage the available bandwidth even better, the first video cards capable of using system RAM as well as VRAM for displaying graphics are already on the market, perhaps indicating the shape of things to come: computers where the distinctions between the individual components become increasingly blurred, as the emphasis shifts to maximizing overall bandwidth by designing all the components to be flexible enough to assist each other in almost any task.
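The display-bandwidth claim is easy to check with arithmetic. This sketch computes the uncompressed pixel bandwidth of a 3840x2160 panel versus 1080p, assuming 24-bit color at 60 Hz (the refresh rate and color depth are assumptions for illustration):

```python
# Raw (uncompressed) bandwidth a display demands, in bits per second.

def display_bandwidth(width, height, bits_per_pixel=24, refresh_hz=60):
    """Pixels per frame x bits per pixel x frames per second."""
    return width * height * bits_per_pixel * refresh_hz

uhd = display_bandwidth(3840, 2160)   # ultra high definition
hd = display_bandwidth(1920, 1080)    # current high definition

print(f"UHD: {uhd / 1e9:.2f} Gb/s, 1080p: {hd / 1e9:.2f} Gb/s "
      f"({uhd // hd}x)")
```

Quadrupling the pixel count quadruples the bandwidth, roughly 12 Gb/s uncompressed at 60 Hz, which is why UHD screens soak up so much of the new capacity.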
 
Seven years later, what is the state of these predictions? OLED certainly does not dominate the market. What is next?
 
