Remapping Computer Circuitry to Avert Impending Bottlenecks

AI Thread Summary
The discussion centers on the implications of advancements in computing infrastructure, particularly regarding algorithm adaptation. There is skepticism about the necessity to revise existing algorithms, similar to the adjustments required for parallel processors. The claim that multiple cores were introduced due to increased real estate is challenged; instead, it is argued that the shift was driven by the need to manage heat generation as clock speeds plateaued. The conversation highlights that significant breakthroughs in semiconductor technology, such as the introduction of copper and high-k/low-k dielectrics, have already occurred, suggesting that current technology remains sufficient for the time being. The potential changes in memory and processor integration are acknowledged, but it is noted that the underlying mathematics of algorithms may remain unchanged, allowing for continued job security in the field. The discussion emphasizes that while advancements may occur, the practical implementation of these changes may not necessitate a complete overhaul of existing algorithms.
Simfish (Gold Member)
I don't see any major change described in this article. Just a lot of buzzwords.

One thing in this article is completely wrong: multiple cores were not added because more real estate became available. That is ridiculous; you can make the die as large as you want. Multiple cores came into existence because of excessive heat generation: clock speeds could no longer be increased, so extra cores were added to increase processing power instead.

And the absence of a “major breakthrough” is simply because current technology was, and still is, good enough. “Computer designers” will not make the change; semiconductor manufacturers will, and only when it makes sense financially. Fabs are really expensive as it is.

EDIT: there were major breakthroughs, by the way. Copper interconnects were a huge one around the turn of the millennium, and high-k/low-k dielectrics were another. So Mr. "Computer Designer" should probably not talk about things he has no clue about.

Regarding the algorithms: the math is still the same, and the implementation does not really matter. If anything, adapting code to new hardware gives you job security.

EDIT2: even if what the article claims does happen, it's not clear you would have to change your algorithms. The idea of bringing memory and processor closer together is not new at all: your CPU already has level-1 and level-2 caches, and they are completely transparent. You can exploit them explicitly if you want to (e.g. with SSE prefetch instructions), but it's not required. Tighter memory-processor integration could be made transparent in the same way.
 