End of Moore's Law? Future of Transistors & Computing Tech

AI Thread Summary
The discussion centers on the future of transistors and computing technology in light of Moore's Law, which predicts that integrated circuit capacities double approximately every two years. While some argue that Moore's Law may be nearing its end, the consensus is that improvements in computing speed and cost will continue, albeit at a slower pace than in the past. Economic factors play a significant role, as companies must adapt their architectures to maintain competitiveness, often focusing on multi-core designs rather than sheer speed increases. Despite advancements, many consumer applications struggle to utilize multiple cores effectively, leading to diminishing returns on new hardware. Ongoing research into quantum effects and alternative materials may offer future pathways for transistor development, but practical applications remain distant.
Voq
What are the indications for the future development of transistors and general computing technology design, with Moore's law in mind? Are we going to redesign architectures for better efficiency, and what does the future bring?
 
Moore's law predicted that integrated circuit capacities would double every 2 years.
But many of the articles claiming an end to Moore's Law seem to be arguing that there will be no further improvements to IC densities at all.
Certainly, for the foreseeable future, computers will still keep getting faster and cheaper - but not at the pace we have seen over the last four decades.
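To get a feel for what four decades of that pace implies, here is a quick back-of-the-envelope sketch (an illustration, not from the article):

```python
# Back-of-the-envelope: doubling every 2 years compounds dramatically.
for years in (10, 20, 40):
    factor = 2 ** (years / 2)  # one doubling per 2-year period
    print(f"{years} years -> about {factor:,.0f}x the transistor count")
```

Forty years of doubling every two years works out to roughly a million-fold increase, which is why even a modest slowdown from that pace is so noticeable.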

This is mostly an issue of economics. And each new generation (of people) seems to come with a ravenous appetite for new devices - and a willingness to pay for them.

Here's an article looking at Moore's Law and at possible 2020-era 5 nm and 7 nm technologies.
https://www.scientificamerican.com/article/end-of-moores-law-its-not-just-about-physics/
 
If you go to Google News and search for Moore's Law, you will find a half dozen articles every month, in perpetuity. Some say it will end. Some say, "This is how we'll do it." The track record for the accuracy of those predictions is dismal, and it has been that way for the past 40 years.

As @.Scott said, it is an economic question. Any semiconductor company that does not expect to keep up with Moore's Law is not planning for their own survival. But the details of how they'll actually achieve that are almost always a surprise. If we predict every possible outcome, one of the predictions will be correct.
 
I work in the field, and we have been aware of the "end" of Moore's Law for a couple of decades now, although the "end" keeps getting pushed back "just a couple more years". One interesting data point is that the commercial lifetime of a semiconductor fabrication process is getting much longer. I did my PhD project 20 years ago. At the time, the lifetime of a process was more or less five years. I did my project using the then state-of-the-art 0.25um CMOS process. That process *still* hasn't been end-of-lifed, although it is getting tougher to access. The current dominant general-purpose process is 65 nm CMOS. It came out about 12 years ago and is still going strong for new design starts. This was unheard of 20 years ago. New processes are so much more expensive, yet don't offer enough to justify their use for many low- and medium-volume applications.

For high-volume stuff like commercial processors or FPGAs, sure, people use the cutting edge stuff, but a lot of products still use 65nm or 180nm for cost reduction.

Now, back to Moore's Law. It hasn't ended yet, but it is surely slowing down (see above). How are we dealing with it? The short answer is that architectures have changed. The way to advance used to be to go as fast as possible, and every couple of years we would double in speed. Those days are over. Most processors (commercial PC type) top out around 3 GHz or so, although you can buy special-purpose ones that are faster. Most of the improvement since then has come from new technology for multi-core structures, for example the new "Network on Chip" architecture that improves utilization of large processor arrays.

One other interesting tidbit: last week I went to a talk about design tools for superconducting circuits. They were pretty much dead (except for *really* niche areas) since vanilla CMOS was eating their lunch. Now that CMOS isn't improving much, people are interested in superconducting logic again. Fascinating.

You hear a lot about things like "carbon nanotubes" or "quantum computing" taking over, but as someone in the business, I can say those are way too far out in the future to be real competition for Moore.
 
The basic/original Moore's law was about the number/density of transistors, but for most of its existence that translated directly into performance. While the number/density of transistors is still growing at close to Moore's pace, it has been 15 years since that link to performance was broken for consumer/PC processors. Adding cores so manufacturers can still say the processing power is there doesn't help, because most applications can't efficiently make use of more than one.
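Amdahl's law (a standard result, not something from this thread) makes that point quantitative: if only a fraction p of a program's work can run in parallel, n cores give a speedup of at most 1/((1-p) + p/n). A minimal sketch:

```python
# Amdahl's law: the serial fraction (1 - p) caps the speedup no matter
# how many cores you add.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9):
    speedups = [round(amdahl_speedup(p, n), 2) for n in (1, 2, 4, 8, 16)]
    print(f"parallel fraction {p}: {speedups}")
```

Even with 90% of the work parallelizable, sixteen cores yield less than a 7x speedup, and a half-serial program tops out below 2x.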

I just spent nearly $2,000 on a new VR flight simulator rig that is all but unusable and only a minor/incremental improvement over the 8-year-old PC it replaced. So I don't agree that PCs are still getting faster and cheaper. I actually think this is a big problem, and I think Intel, Dell, and Microsoft would probably agree. One can say that cell phones are killing PC sales, but if there's no point in buying a new computer because it isn't much of an "upgrade", you don't need cell phones to explain sales grinding to a halt.
 
russ_watters said:
Adding cores so manufacturers can still say the processing power is there doesn't help, because most applications can't efficiently make use of more than one. ... I just spent nearly $2,000 on a new VR flight simulator rig that is all but unusable and only a minor/incremental improvement over the 8-year-old PC it replaced.
Why can't they use more than one core? And at least now you have a good rig :).

So, as it states: the number of transistors doubles every two years while the price halves. And transistors have gotten down to the size of about 50 atoms, and beyond that we can't be certain because probability comes into play and we are unable to work with the wave-like properties of electrons? What problem does that create? And can they be made denser by stacking them in layers? That would also mean there is a physical limit to the construction of a transistor, and there must be a certain number of them that can fit in a given volume. The technology we need to construct them must be limited somehow too.
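For rough scale, here is a quick sketch using the textbook silicon lattice constant of about 0.543 nm (the "50 atoms" figure above is the poster's own, and marketing node names like "7 nm" no longer map directly onto physical feature sizes):

```python
# How many silicon unit cells span a given feature size?
# Assumed value: silicon lattice constant ~0.543 nm.
SI_LATTICE_NM = 0.543

for feature_nm in (250, 65, 7):
    cells = feature_nm / SI_LATTICE_NM
    print(f"{feature_nm} nm feature ~ {cells:.0f} unit cells across")
```

At tens of unit cells across a device, it is plausible that quantum effects such as tunneling stop being negligible, which is the concern raised above.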
 
Voq said:
Why can't they use more than one core?
Because many of the most intensive activities people perform on their computers are linear; you can't take the steps out of order. As a counterexample, consider rendering a 3D movie. The movie is totally scripted, so you can chop it up into dozens of one-minute segments, have different processors or computers render them, and then assemble them into the final product. But in a video game, a processor can't jump ahead and pre-render a scene, because the user is constantly changing what is going to happen next.

Multiple processors are good when you have a lot of processor-intensive activities going on at once - which isn't common - but not very good when you have one processor-intensive activity going on - which is more common.
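A minimal sketch of the contrast (hypothetical segment counts and a toy game loop, not anyone's actual workload):

```python
# Independent, pre-scripted movie segments fan out across cores;
# an interactive game loop cannot, because each step needs the
# state produced by the previous one.
from concurrent.futures import ProcessPoolExecutor

def render_segment(segment_id: int) -> str:
    # Each segment is fully scripted, so any core can render any segment.
    return f"segment {segment_id} rendered"

def game_step(state: int, user_input: int) -> int:
    # The next frame depends on the current state and the user's input.
    return state + user_input

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:          # parallel: movie rendering
        print(list(pool.map(render_segment, range(8))))

    state = 0
    for user_input in (1, -2, 3, 5):             # serial: the game loop
        state = game_step(state, user_input)
    print("final game state:", state)
```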
And at least now you have a good rig :).
As I said, it ended up only being marginally better than what I had before. It was pretty disappointing.
 
russ_watters said:
Because many of the most intensive activities people perform on their computers are linear; you can't take the steps out of order.

I can understand that linearity and, at the same time, not quite.
Is there no way to split the task into two tasks and fuse the results in real time? It is probably because of my lack of knowledge of how a chip operates.
And with that linearity you need one core because it is your only way to get that function done. I am in a fog a little bit.
 
russ_watters said:
Because many of the most intensive activities people perform on their computers are linear; you can't take the steps out of order.

Don't you mean non-linear? I thought linear meant you can do things in any order.

Cheers
 
  • #10
cosmik debris said:
Don't you mean non-linear? I thought linear meant you can do things in any order.
I'm not sure I used the best word, but that isn't what I was after. What I meant is that the tasks have to be arranged along a single path (a line), unable to be subdivided because they depend on each other. I didn't want to use series vs. parallel because that describes the paths, not whether or not the program can use them.
 
  • #11
There are a lot of "linear" tasks, i.e. tasks whose steps depend on each other, that can most certainly take advantage of multicore processing.

For example, electric circuit simulation is essentially an exercise in matrix inversion. While at any given time you only have one (large) matrix describing the state of the circuit, the matrix can be partitioned, decomposed, and inverted using parallel algorithms and then recombined for each time step. So you have a linear progression using parallel processing. It is really effective (amazingly so).
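A toy sketch of that flow (assuming, for simplicity, a block-diagonal matrix so the pieces are independent; real circuit simulators use far more sophisticated decompositions):

```python
# Each "time step" solves one large linear system; the solve itself is
# parallelized by partitioning the matrix into independent blocks.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def solve_block(block_and_rhs):
    A_block, b_block = block_and_rhs
    return np.linalg.solve(A_block, b_block)

def solve_time_step(blocks, b):
    # Partition the right-hand side to match the blocks, solve the
    # pieces in parallel, then recombine into one solution vector.
    bounds = np.cumsum([blk.shape[0] for blk in blocks])[:-1]
    pieces = list(zip(blocks, np.split(b, bounds)))
    with ProcessPoolExecutor() as pool:
        parts = list(pool.map(solve_block, pieces))
    return np.concatenate(parts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Diagonally dominant blocks so each sub-system is well conditioned.
    blocks = [rng.standard_normal((50, 50)) + 50 * np.eye(50) for _ in range(4)]
    x = solve_time_step(blocks, rng.standard_normal(200))
    print(x.shape)  # one combined solution vector per time step
```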
 
  • #12
Voq said:
And transistors have gotten down to the size of about 50 atoms, and beyond that we can't be certain because probability comes into play and we are unable to work with the wave-like properties of electrons?
There is research going on in the field of mesoscopic transport, the regime where electronic devices get so small that quantum mechanical effects need to be considered together with the well-understood classical laws of electronics. Theoretically, it could lead to transistors far smaller than they are now, ones that could operate with a single electron.
At my university, there is a group doing research on the physics in this regime. The professor even wrote a book about it:
https://www.amazon.com/dp/3527409327/?tag=pfamazon01-20
Really interesting stuff.
 
  • #13
SchroedingersLion said:
There is research going on in the field of mesoscopic transport, the regime where electronic devices get so small that quantum mechanical effects need to be considered together with the well-understood classical laws of electronics.

People have been working on single-electron transistors for 30 years. While yes, I totally agree it is really interesting stuff, I get frustrated that every small advance in semiconductor technology and science is described as "could lead to transistors far smaller than they are now" or similar. I realize it is PR departments doing this, but it is one of my pet peeves.

Also, I would quibble with you about mesoscopic transport being the regime where quantum mechanical effects need to be considered. I would submit that the theory of semiconductors (that is, part of the band theory of solids) depends fundamentally on quantum mechanics, and if you don't understand (admittedly basic) quantum mechanics, you don't understand transistors. I spend a lot of my time fighting quantum mechanical effects in semiconductors (e.g. gate leakage, hot-electron effects) and have for my whole career.
 
  • #14
analogdesign said:
People have been working on single-electron transistors for 30 years. While yes, I totally agree it is really interesting stuff, I get frustrated that every small advance in semiconductor technology and science is described as "could lead to transistors far smaller than they are now" or similar.

Now you ruined the field for me :cry:
 