Maximum frequency of electrical pulses through wire?

SUMMARY

The maximum frequency of electrical pulses through a wire is significantly influenced by self-inductance and reactance, particularly at high frequencies. The formula for the self-inductance of a straight wire, L = 0.002 l [ln(2l/r) - 3/4] (L in microhenries, with length l and radius r in cm), shows that inductance grows with wire length and shrinks as the radius increases. As frequency rises, the inductive reactance X_L = 2πfL ≈ 6.28fL increases, impeding current flow. Transmission lines can handle frequencies above 3 GHz, but practical limits exist due to parasitic effects and material properties, with optical computing emerging as a viable alternative at terahertz and petahertz frequencies.
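As a sanity check on the summary's numbers, here is a minimal sketch (the wire dimensions and frequency are my own illustrative values, not from the thread) that evaluates the quoted self-inductance formula and the resulting reactance:

```python
import math

def wire_inductance_uH(length_cm, radius_cm):
    # Straight round wire at low frequency (the summary's formula):
    # L = 0.002 * l * (ln(2l/r) - 3/4), l and r in cm, L in microhenries.
    # Note the radius appears in the denominator: a fatter wire has LESS inductance.
    return 0.002 * length_cm * (math.log(2 * length_cm / radius_cm) - 0.75)

def inductive_reactance_ohms(freq_hz, L_uH):
    # X_L = 2*pi*f*L (the summary's 6.28*f*L), converting uH to H
    return 2 * math.pi * freq_hz * L_uH * 1e-6

# Illustrative example: 10 cm of wire with 0.025 cm radius, driven at 1 GHz
L = wire_inductance_uH(10.0, 0.025)
print(f"L   = {L:.3f} uH")
print(f"X_L = {inductive_reactance_ohms(1e9, L):.0f} ohms at 1 GHz")
```

Even a short jumper presents hundreds of ohms of reactance at 1 GHz, which is the summary's point about reduced current flow at high frequency.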

PREREQUISITES
  • Understanding of self-inductance and inductive reactance
  • Familiarity with transmission line theory
  • Knowledge of electromagnetic wave propagation
  • Basic principles of signal generation using piezoelectric crystals
NEXT STEPS
  • Research "Transmission line theory and high-frequency applications"
  • Study "Inductive and capacitive reactance in AC circuits"
  • Explore "Waveguide technology for high-frequency signal transmission"
  • Investigate "Optical computing and its advantages over traditional electronics"
USEFUL FOR

Electrical engineers, circuit designers, and researchers in high-frequency electronics and optical computing will benefit from this discussion.

  • #31
This is actually a very difficult question to answer because the concepts of 'putting on a signal' and 'wire' are somewhat vague. And, does one need to get the signal back or is just 'putting it on' sufficient?

But to me this is more of a question of the best driver and receiver one can possibly build. If the 'wire' has the bandwidth but no driver can produce that frequency then that bandwidth is wasted and that signal can never be 'put on' the wire.

And so what if a 'wire' is lossy. As long as the receiver can 'pick up' the attenuated signal you're fine. Receiver techniques and materials improve all the time. How can we state what a 'best' receiver will do?

So honestly I have no idea how to answer the question. I think the only thing that makes sense is an answer like Ivan Seeking's which unfortunately I am not qualified to comment on.
 
  • #32
The notion of a "maximum frequency of pulses" is actually ill-posed. The frequency of the pulses doesn't matter; it's the edges that matter: the rise time and fall time, which define the pulse itself.

Those edges come from the higher-frequency harmonics of the clock, so the upper bound depends not on the clock rate (frequency) of the pulses but on how many of the harmonics that define the edges can pass through undistorted. This is all Fourier-transform stuff.
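To make the Fourier point concrete, here is a small sketch (my own illustration, not from the thread) that reconstructs a 3 GHz square wave from its first few odd harmonics and measures the slope of the edge; the edge sharpens in direct proportion to how many harmonics the channel passes:

```python
import math

def square_partial(t, f0, n_odd):
    # Truncated Fourier series of a +/-1 square wave with fundamental f0,
    # keeping the first n_odd odd harmonics (1st, 3rd, 5th, ...)
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * (2 * k + 1) * f0 * t) / (2 * k + 1)
        for k in range(n_odd)
    )

def edge_slew(f0, n_odd, dt=1e-15):
    # Numerical slope of the reconstructed rising edge at t = 0;
    # analytically this works out to 8 * f0 * n_odd (per second)
    return (square_partial(dt, f0, n_odd) - square_partial(-dt, f0, n_odd)) / (2 * dt)

f0 = 3e9  # a 3 GHz clock
for n in (1, 3, 10):
    top_ghz = (2 * n - 1) * f0 / 1e9
    # crude edge-duration estimate: full 2-unit swing divided by peak slew
    print(f"{n:2d} harmonics (up to {top_ghz:.0f} GHz): edge ~ {2 / edge_slew(f0, n) * 1e12:.1f} ps")
```

With only the fundamental, the "edge" smears over roughly a quarter of the clock period; passing harmonics out to tens of GHz is what makes it look like a pulse at all.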

To answer the rest of the discussion: how do microprocessors propagate 3 GHz clocks?

The simple answer: they don't propagate 3 GHz on or off chip. On-chip PLLs multiply slower external clock rates up to the full speed used on the chip. You cannot clock data on and off chip at 3 GHz, only at much slower data rates. And even with these PLLs, you can't synchronize edges on and off chip any better than the slower external clock's edge accuracy.

The rule of thumb is that your channel needs roughly 10x the clock frequency in bandwidth to preserve "reasonable edge placement accuracy," so you need 30 GHz for a 3 GHz clock. There's a formula for the exact accuracy, but it depends on the logic design and the logic thresholds/margins.
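Plugging numbers into that rule of thumb (my own arithmetic, using the common first-order relation bandwidth ≈ 0.35 / rise time):

```python
clock = 3e9          # the 3 GHz clock from the example above
bw = 10 * clock      # rule of thumb: ~10x the clock in bandwidth
t_rise = 0.35 / bw   # standard first-order bandwidth/rise-time relation

print(f"needed bandwidth:  {bw / 1e9:.0f} GHz")
print(f"implied rise time: {t_rise * 1e12:.1f} ps")
```

A ~12 ps rise time is what the channel, package, and drivers would all have to support cleanly, which is the crux of the problem described next.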

Well, 30 GHz cannot be supported by modern digital IC packages, even with the 1990s "digital rediscovers it's all analog" era of "high-speed digital packaging." That's why it isn't done. Instead, PLL clock multipliers are used and data is clocked off-chip at much lower data rates, or you live with the proviso of no end-to-end synchronicity (so-called isochronous transfers: you have to take a transfer at one high-speed rate without handshaking back). You simply can't cheat the speed of light, which is also the maximum speed of information transfer.

So how does edge placement limit things? It works on-chip because, at less than a wavelength, you can still treat things as "lumpy": simple lumped components like resistors, capacitors, and inductors. Above that frequency (at the same physical dimensions) you can't trust things to be lumpy anymore. At that point it's distributed models only: return losses and S-parameters, as in RF/microwave. Digital logic levels are lumped-model approximations of analog components, which are themselves lumped approximations of distributed components and Maxwell's equations. So it breaks down fast.

It's akin to wavelength diffraction limits in optics and the limits on spatial resolution with lenses. Basically, your wavelength has to be longer than the physical dimensions of interest to keep things lumpy. Spatial resolution of intensity differences at discrete physical distances is a lumped-model approximation of wave-based light.

How about this: what's the average dimension of a microprocessor die? About 1 cm. What edge frequency does that work out to? About 30 GHz for edges to line up correctly from one side of the die to the other. That means at clock rates of 3 GHz you hit an edge-placement wall on the die itself: you are limited in how well you can synchronize logic gates on one side of the die with the other. In other words, microprocessor clock rates hitting a brick wall at 3 GHz around the year 2000 was no accident, nor was the emergence of "cores" in its place. A physical performance wall was hit.
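The die arithmetic in that paragraph can be checked in a few lines (free-space c is used here as an optimistic upper bound; real on-chip propagation is slower, which only makes the wall worse):

```python
c = 3.0e8             # m/s, free-space propagation as an upper bound
die = 0.01            # m: ~1 cm die edge, the post's estimate
t_cross = die / c     # one-way flight time for an edge across the die
f_edge = 1 / t_cross  # frequency whose full period equals one die crossing
f_clock = f_edge / 10 # apply the 10x edge rule from earlier in the post

print(f"die crossing time:  {t_cross * 1e12:.1f} ps")
print(f"edge-content limit: {f_edge / 1e9:.0f} GHz")
print(f"implied clock wall: {f_clock / 1e9:.0f} GHz")
```

The ~3 GHz figure falling out of a one-line speed-of-light calculation is exactly why the wall was physical, not a lapse in engineering effort.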

Synchronous logic design requires that all gates clock in perfect lock-step synchronization. That breaks down when you cross the edge-placement limit (or cross the die); the timing error is called "jitter." Going off-package is an even longer distance and thus supports even lower edge rates, and thus lower clock rates. How much of digital circuitry (microprocessors, memory, networking, and other ICs) relies on synchronous logic design? How about 99.99% of it!

This hits synchronous logic hard, but you can bypass it with asynchronous logic design. Guess what "microprocessor cores" essentially are? N cores are N synchronous logic/computing elements (each physically smaller, so within the jitter limit) connected by asynchronous links operating at lower frequency. It's a baby step toward moving to 100% asynchronous. And then all the issues of multiprocessing complexity, with semaphores, locks, deadlocking, etc., start to loom large.

It's all this that makes me highly dubious of brain-machine utopias like the "singularity." Complete garbage if you even partially assume computers that look like current computers. The reality is that the asymptotic limit of "asynchronous cores" will result in something very much like humans, but with all of the limitations in "interconnectivity" of minds (you are still alone, and the mutual unintelligibility of "Babel" will still exist) and in component reliability (no immortality or preservation of the unique (lumpy!) ego, sorry folks). Just another life form, one that might even find current humans too tedious and backwards to allow to exist. Best case.

BTW, ever wonder why we moved from PATA (parallel (!) ATA) to SATA (serial ATA)? Isn't parallel moving more data faster by putting the bits in parallel rather than stuffing them serially down a single pipe? The answer is exactly the same: edge-placement accuracy reached a limit with PATA. You can't assure that the transitions of adjacent parallel bit lines line up properly and still go faster; it's a speed limit. It turns out you can go faster by going serial, because there are no parallel clocks to synchronize. Transfers can be isochronous without worrying about specific timing: as long as the data gets there eventually, it's OK. Logic gates on opposite sides of a die face the same parallel-clock synchronization problem as PATA.

Of course, you face the speed limit again with edge placement, but now it's a simpler one-dimensional problem. Instead it's 100% jitter-limited, depending on the medium used to propagate the signal. This is why Apple and Intel developed the "Light Peak"/Thunderbolt optical interface: you can't go faster synchronously at "human-scale" distances anymore (the cable length from your computer to an external drive or display), even serially, using electrical transmission-line propagation, and still assure synchronization of clocks on opposite sides of the communication channel.
 
  • #33
Is it not inter-symbol interference, rather than "edges," that counts? The shape of a waveform as it approaches its value at the sampling time is much less relevant than the width of the 'eye'. Shannon's theorem is, in the end, what applies on and between chips.
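For reference, the Shannon point can be made quantitative with the capacity formula C = B·log2(1 + SNR); the bandwidth and SNR below are arbitrary illustrative values of my own, not measurements of any real link:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    # Shannon-Hartley: C = B * log2(1 + SNR), with SNR as a linear power ratio
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr)

# Illustrative only: a 30 GHz channel with 20 dB SNR at the receiver
C = shannon_capacity(30e9, 20)
print(f"capacity ~ {C / 1e9:.0f} Gbit/s")
```

The point being that capacity depends on bandwidth and noise at the sampling instant (the open eye), not on how pretty the edges look in between.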
 
  • #34
A fascinating read, jsgruszynski (Is that some kind of Polish name?). I'll probably need to read it a few times to fully absorb all that information. This whole thread is an interesting read (being the kind of question I usually ask), but that's a gem.
 
