In terms of "maximum frequency of pulses" - this is actually ill-posed. Frequency of the pulses doesn't matter. It's the edges that matter: rise time and fall time, which define the pulse itself.
These edges are made of higher-frequency harmonics of the clock, so the upper bound depends not on the clock rate (frequency) of the pulses but on how many harmonics of that clock define the edges and how well those harmonics can pass undistorted. This is all Fourier transform stuff.
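To make the Fourier point concrete, here is a minimal sketch (Python with numpy; the numbers are chosen purely for illustration): a 3 GHz square-wave clock reconstructed from only its first few odd harmonics. The clock rate never changes; only the number of harmonics the channel passes does, and the edges slow down accordingly.

```python
import numpy as np

f_clock = 3e9                            # 3 GHz fundamental
t = np.linspace(0, 1 / f_clock, 20001)   # one clock period

def square_from_harmonics(t, f0, n_harmonics):
    """Reconstruct a square wave from its first n odd harmonics."""
    y = np.zeros_like(t)
    for k in range(1, 2 * n_harmonics, 2):   # k = 1, 3, 5, ...
        y += (4 / np.pi) * np.sin(2 * np.pi * k * f0 * t) / k
    return y

def rise_time_10_90(t, y):
    """Crude 10%-90% rise time of the first rising edge."""
    lo, hi = 0.1 * y.max(), 0.9 * y.max()
    t10 = t[np.argmax(y >= lo)]
    t90 = t[np.argmax(y >= hi)]
    return t90 - t10

for n in (1, 3, 5, 11):
    y = square_from_harmonics(t, f_clock, n)
    top = (2 * n - 1) * f_clock          # highest harmonic included
    print(f"{n:2d} harmonics (up to {top/1e9:5.1f} GHz): "
          f"rise time ~ {rise_time_10_90(t, y)*1e12:6.1f} ps")
```

The more harmonics you keep (that is, the more bandwidth the channel supports), the faster the edges; throw the harmonics away and the "3 GHz clock" is still 3 GHz, it just has mush where its edges used to be.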
To answer the rest of the discussion: how do microprocessors propagate 3 GHz clocks?
The simple answer: they don't propagate 3 GHz on or off chip. The board distributes a slower external clock, and on-chip PLLs multiply it up to the full speed used internally. You cannot clock data on and off chip at 3 GHz, only at much slower data rates. And even with these PLLs, you can't synchronize edges on and off chip any better than the slower external clock's edge accuracy.
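As a toy illustration of that arrangement (the 100 MHz reference and 30x multiplier are assumed round numbers, not any particular part's figures):

```python
# A slower reference clock crosses the package pins; an on-chip PLL multiplies
# it up. The multiplied clock only ever exists on the die and never leaves it.
ref_clock_hz = 100e6          # assumed external reference at the package pins
pll_multiplier = 30           # assumed on-chip PLL multiplication factor
core_clock_hz = ref_clock_hz * pll_multiplier

print(f"External reference: {ref_clock_hz/1e6:.0f} MHz")
print(f"On-chip core clock: {core_clock_hz/1e9:.1f} GHz (internal only)")
```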
The rule of thumb is that the bandwidth supporting your rise/fall times must be about 10x your clock frequency to preserve "reasonable edge placement accuracy", so you need roughly 30 GHz of bandwidth for a 3 GHz clock. There's a formula for the exact accuracy, but it depends on the logic design and the logic thresholds/margins.
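A back-of-the-envelope sketch of that rule of thumb, using the standard single-pole rise-time approximation t_r ≈ 0.35/BW (the 10x factor is the rule of thumb above, not an exact spec):

```python
def required_bandwidth(clock_hz, factor=10):
    """Bandwidth needed to keep 'reasonable edge placement accuracy' (rule of thumb)."""
    return factor * clock_hz

def rise_time_from_bandwidth(bw_hz):
    """10%-90% rise time of a single-pole system: t_r ~ 0.35 / BW."""
    return 0.35 / bw_hz

clock = 3e9
bw = required_bandwidth(clock)
print(f"3 GHz clock -> ~{bw/1e9:.0f} GHz of bandwidth needed")
print(f"which corresponds to edges of roughly {rise_time_from_bandwidth(bw)*1e12:.0f} ps")
```

In other words, a "3 GHz" clock with clean, well-placed edges is really asking the interconnect to carry content out to ~30 GHz.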
Well, 30 GHz cannot be supported by modern digital IC packages, even with the 1990s "digital rediscovers it's all analog" push toward "high speed digital packaging". That's why it isn't done. Instead, PLL clock multipliers are used and data is clocked off-chip at much lower data rates, or you live with the proviso of no end-to-end synchronicity (so-called isochronous transfers: you have to accept a transfer at one high-speed rate without handshaking back). You simply can't cheat the speed of light, which is also the maximum speed of information transfer.
So how does edge placement limit things? It works on-chip because, being less than a wavelength across, you can still treat things as "lumpy", with simple lumped components like resistors, capacitors and inductors. Above that frequency you can't trust things to be lumpy anymore (at the same physical dimensions); it's distributed models only, like the return losses and s-parameters of RF/microwave. Digital logic levels are a lumped-model approximation of analog components, which are themselves lumped-model approximations of distributed components and Maxwell's equations. So it breaks down fast.
It's akin to the wavelength diffraction limit in optics and the limits on spatial resolution with lenses. Basically your light wavelength has to be longer than the physical dimensions of interest to keep things lumpy. Spatial resolution of differences in intensity at discrete physical distances is a lumped-model approximation of wave-based light.
How about this: what's the typical dimension of a microprocessor die? ~1 cm. What edge frequency does that work out to? ~30 GHz for edges to line up correctly from one side of the die to the other. So at clock rates around 3 GHz you hit an edge placement wall on the die itself: you are limited in how well you can synchronize logic gates on one side of the die with gates on the other. In other words, microprocessor clock rates hitting a brick wall at 3 GHz around the year 2000 was not an accident, nor was the emergence of "cores" in their place an accident. A physical performance wall was hit.
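The arithmetic behind that estimate, for anyone who wants to check it (free-space speed of light assumed; on-die signals actually propagate slower than c, which only lowers the wall, never raises it):

```python
c = 3e8                     # speed of light, m/s
die_size_m = 0.01           # ~1 cm across the die

f_wall_hz = c / die_size_m  # frequency whose wavelength equals the die size
print(f"Wavelength equals the die size at ~{f_wall_hz/1e9:.0f} GHz")
print(f"With edges needing ~10x the clock, that caps the clock near "
      f"{f_wall_hz/10/1e9:.0f} GHz")
```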
Synchronous logic design requires all gates to clock in perfect lock-step synchronization. That gets broken when you cross the edge placement limit (or cross the die); the resulting timing error is called "jitter". Going off-package is an even longer distance and thus supports even lower edge rates, and thus lower clock rates. How much of digital circuitry (microprocessors, memory, networking, etc.) involves synchronous logic design? How about 99.99% of it!
That hits synchronous logic hard, but you can bypass it with asynchronous logic design. Guess what "microprocessor cores" essentially are? N cores are N synchronous (but physically a bit smaller, so under the jitter limit) logic/computing elements connected by asynchronous links operating at lower frequency. It's a baby step toward moving to 100% asynchronous design. And all the issues of multiprocessing complexity (semaphores, locks, deadlocking, etc.) start to loom large.
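As a software-level analogy only (threads and a queue standing in for cores and an asynchronous link; nothing here is how the silicon is actually built), the relationship looks like this: each "core" runs its own tight loop at its own pace and hands results across a buffered channel with no shared clock, a handshake rather than lock-step.

```python
import threading
import queue

link = queue.Queue(maxsize=4)   # the asynchronous link between two "cores"

def producer_core():
    for i in range(5):
        link.put(i * i)         # no shared clock: just enqueue when ready
    link.put(None)              # sentinel: end of stream

def consumer_core():
    while True:
        item = link.get()       # blocks until data arrives: a handshake, not a clock edge
        if item is None:
            break
        print(f"consumer received {item}")

t1 = threading.Thread(target=producer_core)
t2 = threading.Thread(target=consumer_core)
t1.start(); t2.start()
t1.join(); t2.join()
```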
It's all this that makes me highly dubious of brain-machine utopias like the "singularity". Complete garbage if you even partially assume computers that look like current computers. The reality is that the asymptotic limit of "asynchronous cores" will actually result in something very much like humans, but with all of the limitations in terms of "interconnectivity" of minds (you are still alone, and the mutual unintelligibility of "Babel" will still exist) and component reliability (no immortality or preservation of the unique (lumpy!) ego, sorry folks). Just another life form that might even find current humans too tedious and backwards to allow to exist. Best case.
BTW, ever wonder why we moved from PATA (parallel (!) ATA) to SATA (serial ATA)? Isn't parallel supposed to move more data faster by putting the bits in parallel rather than stuffing them serially down a single pipe? The answer is exactly the same: edge-placement accuracy reached its limit with PATA. You can't assure the transitions of adjacent parallel bit lines line up properly and still go faster; it's a speed limit. It turns out you can go faster by going serial because there are no parallel clocks to synchronize. Transfers can be isochronous without worrying about specific timing: as long as it gets there eventually, it's OK. Logic gates on opposite sides of a die face the same parallel clock synchronization problem as PATA.
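To put rough numbers on the parallel-versus-serial trade (the 16-bit width, 66 MHz word rate and 10% skew budget are illustrative assumptions, not the actual PATA/SATA specs):

```python
# Compare the timing budget of a parallel bus against a serial link carrying
# the same throughput. The parallel bus must keep all of its lines aligned to
# within a fraction of the word period; the serial link has a shorter bit cell
# but only one edge stream to keep clean, with the clock recovered from the data.
bus_width = 16
word_rate_hz = 66e6                      # assumed parallel word rate
parallel_word_period = 1 / word_rate_hz
skew_budget = parallel_word_period / 10  # assumed: lines must match to ~10% of the period

serial_bit_rate = bus_width * word_rate_hz   # same payload pushed down one lane
serial_bit_period = 1 / serial_bit_rate

print(f"Parallel: {bus_width} lines at {word_rate_hz/1e6:.0f} MHz, "
      f"lane-to-lane skew budget ~{skew_budget*1e9:.2f} ns across the whole cable")
print(f"Serial:   1 lane at {serial_bit_rate/1e9:.2f} Gb/s, "
      f"bit period {serial_bit_period*1e9:.2f} ns, no lane-to-lane alignment at all")
```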
Of course, you face the speed limit again with edge placement, but it's a simpler one-dimensional problem now; instead it's 100% jitter-limited by the medium used to propagate the signal. This is why Apple and Intel developed the "Light Peak"/Thunderbolt optical interface: you can't go faster synchronously at "human scale" distances anymore (cable lengths from your computer to an external drive or display), even serially, using electrical transmission-line propagation, and still assure synchronization of the clock at opposite ends of the communication channel.