russ_watters
Mentor
The limiting factor is economics - what the ISP chooses to sell you. It would be expensive and few people would buy it, so why bother selling it?

So, you are telling me that if a quite wealthy and generous individual wanted to, they could start a company that could mass-produce 10 Terabit/s download speeds? Because the way I see it, that would revolutionize communications.

russ_watters
Mentor
So, you are telling me that if a quite wealthy and generous individual wanted to, they could start a company that could mass-produce 10 Terabit/s download speeds?
Sure.
Because the way I see it, that would revolutionize communications.
How so? The typical hard drive can only write data at about 50 MB/sec on a good day and HDTV is only about 5, so what use is vastly more bandwidth? What are we downloading that needs that wide of a pipe?
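A quick back-of-the-envelope makes the mismatch concrete (a sketch in Python, using the rough figures from this thread):

```python
link_bps = 10e12         # a hypothetical 10 Tbit/s download link
hdd_write_Bps = 50e6     # ~50 MB/s hard drive write speed
hdtv_Bps = 5e6           # ~5 MB/s for an HDTV stream

link_Bps = link_bps / 8  # convert bits to bytes

# The pipe outruns anything on the receiving end by orders of magnitude:
print(link_Bps / hdd_write_Bps)  # 25000.0 -> the disk is the bottleneck
print(link_Bps / hdtv_Bps)       # 250000.0 -> enough for 250,000 HDTV streams
```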

Well, nothing I suppose, yet... however, technology seems to be expanding exponentially (perhaps an exaggeration), and with up-and-coming quantum computers, we may be sending all sorts of large files that machines like these will be able to process much faster.

K^2
technology seems to be expanding exponentially (perhaps an exaggeration)
Not an exaggeration. The growth is actually exponential. The name for it is "Moore's Law". Feel free to look it up.
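For a sense of how fast that compounds, a sketch (the 1971 starting point is the oft-cited Intel 4004 transistor count; doubling every two years is the usual statement of the law):

```python
transistors = 2300  # Intel 4004 (1971), the usual starting point
year = 1971
while year < 2011:
    transistors *= 2  # one doubling...
    year += 2         # ...roughly every two years

# Twenty doublings later: about 2.4 billion transistors,
# in the right ballpark for high-end chips circa 2010.
print(year, transistors)  # 2011 2411724800
```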

russ_watters
Mentor
....except that Moore's law as applied to processor power is over. It ended about 5 years ago! The reason is heat dissipation.

Isn't Moore's law about transistor density/count, not processing power?

With quad core and an emphasis on multitasking, effective processing power is still going up. Solutions are becoming all the more clever compared to "add moar transistors"!

I remember my P166 MMX didn't even have a fan, just a heat sink.

russ_watters
Mentor
Yes, the original Moore's law was about transistor count, which is why I said "as applied to". Transistor count did, for a long time, directly correlate with increased processing power, so K^2's characterization was a workable interpretation of Moore's law for a long time. It has also been applied to things like hard drive space: http://en.wikipedia.org/wiki/Moore's_law#Other_formulations_and_similar_laws

With quad core and an emphasis on multitasking, effective processing power is still going up.
I would argue that you have it backwards. Multi-cores are being utilized because single-core processing power has stagnated, but it isn't necessarily easy to utilize multiple cores, so the usable (IMO, usable=effective) power of a processor is not increasing proportional to the increase in transistors anymore.
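The point about usable multi-core power can be made concrete with Amdahl's law (a sketch; the 10% serial fraction is an arbitrary illustration, not a measurement of any real workload):

```python
def amdahl_speedup(cores, serial_fraction):
    """Best-case speedup on `cores` cores when `serial_fraction`
    of the work cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even with only 10% serial work, a quad core is nowhere near 4x,
# and the speedup caps out near 1/0.10 = 10x no matter how many cores:
for n in (1, 2, 4, 8, 1000):
    print(n, round(amdahl_speedup(n, 0.10), 2))
```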

mheslep
Gold Member
Sure. How so? The typical hard drive can only write data at about 50 MB/sec on a good day and HDTV is only about 5, so what use is vastly more bandwidth? What are we downloading that needs that wide of a pipe?
Agreed. I've made that point before in various system designs where folks want to vastly over-specify the communication pipe bandwidth on the notion that in the future they'll be able to upgrade the processing function and use the fat pipe. Perhaps ten years ago that made sense, but not any more: to my mind there's an upper limit on how much data humans can make good use of, probably no more than a couple of video-bandwidth equivalents. Of course, if 1000 video bandwidths is almost as cheap as one, fine, but otherwise people just can't handle more than a couple at a time, even if the processing portion of a system can. Still, there are plenty of fat-pipe providers out there trying to skate uphill on the subject.

Well, nothing I suppose, yet... however, technology seems to be expanding exponentially (perhaps an exaggeration), and with up-and-coming quantum computers, we may be sending all sorts of large files that machines like these will be able to process much faster.
Certainly there are cases for systems that receive and filter large data streams down to something usable by humans. But these cases are relatively rare; it doesn't make sense to design a mass-market telecommunication system where the default is to deliver huge data streams all the way out to every end user, and only then filter them down.

There is the Internet2: http://www.internet2.edu/ which links various research facilities around the US... here's a map: http://www.internet2.edu/pubs/Internet2CombinedInfrastructureTopology.pdf [broken link]. But I couldn't quickly find bandwidth numbers.

I have a friend who is working on creating distributed immersive environments, both for art and science. And I can imagine places like the Large Hadron Collider being able to stuff the pipe pretty well at times.

Evo
Mentor
So, you are telling me that if a quite wealthy and generous individual wanted to, they could start a company that could mass-produce 10 Terabit/s download speeds? Because the way I see it, that would revolutionize communications.
They would need to be quite wealthy and generous. The equipment needed would be astronomically expensive, and as was pointed out, you couldn't make use of it on your home computer. Probably not going to see that for a while.

Hm, well it just seems like we are not being very efficient if we have the capabilities to send data as fast as our computers can process it, yet we are not utilizing this. I want to be able to download and upload information with very short wait times, like video game patches and such, keeping ping below 50 ms regardless of the server location. I want this for Christmas...

I want this for Christmas...
To quote Billy Bob Thornton:
Wish in one hand, sh*t in the other. See which one fills up faster.

Mech_Engineer
Gold Member
You know Google is getting ready to announce the winner of their residential gigabit fiber internet project. They plan to run gig fiber to a minimum of 50,000 homes. Look it up.

Hello, I was just having a discussion with my father about download speeds. From what I have researched, some places are getting up to 128 MB/s download speeds; these include Japan and Korea as far as I am aware.
It's possible to get a little over 100 Mbit/s fiber internet in a very few places in the US, from what I understand. It's very expensive though...

Also, I have researched that certain labs have produced 14 Terabit/second download speeds, which is remarkably faster than my home ISP's 56k (LOL, JK).
Can you provide a link? I'm not aware of any technology capable of that kind of speed (let alone an internet connection), from what I understand we're just starting to get into 100Gbit fiber for exotic networking connections...

I was curious, what is the limiting factor in bandwidth and download speeds. And if we have successfully produced 14Tbps download speeds, what are the difficulties in mass producing this communication speed?
The real limitation is provider infrastructure. Imagine the hardware needed to provide 14 Tbps to 1,000 or 10,000 homes...

Well, to be honest with you, I can't imagine that at all, as I'm not familiar with the hardware it takes to provide such internet capabilities. What kind of hardware runs fibre optics? Obviously I'm speaking independently of the fibres and the light pulses themselves... what other hardware is involved?

Also, is it possible to provide your own home ISP? For instance, what would be needed if I wanted to build my own internet connection at, say, 10 MB/s download speeds, keeping my pings under 100 ms? Assume I am rich. Cost estimate?

Also, is it possible to provide your own home ISP? For instance, what would be needed if I wanted to build my own internet connection at, say, 10 MB/s download speeds, keeping my pings under 100 ms? Assume I am rich. Cost estimate?
Villages in the UK are doing just this. It is far from cheap though.

They pay BT (who own the fibre optic networks) to send a link to their village; they then set up their own system to distribute it among the people living there.

Interesting. What kind of hardware is necessary to start a fibre optic internet service for just a small group of homes, or even just one home?

russ_watters
Mentor
Agreed. I've made that point before in various system designs where folks want to vastly over-specify the communication pipe bandwidth on the notion that in the future they'll be able to upgrade the processing function and use the fat pipe. Perhaps ten years ago that made sense, but not any more: to my mind there's an upper limit on how much data humans can make good use of, probably no more than a couple of video-bandwidth equivalents. Of course, if 1000 video bandwidths is almost as cheap as one, fine, but otherwise people just can't handle more than a couple at a time, even if the processing portion of a system can. Still, there are plenty of fat-pipe providers out there trying to skate uphill on the subject.

Certainly there are cases for systems that receive and filter large data streams down to something usable by humans. But these cases are relatively rare; it doesn't make sense to design a mass-market telecommunication system where the default is to deliver huge data streams all the way out to every end user, and only then filter them down.
I had a thought-process on the issue trying to figure out why we would want more than, say, 50 MB/sec (hard drive speed)...

First, I'd start by thinking about what the end use of a computer is in basic terms (again, we're talking about household use). The purpose is to display a series of ~60 fps images on the screen in response to inputs... sometimes accompanied by sound. Whether that's a game, a video clip or just the response rate of a text editor, that's the data a computer outputs. The limit I see on usefulness is the data throughput required for that - which is roughly 5 MB/sec, blu-ray data speed. Nothing a network can do faster than that can have an immediate impact on the user.
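That ~5 MB/sec figure is easy to sanity-check (a sketch; 1080p60 at 3 bytes/pixel and blu-ray's ~40 Mbit/s peak video bitrate are the assumed round numbers):

```python
# Raw, uncompressed screen output at 1080p, 60 fps, 24-bit color:
width, height, fps, bytes_per_pixel = 1920, 1080, 60, 3
raw_Bps = width * height * fps * bytes_per_pixel
print(raw_Bps / 1e6)         # 373.248 -> ~373 MB/s uncompressed

# Blu-ray-class compression delivers the same perceived picture at:
bluray_bps = 40e6            # ~40 Mbit/s peak video bitrate
print(bluray_bps / 8 / 1e6)  # 5.0 -> roughly the 5 MB/s ceiling
```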

Taking that further, right now companies are trying to bring the processing back to their servers, allowing only a remote UI, which protects the program and enables a pay-as-you-go model. With the above throughput, they could apply that to virtually anything, including video games: you could play a graphically intense video game without much of a processor if you had a fast internet connection and the remote server provided the processing power, only sending back the pictures. Still, no need for more than 5 MB/sec.

Now there are limited examples where people might want more processed faster, such as with video editing or in my case, astrophotography image enhancement (this takes a surprising amount of resources, on par with video editing). I suppose it would be cool to edit a full-length blu-ray quality video and have it render in three seconds, sending it to a remote company and back. But I'm not sure companies are going to want to use their processing power in that way....and we still come back to the hard drive limitation of ~50 MB/sec.

russ_watters
Mentor
Hm, well it just seems like we are not being very efficient if we have the capabilities to send data as fast as our computers can process it, yet we are not utilizing this.
What you were suggesting is much faster than our computers can handle the data. It is inefficient to have something you don't use!
I want to be able to download and upload information with very short wait times, like video game patches and such, keeping ping below 50 ms regardless of the server location. I want this for Christmas...
There's another issue here: for things like that, the download rate has as much to do with the server providing the data as with the available pipeline. And again, what you described and can utilize is several orders of magnitude slower than what you suggested should be available.
Well, to be honest with you, I can't imagine that at all, as I'm not familiar with the hardware it takes to provide such internet capabilities. What kind of hardware runs fibre optics? Obviously I'm speaking independently of the fibres and the light pulses themselves... what other hardware is involved?
Well, try imagining the support equipment. 1 terabit/second is 125,000 megabytes per second. How many 50-megabyte-per-second hard drives operating in parallel would you need to serve a single 1 terabit/sec user?
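That arithmetic, sketched in Python with the same round numbers:

```python
link_bps = 1e12          # 1 Tbit/s link
hdd_Bps = 50e6           # 50 MB/s per hard drive

link_Bps = link_bps / 8  # bits -> bytes
print(link_Bps / 1e6)    # 125000.0 megabytes per second

drives = link_Bps / hdd_Bps
print(drives)            # 2500.0 drives in parallel, just to feed one such user
```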

Russ_watters, so if I hypothetically had 2,500 50 MB/s hard drives... how would I set that up to operate my home network? Obviously a hypothetical question :) And what if (this is actually practical) I wanted to have 1 GB/s download; do you have any idea of the cost if I set it up myself at my house?

Also, what kind of internet connection would I need to be operating to have the bandwidth available for 1 GB/s down?

russ_watters
Mentor
Russ_watters, so if I hypothetically had 2,500 50 MB/s hard drives... how would I set that up to operate my home network?
Well, it's just a large room full of refrigerator-sized racks of relatively normal hard drives, connected together by a relatively basic network -- except for the pipeline out of the room.
Obviously a hypothetical question :) And what if (this is actually practical) I wanted to have 1 GB/s download; do you have any idea of the cost if I set it up myself at my house?
Here's a link that says 155 megabits will run you $20,000-$45,000 a month: http://www.broadbandnational.com/business/services_oc3.asp
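Scaling that quoted OC-3 price linearly gives an order-of-magnitude feel for a 1 GB/s (8 Gbit/s) pipe (a sketch; real circuit pricing does not scale linearly, so treat this as a rough guess only):

```python
oc3_mbps = 155
oc3_cost_low, oc3_cost_high = 20_000, 45_000  # USD/month, from the link above

target_mbps = 8_000  # 1 GB/s = 8 Gbit/s
scale = target_mbps / oc3_mbps

# Naive linear scaling -> roughly $1.0M to $2.3M per month:
print(round(oc3_cost_low * scale), round(oc3_cost_high * scale))
```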