Petabyte Storage: Is it Necessary for the Everyday User?

In summary, at the moment a petabyte of storage on a single drive seems both out of reach and unnecessary. It's worth noting that this is generally the mindset we have before each new era in technology, but I'm optimistic that the next storage-medium breakthrough will bring us petabytes.
  • #1
MostlyHarmless
So, in the context of computing for even the avid techie, a petabyte (1,024 terabytes) of storage on a single drive seems both out of reach and unnecessary. I mean, on my computer I've got 1080p (legal) downloads of movies, several very large games, and a comfortable amount of music, and I'm sitting pretty with my 500 GB HDD. Of course, plenty of people use much more than me: 3, 4, 5 TB?

And obviously, this is generally the mindset we have before each new era in technology. (I'm reminded of a story about a guy who had 1 TB of some questionable media; 5 years ago I thought that was an absolutely absurd, almost comical amount of data.)

But now it seems like we should be reaching a limit on the amount of data the above-average user can actually use.

I've seen 4K UHD, and (to me) it's barely distinguishable from real life. A movie in 4K UHD, uncompressed, would require your computer to display ~1 GB/s (45 MB/frame at 24 frames/s) (http://www.zdnet.com/why-4k-uhd-television-is-nothing-but-a-ces-wet-dream-7000009506/). The best, most expensive solid-state drive I could find has a read speed of about 1.8 GB/s, has a 1 TB capacity (a 2.5-hour movie, according to the above link, is about 10 TB), and costs about $1,200. So (I could be wrong here), if you had an SSD with more common speeds (500-800 MB/s), your computer would actually have to buffer as if you were trying to watch a long cat video over 3G.

My point being: aside from ubiquitous 4K UHD being impractical under the current infrastructure, EVEN IF 4K UHD (which requires massive amounts of data, and massive data capabilities) became commonly used, a petabyte STILL seems like overkill.

If you filled a petabyte drive with nothing but movies in 4K, it would take about a week and a half of non-stop viewing just to watch it all, assuming the hard drive was fast enough.
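(For anyone who wants to check my arithmetic, here's a rough sketch; the 45 MB/frame figure comes from the ZDNet link above, and everything else is derived from it, so treat these as thread numbers rather than specs:)

```python
# Back-of-the-envelope arithmetic for the uncompressed 4K claims above.
# The 45 MB/frame figure is from the linked ZDNet article; the rest
# follows from it.
MB, TB, PB = 10**6, 10**12, 10**15

frame_bytes = 45 * MB                   # one uncompressed 4K frame
data_rate = frame_bytes * 24            # 24 frames/s -> ~1.08 GB/s
movie_bytes = data_rate * 2.5 * 3600    # a 2.5-hour movie -> ~9.7 TB

movies_in_pb = PB / movie_bytes         # ~103 movies fit in a petabyte
viewing_days = movies_in_pb * 2.5 / 24  # ~10.7 days of non-stop viewing

print(f"data rate:   {data_rate / 10**9:.2f} GB/s")
print(f"movie size:  {movie_bytes / TB:.2f} TB")
print(f"1 PB holds about {movies_in_pb:.0f} movies "
      f"({viewing_days:.1f} days of viewing)")
```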

I have no doubts that we will see a petabyte storage device eventually, but will the merely above-average user ever need that much storage?
 
  • #2
MostlyHarmless said:
... this is generally the mindset we have before each new era in technology ...

Seems to me you've answered your own question. We don't know why it will be common 10 years from now, but it probably will.
 
  • #3
I remember when the first 5 MB HD hit the streets for Apples and PCs (five whole megabytes on one disk!). The drive itself cost $3,500, more than the computer to which it would be attached. Who would need such a large amount of storage on one machine?

Well, the drives didn't stay at 5 MB capacity for very long, and the price kept dropping, too. Now it's not unusual to deal with a single data file 5 MB in size, and applications often have several files this size in their installation.

I bought my first 1 TB drive in 2008, and the only reason you don't see many drives much larger than this is the limitations of the operating systems in use, along with other practical considerations. I recently ran CHKDSK /R on that same 1 TB drive, and it took the better part of 3 days' continuous running to get through all 5 stages of the operation, with the drive at about 75% used space.

IMO, the familiar electromechanical device known as the HDD has reached a practical limit on its maximum size. Reliable and convenient storage above a few terabytes will come in fully solid-state form, which has a ways to go before the price-capacity curve bends down far enough for drives of that size to become common and practical. Right now, SSD units price out at just under $0.50/GB, so prices must drop quite a bit further for the SSD to become price-competitive with the magnetic HDD.
 
  • #4
Right, like I mentioned, a 1 TB SSD costs nearly $1,000, as opposed to a 1 TB HDD at about $75. Computing power right now isn't ready for data on the order of a petabyte. I feel like we are still another storage-medium breakthrough away from petabytes.
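To put a number on that gap, a quick back-of-the-envelope sketch using the rough per-terabyte prices quoted above (assumed 2014-era figures from this thread, not current quotes):

```python
# What a raw petabyte would cost at the per-terabyte prices quoted above
# (~$1,000/TB for SSD, ~$75/TB for HDD), ignoring enclosures,
# controllers, and redundancy.
PB_IN_TB = 1024  # binary convention, as in post #1

ssd_per_tb = 1000
hdd_per_tb = 75

print(f"1 PB on SSDs: ${ssd_per_tb * PB_IN_TB:>9,}")  # $1,024,000
print(f"1 PB on HDDs: ${hdd_per_tb * PB_IN_TB:>9,}")  # $   76,800
```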
 
  • #5
MostlyHarmless said:
Right, like I mentioned, a 1 TB SSD costs nearly $1,000, as opposed to a 1 TB HDD at about $75. Computing power right now isn't ready for data on the order of a petabyte. I feel like we are still another storage-medium breakthrough away from petabytes.


Price Drop Alert:

http://www.newegg.com/Product/Produ...&IsPowerSearch=1&cm_sp=SSD-_-VisNav-_-UpTo1TB

http://www.newegg.com/Product/Product.aspx?Item=9SIA29P1EC5324

A Samsung 840 EVO 1TB SSD is now either $499.99 or $439.99 (there seems to be some confusion of prices at NewEgg.)
 
  • #6
SteamKing said:
Price Drop Alert:
Even more amazing, the price in GBP on amazon.com is about the same as that newegg USD price.

Usually computer hardware and software vendors don't seem to have got their heads around the concept of exchange rates. They just assume £1 = $1.
 
  • #7
I'm reading about this "Hyper-CD"; there seems to be some argument over whether it actually exists. It seems obvious that if it really were what it's described as, we'd all have one. So I'm inclined to think that it's not what it has been made out to be.

Another thought: what about RAM? I'm not 100% sure how it differs from your main memory. But is it possible that RAM could be used as mass storage?

http://www.amd.com/en-us/products/memory/ramdisk
 
  • #8
MostlyHarmless said:
It seems obvious that if it really were what it's described as, we'd all have one.
Not necessarily. Suppose the disks only cost $1 each, but the disk drive costs $100,000 and needs to be installed in a clean room environment with a "floating floor" to eliminate all vibrations - costing say another $100,000 to build.

Google would probably be in the market for them to use in its data centers. Home users, not so much.

Remember Seymour Cray's definition of a supercomputer, back in the 1980s: the biggest and fastest machine you can build for 20 million dollars. He could easily have built an even bigger and faster machine for 50 or 100 million dollars, but that was outside what his customers (national research labs and multinational companies) were prepared to pay.
 
  • #9
AlephZero said:
Even more amazing, the price in GBP on amazon.com is about the same as that newegg USD price.

Usually computer hardware and software vendors don't seem to have got their heads around the concept of exchange rates. They just assume £1 = $1.

That may not necessarily be true. I don't know what sort of taxes are charged in GB or what sort of import duties may be assessed on these drives, which, AFAIK, are not produced in GB or the EU.

Edit: I forgot to mention one tax the US hasn't had to deal with (yet), which is quite common in GB and the EU - the Value Added Tax, or VAT, which is folded into the price of a good, material, or service at different steps along the way from the item's origin to its ultimate consumer.

http://en.wikipedia.org/wiki/Value-added_tax

In the EU, the VAT can range from 15% to 27%, depending on what is being taxed, and it is not always clear from the purchase price how much total VAT is included.
 
  • #10
MostlyHarmless said:
Another thought: what about RAM? I'm not 100% sure how it differs from your main memory. But is it possible that RAM could be used as mass storage?

http://www.amd.com/en-us/products/memory/ramdisk

RAM is your main memory. A RAMdisk sets aside a portion of your main memory to simulate the presence of another disk device.

http://en.wikipedia.org/wiki/RAM_drive

Because accessing RAM is so much faster than accessing an HDD, you would load a very disk-intensive piece of software into the RAMdisk to reduce its processing time.
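For the curious, here's a minimal sketch of the speed difference; it assumes a Linux system where /dev/shm is a RAM-backed tmpfs mount (the default on most distributions) and a home directory that lives on an ordinary drive:

```python
# Rough comparison of RAM-backed vs disk-backed file I/O.
# Assumes /dev/shm is tmpfs (RAM-backed), as on most Linux systems,
# and that the home directory is on an ordinary drive.
import os
import time

def write_read(path, payload):
    """Write payload to path, read it back, and return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())   # push the write past the OS page cache
    with open(path, "rb") as f:
        f.read()
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed

data = os.urandom(256 * 1024 * 1024)  # 256 MB of incompressible data
ram_time = write_read("/dev/shm/ram_test.bin", data)
disk_time = write_read(os.path.expanduser("~/disk_test.bin"), data)
print(f"RAM-backed: {ram_time:.2f} s   disk-backed: {disk_time:.2f} s")
```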
 
  • #11
SteamKing said:
That may not necessarily be true. I don't know what sort of taxes are charged in GB or what sort of import duties may be assessed on these drives, which, AFAIK, are not produced in GB or the EU.

E-commerce sites that sell direct to the public in the UK (like Amazon) include all taxes (including VAT) in the quoted price.

If you are registered for VAT (which would normally mean you are running a business with a turnover approaching £100,000 p.a.), you can then reclaim the tax when you resell the device (e.g. if you sell a complete computer system including the SSD).

Of course, if you buy something like this direct from a foreign supplier, you might have to make your own arrangements to pay import duty etc. Or the supplier might add those costs to the shipping charges - most international courier services will do the paperwork for you, for a small fee.
 
  • #12
MostlyHarmless said:
So, in the context of computing for even the avid techie, a petabyte (1,024 terabytes) of storage on a single drive seems both out of reach and unnecessary.
The computer itself is unnecessary. But will a petabyte be wanted? Absolutely!

When I was in college (40+ years ago), a single 10 MB unit (resembling a washing machine) was enough disk storage for all of the thousand-plus students taking computer courses there.

As time goes on, software will become more and more inefficient and memory will be put to uses that are unimaginable today.

Memory devices are like baseball fields: if you build it, they will store.
 
  • #13
When it comes to computing technology, MORE is always better.

Remember that famous (and almost certainly apocryphal) Bill Gates quote? "640K ought to be enough for anybody." - Bill Gates, 1981

Lol. We have a responsibility to keep Moore's law alive. We need the equivalent of another revolution like the one perpendicular recording gave us.

Again, when it comes to computing technology, greed is good:biggrin:
 
  • #14
I remember people asking me why I needed a 120 GB drive back in 2000/2001. A petabyte will become the norm in the next decade or two.

Software doesn't become more inefficient; rather, our needs and wants require ever more data (uncompressed audio, video, high-resolution imagery, etc. are becoming more and more common, especially in businesses that use topographic databases and imagery). At work, our SAN is just over 60 TB and is 80% filled with high-res topographic images, and this is for an area of only 3 square miles with maybe 10 engineers, if that (probably more like 4 or 6 actual engineers spanning water and electrical infrastructure).

IMO, the petabyte can't get here fast enough :p
 
  • #15
SteamKing said:
RAM is your main memory. A RAMdisk sets aside a portion of your main memory to simulate the presence of another disk device.

http://en.wikipedia.org/wiki/RAM_drive

Because accessing RAM is so much faster than accessing an HDD, you would load a very disk-intensive piece of software into the RAMdisk to reduce its processing time.

As even the link you posted points out, that's true only if you are talking about a SOFTWARE RAM drive; there are also hardware RAM drives that are not part of main memory, they just use the same general type of chips.
 
  • #16
phinds said:
As even the link you posted points out, that's true only if you are talking about a SOFTWARE RAM drive; there are also hardware RAM drives that are not part of main memory, they just use the same general type of chips.

The point was, MostlyHarmless was confused about RAM and 'main memory', which he thought were two different things. Details like a soft or hard RAMdisk would have just confused him further.
 
  • #17
SteamKing said:
The point was, MostlyHarmless was confused about RAM and 'main memory', which he thought were two different things. Details like a soft or hard RAMdisk would have just confused him further.

A reasonable point.
 
  • #18
elusiveshame said:
Software doesn't become more inefficient; rather, our needs and wants require ever more data

These are two different and mostly unconnected issues. Software DOES become more inefficient, and bloated, but I agree with you that that is not at all the main reason why we need more storage, which IS because our "wants" require more storage.

Inefficient software is mostly not even an issue in ANY sense these days, because computers are so fast that unless you are running nested loops over zillions of records, it is more cost-efficient to write code quickly and with ease of debugging than to write it so that it is efficient in speed or memory requirements.
 
  • #19
phinds said:
These are two different and mostly unconnected issues. Software DOES become more inefficient, and bloated, but I agree with you that that is not at all the main reason why we need more storage, which IS because our "wants" require more storage.

Inefficient software is mostly not even an issue in ANY sense these days, because computers are so fast that unless you are running nested loops over zillions of records, it is more cost-efficient to write code quickly and with ease of debugging than to write it so that it is efficient in speed or memory requirements.

I can agree with that, though I don't think it would necessarily be cost-effective, especially if optimizing messy code would prevent software support calls. I guess it depends on where a company is in its life, and whether the estimated man-hours for tech support are less than those for fixing the sloppy coding.
 
  • #20
elusiveshame said:
I can agree with that, though I don't think it would necessarily be cost-effective, especially if optimizing messy code would prevent software support calls. I guess it depends on where a company is in its life, and whether the estimated man-hours for tech support are less than those for fixing the sloppy coding.

Did you not notice my statement about ease of debugging?
 
  • #21
phinds said:
Did you not notice my statement about ease of debugging?

But not everything is easy to debug :p

For a simplified answer, you're right, but I'd argue there are many more factors than those discussed in this thread that need to be accounted for overall.
 
  • #22
When you describe a conflict between optimization and maintainability of the code, I understand what you are getting at. Sacrificing readability at the level of the language semantics will seldom buy you enough of an advantage to offset the maintenance problems you're creating for yourself.

However, if you compare coding from 30 or 40 years ago with coding today, you will discover incredible inefficiencies - often embedded in the operating system itself. For example, if you implement a "Hello World" as a dialog box in Windows 95, your executable will be less than a tenth the size of the same program in Windows 8, and it will invoke less than a tenth of the OS resources.
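(For concreteness, the kind of program being described - "Hello World" as a dialog box - can be sketched in a few lines. This Python/ctypes version is just an illustration of the idea, not the native C executable whose size is being compared:)

```python
# "Hello World" as a dialog box, calling the Win32 MessageBoxW API
# directly via ctypes (Windows only). This illustrates the kind of
# program discussed above; it is not the native executable itself.
import ctypes

# MessageBoxW(hWnd, lpText, lpCaption, uType); uType 0 = MB_OK
ctypes.windll.user32.MessageBoxW(None, "Hello World", "Hello", 0)
```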

If your program is small and resources are relatively plentiful, then there is nothing practically wrong with using 90% of the available resources instead of 5%. Otherwise, it makes sense to think about the overall design of your application and which OS resources you're willing to invoke.

A real exercise in efficiency is fitting a word-processing/publishing system capable of handling 400-page documents with many embedded graphics into a machine with 64 KB of main memory and a 4 MHz CPU clock (early '80s). Or a class/student scheduling system with 4 KB of main memory, no hard drive, and three 9 kHz 9-track tape drives (early '70s).
 
  • #23
.Scott said:
However, if you compare coding from 30 or 40 years ago with coding today, you will discover incredible inefficiencies - often embedded in the operating system itself. For example, if you implement a "Hello World" as a dialog box in Windows 95, your executable will be less than a tenth the size of the same program in Windows 8

I believe that depends on the programming language used. If I were to write a hello-world dialog in Visual Basic 6 and compile it on Windows 95, 98, 2000, XP, and 7, it would compile to the same size, and any OS APIs used would use whatever is installed on the OS (provided you're not using any deprecated functions). The compiled size will be the same, but there may be a larger overhead for the pretty graphics VB uses for its form controls if they're updated and require more resources (RAM/CPU usage).

Now, if I were to compile it with VB6 and then compile it with VB.NET, the compiled executables would differ in size, but that's because they use different compilers and runtime libraries (the VB.NET one should be smaller, since it piggybacks on the .NET Framework).
 
  • #24
Yeah, I remember when a "hello world" written in ASM6 under DOS could be assembled into a .COM file of a couple of dozen bytes (basically the "hello world" string itself plus a single system call).

Very nice, but who wants to revert to DOS?
 
  • #25
MostlyHarmless said:
So, in the context of computing for even the avid techie, a petabyte (1,024 terabytes) of storage on a single drive seems both out of reach and unnecessary. ... I have no doubts that we will see a petabyte storage device eventually, but will the merely above-average user ever need that much storage?
To infinity and beyond, as they say in Toy Story... :biggrin:
 

1. What is a petabyte?

A petabyte is a unit of digital information that equals one quadrillion (1,000,000,000,000,000) bytes. It is often used to measure large amounts of data, such as in computer storage and transfer.

2. Why would an everyday user need petabyte storage?

For the average user, petabyte storage is not necessary. A petabyte is an incredibly large amount of data, and most individuals do not have a need for that much storage space. However, for businesses and organizations that deal with large amounts of data, such as scientific research institutions or media companies, petabyte storage may be necessary.

3. How much information can a petabyte hold?

A petabyte can hold approximately 1,000 terabytes or 1 million gigabytes of data. To put this into perspective, it is estimated that a petabyte could hold about 13.3 years' worth of HD video.
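As a rough check on that figure (the HD bitrate here, about 2.4 MB/s or roughly 19 Mbit/s, is an assumption chosen to show the arithmetic, since the estimate above does not state which rate it used):

```python
# Sanity check on the "13.3 years of HD video" estimate. The HD
# stream rate (~2.4 MB/s, about 19 Mbit/s) is an assumed bitrate.
PETABYTE = 10**15                  # bytes (decimal convention, as above)
hd_bytes_per_sec = 2.4e6           # assumed HD video stream rate

seconds = PETABYTE / hd_bytes_per_sec
years = seconds / (365.25 * 24 * 3600)
print(f"~{years:.1f} years of continuous HD video")  # ~13.2 years
```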

4. Is petabyte storage expensive?

Yes, petabyte storage is expensive. The cost of petabyte storage depends on various factors such as the type of storage (cloud-based or physical), the speed and reliability of the storage, and the provider. However, as technology advances, the cost of petabyte storage is decreasing.

5. Is there a limit to how much data can be stored in a petabyte?

A petabyte is itself a fixed amount - by definition it holds one quadrillion bytes, no more. The real question is how many petabytes an organization can manage: as data continues to grow and more storage is needed, it becomes more difficult and expensive to manage and maintain such large amounts of data. This is why petabyte-scale storage is typically only used by large organizations with a significant need for data storage.
