What's the fastest way to increase file size?

Say I'm writing a program with an infinite loop, and I'm trying to write a file as large as the disk drive, in the smallest possible time.

What would be the best algorithm to do it?

Clearly, such an algorithm would consume as many system resources as it possibly could, so it would be limited by the hardware rather than by the algorithm itself; we need not concern ourselves with factorials or exponentials.

fprintf(fp, "blahblah");

and blahblah would be output text. Say blahblah was a huge amount of text, and the loop was a for loop that output blahblah an infinite number of times (it writes as it goes through the loop, so the loop doesn't need to finish for the file to be written). The question is: how many MB/second would usually be written in the process? (And would there be a maximum value, given the limited speed of file writing/reading?) I know it's correlated with CPU speed. And I don't want to try it myself yet (to avoid stressing out the hard disk), though it has probably been tried by people who forgot to close the loop. Anyway, is it conceivable that the entire hard disk could be eaten up in a matter of seconds? Given that it does take time to transfer system files, I don't think so.
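
For concreteness, a minimal sketch of that loop in C (the file name and the repeated text are just placeholders):

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("bigfile.dat", "wb");   /* file name is just an example */
    if (fp == NULL)
        return 1;

    const char blahblah[] = "blahblahblahblahblahblahblahblah";

    /* Keep writing until a write fails (e.g. the disk is full). */
    for (;;) {
        if (fprintf(fp, "%s", blahblah) < 0)
            break;
    }

    fclose(fp);
    return 0;
}
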
 The fastest algorithm would be not to write any data at all. Just modify the file allocation table (or whatever it's called in your preferred operating system) to say all the free space on the disk belongs to a new file.

 Quote by Ara macao What would be the best algorithm to do it?
What AlephZero has said. The file system is different for different operating systems so the method to use depends on your OS. Using the Win32 API you could do this:

LONG sizeLow = <low 32 bits of the 64-bit size value>;
LONG sizeHigh = <high 32 bits of the 64-bit size value>;
SetFilePointer(fileHandle, sizeLow, &sizeHigh, FILE_BEGIN); // move the file pointer to the desired size
SetEndOfFile(fileHandle); // extend (or truncate) the file to the current file pointer

The size value is whatever you need it to be. If you want to use up the whole drive then query the file system to determine how much free space is available on the drive using the GetDiskFreeSpace function.
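
A minimal sketch of that query, assuming C:\ as the target drive:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD sectorsPerCluster, bytesPerSector, freeClusters, totalClusters;

    /* Ask the file system how much free space the drive has. */
    if (!GetDiskFreeSpaceA("C:\\", &sectorsPerCluster, &bytesPerSector,
                           &freeClusters, &totalClusters))
        return 1;

    /* Free bytes = free clusters * sectors per cluster * bytes per sector. */
    unsigned long long freeBytes =
        (unsigned long long)freeClusters * sectorsPerCluster * bytesPerSector;

    printf("Free space: %llu bytes\n", freeBytes);
    return 0;
}
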

 Quote by Ara macao The question is - how much MB/second is usually consumed in the process? (and would there be a maximum value, given limited speed in file writing/reading?) I know that it's correlated with CPU speed.
If you used a loop instead of setting the file size, the program would do a lot of file access but still not use very much CPU time. The task is I/O intensive (disk access is relatively slow) but you should still be able to use your computer to perform other non-IO-intensive tasks. CPU speed is not so important since there is little computation involved. You will need to look at the IO specifications of your system to see how fast data can be written. Different architectures provide different results. For example, a SCSI drive will be much faster than an external USB drive.

Don't worry about "stressing out" the hard drive with your test. Drives are designed to access data at a specified rate and that's all your test would do. It's not like testing your car at maximum speed. Of course, if you fill up the drive used by your operating system, you may find that the system becomes sluggish or unresponsive for lack of room to create temporary files for tasks that need disk access. Under Windows, you should probably test this with a secondary drive or partition instead of your C:\ drive.
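
If you do want to measure the rate yourself, a rough timing sketch (the target path and the 1 GB total are assumptions; writes go through the OS cache, so small totals will overstate the sustained rate):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    const size_t chunkSize = 1 << 20;            /* 1 MB per write, arbitrary */
    const int chunks = 1024;                     /* 1 GB total, arbitrary */
    char *buf = malloc(chunkSize);
    if (buf == NULL)
        return 1;
    memset(buf, 'x', chunkSize);

    FILE *fp = fopen("D:\\testfile.dat", "wb");  /* secondary drive, per the advice above */
    if (fp == NULL)
        return 1;

    time_t start = time(NULL);
    for (int i = 0; i < chunks; i++)
        fwrite(buf, 1, chunkSize, fp);
    fflush(fp);                                  /* push everything out of the C library buffer */
    time_t end = time(NULL);

    double seconds = difftime(end, start);
    if (seconds < 1) seconds = 1;                /* time() only has 1 s resolution */
    printf("Wrote %d MB in about %.0f s (~%.0f MB/s)\n", chunks, seconds, chunks / seconds);

    fclose(fp);
    free(buf);
    return 0;
}
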


I am having trouble envisioning an application for this that isn't malicious.
 The process is used to securely wipe out disk space. Previous file content is retrievable by checking unused disk sectors. It can be erased for good if you fill all available space with random data in a humongous file, then delete it.

 Quote by out of whack The process is used to securely wipe out disk space. Previous file content is retrievable by checking unused disk sectors. It can be erased for good if you fill all available space with random data in a humongous file, then delete it.
Not really.
Even if you overwrite the whole disk with random data it still can be retrieved.

By the way, there are professional applications that already do this for you, but they are a bit more advanced than just overwriting everything with random numbers.

 Quote by MeJennifer Not really. Even if you overwrite the whole disk with random data it still can be retrieved.
There are different standards of security, of course. Overwriting a sector with new data will indeed prevent normal users from accessing what was there before, since the system will now read the new data instead. This is good enough for most people even though it will not stop more sophisticated inquisitors. The US DoD uses two methods. The simpler one has three passes: all 0s, all 1s and then random data. The second one has seven passes. Peter Gutmann presents a method with 35 passes. These are all aimed at making it increasingly unlikely that any data could be retrieved from magnetic media. The best method remains to incinerate the drive. But this is getting OTer and OTer.
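
As an illustration of the simpler three-pass scheme applied to a single file (the file name and buffer size are assumptions):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One overwrite pass: pattern >= 0 means a fixed byte value, pattern < 0 means random bytes. */
static void overwrite_pass(FILE *fp, long size, int pattern)
{
    char buf[4096];
    fseek(fp, 0, SEEK_SET);
    for (long done = 0; done < size; ) {
        long chunk = size - done;
        if (chunk > (long)sizeof(buf)) chunk = (long)sizeof(buf);
        if (pattern < 0)
            for (long i = 0; i < chunk; i++) buf[i] = (char)rand(); /* illustrative only */
        else
            memset(buf, pattern, (size_t)chunk);
        fwrite(buf, 1, (size_t)chunk, fp);
        done += chunk;
    }
    fflush(fp); /* flush the C library buffer after each pass */
}

int main(void)
{
    FILE *fp = fopen("secret.dat", "r+b"); /* file name is just an example */
    if (fp == NULL)
        return 1;

    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);

    overwrite_pass(fp, size, 0x00); /* pass 1: all 0s */
    overwrite_pass(fp, size, 0xFF); /* pass 2: all 1s */
    overwrite_pass(fp, size, -1);   /* pass 3: random data */

    fclose(fp);
    return 0;
}

A real wiping tool would also flush the OS cache (e.g. with FlushFileBuffers on Windows), use a proper random source, and deal with slack space in the file's last cluster, which this sketch ignores.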

 Quote by MeJennifer By the way, there are professional applications that already do this for you, but they are a bit more advanced than just overwriting everything with random numbers.
Freeware is always nice. I run this one weekly on my business computer:

http://www.heidi.ie/eraser/

I set it for the 3-pass DoD method which is good enough for me. No state secret.

 Quote by DaveC426913 I am having trouble envisioning an application for this that isn't malicious.
Sometimes it is useful to create a large file for later use with random-access reads and writes. If you create it all in one go, it's more likely to be unfragmented. Also, if you know what file size you need when running on a multi-user system, it makes sense to allocate the resources you need up front, rather than run a long computation for 2 or 3 weeks and then have it fail near the end because the disk was full.

Re the security issue, this used to be a security hole in some operating systems (e.g. the original Cray supercomputer OS back in the 1980s). Cray put speed before everything, so they didn't wipe the disk when you created a large file. You could read whatever was left on the disk by earlier programs - not good.

A clever OS doesn't waste time overwriting the existing data either, it just remembers which disk sectors you haven't written yourself and returns zeroes if you try to read parts of the file before writing it. I've no idea if Windoze is that clever (and can't be bothered to find out).
 Thanks for the replies everyone! If old data can still be retrieved when overwritten (through whatever processes it takes), then is disk-writing an irreversible process? Which would mean that the more you write to / delete from the disk, the more worn out the hard disk becomes? And do different types of media have different levels of tolerance for writing/deleting? USB drives, hard drives, CD-RWs? I know that CD-RWs burn mini-holes into the disc, so it appears that CD-RW burning is an irreversible process. Yet it has always appeared as if you could bring a hard disk back to new if you format it. And what would be more stressful to the disk: a huge 800 MB file or a bunch of small files that add up to 800 MB? And are all files considered equal in the filesystem (considering that it's just 0s and 1s)? Also, just out of curiosity - where is the registry info stored? What folder and what file?

 Quote by out of whack Overwriting a sector with new data will indeed prevent normal users from accessing what was there before, since the system will now read the new data instead. This is good enough for most people even though it will not stop more sophisticated inquisitors. The US DoD uses two methods. The simpler one has three passes: all 0s, all 1s and then random data. The second one has seven passes. Peter Gutmann presents a method with 35 passes. These are all aimed at making it increasingly unlikely that any data could be retrieved from magnetic media. The best method remains to incinerate the drive. But this is getting OTer and OTer.
Hmm. I keep hearing on the net, over and over again, that one overwrite isn't good enough, that 'some trace' of your data remains, and that professional data-recovery firms and/or the NSA can recover the files. As someone dipping a toe into information theory at the moment, I find this sounds a bit like woo.

Is this actually TRUE? Has this data-recovery been demonstrated conclusively?

If it were, then really a 20GB hard drive is more of a 40GB or even 60GB hard drive! Albeit one that needs hi-tech equipment to access the extra info...

Here's a link that covers my concerns... I read a more thorough article last month but can't find it now.

http://www.actionfront.com/ts_dataremoval.aspx

 Quote by out of whack SetFilePointer(fileHandle, sizeLow, &sizeHigh, FILE_BEGIN); SetEndOfFile(fileHandle);
Use these instead, since they are more 64-bit friendly (they take the LARGE_INTEGER union, which includes a quad-word, i.e. 64-bit, value).

GetDiskFreeSpaceEx

SetFilePointerEx

Sequence:

CreateFile
GetDiskFreeSpaceEx
SetFilePointerEx
SetEndOfFile // this causes the space to be allocated and the cluster table updated
FlushFileBuffers // should cause a wait until the clusters are committed to the disk

SetFilePointerEx // set pointer back to start of file
WriteFile // write data in a loop
CloseHandle
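
Putting the sequence together, a rough sketch (minimal error handling; the drive letter and file name are only examples):

#include <windows.h>

int main(void)
{
    ULARGE_INTEGER freeBytes;
    LARGE_INTEGER size, zero = {0};

    /* How much free space does the target drive have? */
    if (!GetDiskFreeSpaceExA("C:\\", &freeBytes, NULL, NULL))
        return 1;

    HANDLE h = CreateFileA("C:\\bigfile.dat", GENERIC_READ | GENERIC_WRITE, 0,
                           NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    /* Extend the file to the free space available. */
    size.QuadPart = (LONGLONG)freeBytes.QuadPart;
    SetFilePointerEx(h, size, NULL, FILE_BEGIN);
    SetEndOfFile(h);     /* allocates the space and updates the cluster table */
    FlushFileBuffers(h); /* wait until the clusters are committed to disk */

    /* Set the pointer back to the start; WriteFile in a loop would go here if real data is needed. */
    SetFilePointerEx(h, zero, NULL, FILE_BEGIN);

    CloseHandle(h);
    return 0;
}
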

 transfer rates
The disk won't get filled up in just seconds. Streaming transfer rates on hard drives range from about 30 megabytes/second on inner cylinders to around 70 megabytes/second on outer cylinders. Newer hard drives bump this to about 38 MB/s to 80 MB/s, while the Seagate Cheetah 15,000 rpm SCSI/SAS 15.5K family of drives raises the range to 80 MB/s to 140 MB/s. Assume an average rate of 50 MB/s, or 1 GB per 20 s, and a drive size of 250 GB; then it takes 1 hour 23 minutes and 20 seconds to wipe the disk.

 recovering overwritten data
If this is done electrically, bits are only read when transitions flow past a read head. Hard drives are already pushing the envelope in bit density, so trying to read finer "sub-bits" that way wouldn't really be possible. However, maybe something along the lines of an electron microscope could "see" the bits, but I don't know about this.
