Loading .nii images while saving memory

  • Context: Python 
  • Thread starter: BRN
  • Tags: Images, Memory
SUMMARY

The discussion focuses on optimizing memory usage while loading .nii images for a Deep Learning project utilizing the DCGAN algorithm. The user faces limitations with only 4GB of RAM and seeks alternatives to load large datasets (3GB, 5GB, and 8GB) without exceeding memory capacity. Suggestions include implementing an iterator for on-demand loading of images and adjusting the batch size from 64 to 30. The user also inquires about integrating this iterator into the loss function of the GAN discriminator.

PREREQUISITES
  • Understanding of the DCGAN algorithm and its architecture
  • Familiarity with TensorFlow and Keras for model development
  • Knowledge of .nii file format and its usage in medical imaging
  • Experience with Python programming and data handling techniques
NEXT STEPS
  • Research "TensorFlow data loading with tf.data.Dataset" for efficient data handling
  • Explore "Keras ImageDataGenerator" for real-time data augmentation
  • Learn about "Python generators" for implementing custom data loading iterators
  • Investigate "Memory-efficient training techniques for GANs" to optimize resource usage
USEFUL FOR

Data scientists, machine learning engineers, and researchers working on deep learning projects involving large medical imaging datasets, particularly those constrained by hardware limitations.

BRN
Hello everybody,
for my Deep Learning exam I have to develop a project that generates 3D images in .nii format using the DCGAN algorithm, trained on real MRI scans of the brains of patients with Alzheimer's disease.

I have a serious problem. I need to load three different datasets that weigh 3GB, 5GB, and 8GB respectively. Unfortunately, I am forced to use an old PC with only 4GB of RAM and 2GB of swap, so it is impossible for me to load all the files at once with this simple code:

[CODE lang="python" title="loading files"]
import nibabel as nib

train_data = []
data_path = './data/ADNI_test/'
for i in range(len(filenames)):
    # index into the list; bare 'filenames' would try to concatenate the list itself
    mri_file = data_path + filenames[i]
    train_data.append(nib.load(mri_file).get_fdata())
[/CODE]

Does any of you know an alternative solution? Is there a way to load the files a little at a time without overloading memory? In the DCGAN algorithm the batch size is set to 64 files, but I will certainly have to decrease it to 30.

Thank you!
 
I wouldn't have thought it would be necessary to have all the image files in memory; can you provide the ML engine with an iterator that loads each file on demand?
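For instance, a plain Python generator can yield one batch at a time, so only the files for the current batch are ever in RAM. A minimal sketch, reusing the `data_path` and `filenames` from the snippet above and assuming all volumes share the same shape:

[CODE lang="python" title="on-demand loading (sketch)"]
import numpy as np
import nibabel as nib

def nii_batches(data_path, filenames, batch_size=30):
    """Yield batches of volumes, loading each .nii file only when needed."""
    batch = []
    for name in filenames:
        # get_fdata() returns float64 by default; float32 halves the memory footprint
        batch.append(nib.load(data_path + name).get_fdata().astype(np.float32))
        if len(batch) == batch_size:
            yield np.stack(batch)
            batch = []
    if batch:  # leftover files that don't fill a complete batch
        yield np.stack(batch)
[/CODE]

If you are on TensorFlow anyway, tf.data.Dataset.from_generator can wrap a generator like this and add shuffling and prefetching on top.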
 
I hope it can be done, but at the moment I don't know how.
Should the iterator be implemented directly in the loss function of the GAN discriminator?

[CODE lang="python" title="discriminator"]
import tensorflow as tf
from tensorflow.keras import Sequential

# assumes cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_model(strides, kernel_size, input_img, weight_initializer, downsample_layers):
    rate = 0.2
    filters = input_img.shape[1]

    model = Sequential()

    # Conv3D takes (filters, kernel_size) first; strides and input_shape go by keyword
    model.add(tf.keras.layers.Conv3D(filters, kernel_size, strides = strides,
                                     input_shape = input_img.shape, padding = 'same',
                                     kernel_initializer = weight_initializer))
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Dropout(rate = rate))

    # each further downsampling block doubles the filter count
    for l in range(downsample_layers - 1):
        filters = int(filters * 2)
        model.add(tf.keras.layers.Conv3D(filters, kernel_size, strides = strides, padding = 'same'))
        model.add(tf.keras.layers.LeakyReLU())
        model.add(tf.keras.layers.Dropout(rate = rate))

    model.add(tf.keras.layers.Flatten())
    model.add(tf.keras.layers.Dense(1))
    return model

def discriminator_loss(real_output, fake_output):
    # real batches should score 1, generated batches 0
    real_loss = cross_entropy(tf.ones_like(real_output), real_output)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output)
    total_loss = real_loss + fake_loss
    return total_loss
[/CODE]

where 'real_output' is the discriminator's output on a real image, which is compared with its output on the artificially generated one.
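From what I see in the standard TensorFlow DCGAN tutorial, my understanding is that the iterator would be consumed by the training loop rather than by the loss itself: the loop pulls one real batch per step, and the loss only ever receives the discriminator's outputs. A rough sketch of what I mean, where `dataset`, `generator` and `discriminator` are hypothetical placeholders for the data iterator and the two models (only the discriminator update is shown; the generator update is analogous):

[CODE lang="python" title="training loop (sketch)"]
import tensorflow as tf

# hypothetical: 'dataset' yields real batches on demand, 'generator' and
# 'discriminator' are the DCGAN models defined elsewhere
noise_dim = 100
num_epochs = 50
discriminator_optimizer = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal([tf.shape(real_images)[0], noise_dim])
    with tf.GradientTape() as disc_tape:
        fake_images = generator(noise, training=True)
        real_output = discriminator(real_images, training=True)  # scores on real MRIs
        fake_output = discriminator(fake_images, training=True)  # scores on fakes
        disc_loss = discriminator_loss(real_output, fake_output)
    grads = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    discriminator_optimizer.apply_gradients(zip(grads, discriminator.trainable_variables))

for epoch in range(num_epochs):
    for real_batch in dataset:  # the on-demand iterator is consumed here, batch by batch
        train_step(real_batch)
[/CODE]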

Any ideas?
 
