Memory management - worst fit algorithm

Discussion Overview

The discussion revolves around the worst fit algorithm for memory management, particularly its advantages and disadvantages compared to first fit and best fit algorithms. Participants explore the implications of this algorithm on multiprogramming and memory allocation efficiency.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant notes that the worst fit algorithm allocates full memory to a single process, which limits multiprogramming capabilities.
  • Another participant discusses the trade-off between speed of allocation and the amount of memory allocated, suggesting that allocating all memory to one process can lead to inefficiencies.
  • The concept of bin packing is introduced as a way to analyze fragmentation in memory allocation, with the participant emphasizing the need for domain-specific knowledge in memory management.
  • One participant mentions the challenges posed by dynamic memory allocation and freeing, which complicates the memory management problem.
  • A later reply references the 4k page size in Wintel environments, suggesting that frequent allocation and freeing can lead to wasted space in partially filled pages, and mentions .NET's garbage collection as a potential solution.

Areas of Agreement / Disagreement

Participants express varying views on the implications of the worst fit algorithm, with no consensus reached on its advantages or overall effectiveness compared to other algorithms. The discussion remains unresolved regarding the optimal approach to memory management.

Contextual Notes

Participants highlight the complexity of memory management, including issues related to fragmentation, dynamic allocation, and the specific characteristics of different environments, such as Wintel systems.

prashantgolu
Let us suppose we use the worst fit algorithm for allocating memory. Initially, when the whole of memory is available, it allocates the full memory to one single process, hence no multiprogramming is possible. So what are the advantages of this algorithm over first fit and best fit?
Thanks
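For concreteness, worst fit serves each request from the largest remaining free hole, splitting off the requested amount and leaving the remainder free. A minimal sketch, tracking hole sizes only (addresses omitted; the sizes are illustrative):

```python
def worst_fit(holes, request):
    """Pick the largest free hole, carve the request out of it,
    and leave the remainder as a smaller free hole.
    Returns the index of the chosen hole, or None if nothing fits."""
    i = max(range(len(holes)), key=lambda j: holes[j])
    if holes[i] < request:
        return None  # even the largest hole is too small
    holes[i] -= request  # split: the remainder stays free
    return i

# At start-up there is one big hole; worst fit carves the request
# out of it and keeps the rest available.
holes = [1024]
worst_fit(holes, 100)
print(holes)  # remaining free space: [924]
```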
 
As you pointed out, there is a trade-off between allocating full memory and fast allocation.

Typically we can use a bin analogy to demonstrate speed versus allocated memory. At one extreme, an allocator could take a region of RAM and assign every individual byte to a particular memory handle for a process. That is impractical, because you would need extra memory to record which process and handle each byte belongs to, and that bookkeeping overhead would be enormous.

So the next step is to say that we have some number of bins, and these bins hold all of the memory for a process. More than one bin might be set aside for a process.

Now pretend your pieces of "garbage" (i.e. the chunks of memory allocated to the process) are to be packed into your bins. This is a well-known problem called the N-D bin packing problem, where N in this case is 1.
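The 1-D bin packing view can be made concrete with a standard heuristic such as first-fit decreasing; this is a generic illustration of the packing problem, not the allocator discussed above, and the chunk sizes are made up:

```python
def first_fit_decreasing(items, bin_size):
    """Classic 1-D bin packing heuristic: sort chunks largest-first,
    then place each into the first bin with room, opening a new bin
    only when none fits."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= bin_size:
                b.append(item)
                break
        else:
            bins.append([item])  # no existing bin had room
    return bins

# chunks of memory (in KB) packed into 6 KB bins
print(first_fit_decreasing([4, 3, 3, 2], 6))  # [[4, 2], [3, 3]]
```

Leftover space inside each bin is exactly the fragmentation the analogy is meant to expose.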

From this you start to get an idea about how to analyze fragmentation.

Now on top of this you have to use domain-specific knowledge about memory allocation. Although good programmers will write routines that don't abuse memory allocation where it's not needed, there are cases where you will not know in advance how much memory you need.

So essentially you need to take into account not only the bin packing problem, but also the issue that memory will be freed and allocated dynamically and that adds a whole new dimension to your problem.
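That dynamic free/allocate dimension is what produces external fragmentation: total free space can be ample while no single hole is large enough. A toy illustration (the sizes are made up):

```python
# free holes left between live allocations after many alloc/free cycles
holes = [30, 30, 30]
total_free = sum(holes)    # 90 units free in total
largest_hole = max(holes)  # but at most 30 are contiguous

request = 50
print(total_free >= request)    # True: enough memory overall
print(largest_hole >= request)  # False: no single hole can satisfy it
```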

Thankfully, since memory is plentiful (in comparison to the past), we can use fairly large bin sizes, and many programs will allocate memory that is far less than the available memory (that is, RAM minus kernel memory minus other allocated memory).

Also, in the above case you have enough bins that if a small chunk of memory is freed from one bin and a new allocation is bigger than what's available there, then due to the amount of memory you can easily use another bin.
 
thanks...
 
I'm not sure about bin sizes, but in Wintel environments you're dealing with a 4K page size for virtual memory. Pages are allocated in groups, but if there is a lot of allocation and freeing of memory, you end up with a lot of empty space in partially filled 4K pages. .NET has a garbage collection scheme to help with this.
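The partially-filled-page effect can be quantified: each allocation is rounded up to whole 4K pages, and the rounding slack is internal fragmentation. A rough sketch (the allocation sizes are hypothetical):

```python
PAGE = 4096  # 4K page size

def pages_and_waste(sizes):
    """Round each allocation up to whole pages and report the slack."""
    pages = sum((s + PAGE - 1) // PAGE for s in sizes)  # ceiling division
    waste = pages * PAGE - sum(sizes)
    return pages, waste

# e.g. a 5000-byte and a 100-byte allocation, each on its own pages
print(pages_and_waste([5000, 100]))  # (3, 7188): 3 pages, 7188 bytes slack
```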
 
