Description: This assignment looks at the issue of deadlocks. In short, your job is to simulate a running OS in which multiple processes or threads make various requests for resources. The resources are shared and, depending on the sequence of requests and the time each is held, deadlocks may occur. Your job is to prevent such deadlocks from occurring so that every process can eventually finish.
The simulation is fairly easy to describe. The main program will take three command-line arguments:
1. Thread count: The number of cooperating processes/threads
2. Resource count: The number of shared resources (e.g., a shared data structure)
3. Plan count: The total number of resources that each running process/thread must acquire during its lifetime
The resources themselves are virtual, in the sense that you don’t actually have to create a data structure to be shared. Instead, you just need to provide an ID for each resource (1, 2, …n), along with a count of the number of instances of each such resource that are available. For example, there may be three instances of Resource 1, two instances of Resource 2, etc. The number of instances of each resource will be a randomly generated number between 1 and 3 (i.e., each resource can have a different instance count in this range).
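If it helps to picture the setup, the following is a minimal sketch (written in Java, though the assignment does not mandate a language) of reading the three command-line arguments and assigning each resource a random instance count between 1 and 3. The class and variable names (DeadlockSim, instances, and so on) are placeholders for illustration, not required names.

import java.util.Random;

// Minimal setup sketch: parse the three arguments and give each resource
// a random instance count in the range 1..3.
public class DeadlockSim {
    public static void main(String[] args) {
        int threadCount   = Integer.parseInt(args[0]);  // number of worker threads
        int resourceCount = Integer.parseInt(args[1]);  // number of distinct resources
        int planCount     = Integer.parseInt(args[2]);  // plan-list length per thread

        Random rng = new Random();
        // instances[i] = number of available instances of resource i+1
        int[] instances = new int[resourceCount];
        for (int i = 0; i < resourceCount; i++) {
            instances[i] = 1 + rng.nextInt(3);
            System.out.printf("R%d: %d%n", i + 1, instances[i]);
        }
        // ... build the plan lists, start the worker threads, start the monitor
    }
}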
Each thread has a plan list of this length: a list of resources that the thread must acquire during its lifetime. All threads will use plan lists of the same size. Each plan element carries three pieces of information. First, each element identifies a resource. You will identify the resources by randomly picking a resource ID in the appropriate range (e.g., if the resource count is 5, then the ID can be between 1 and 5).
The second piece of information is the amount of time the thread will wait before trying to acquire this resource. In practice, this is the number of milliseconds that the current thread must pause before asking for the resource. The pause time will be a randomly generated number in the range 100 to 500 (1/10 to 1/2 of a second). The randomness simulates a running system, in that the results will be different each time you run the program.
The third item is the amount of time that the thread will hold the requested resource, if the resource is granted. This will also be a random time period in the range of 1/10 to 1/2 of a second (100 to 500 milliseconds).
So each thread will have a list or array of these plan objects. Graphically, this might look like the following:
Thread 1: [R3, 126, 419], [R2, 333, 54], …[R3, 432, 456]
Thread 2: [R1, 222, 123], [R1, 299, 231], …[R2, 243, 209]
…
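One possible shape for a plan element and for building a thread's plan list is sketched below, again in Java with made-up names (PlanItem, PlanFactory); any representation that captures the resource ID, pause time, and hold time is fine.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// One entry in a thread's plan list: which resource to ask for, how long to
// pause before asking, and how long to hold the resource once it is granted.
class PlanItem {
    final int resourceId;   // 1..resourceCount
    final int pauseMillis;  // 100..500 ms before requesting
    final int holdMillis;   // 100..500 ms of use once granted

    PlanItem(int resourceId, int pauseMillis, int holdMillis) {
        this.resourceId = resourceId;
        this.pauseMillis = pauseMillis;
        this.holdMillis = holdMillis;
    }
}

class PlanFactory {
    private final Random rng = new Random();

    // Builds one thread's plan list of the requested length.
    List<PlanItem> makePlan(int planCount, int resourceCount) {
        List<PlanItem> plan = new ArrayList<>();
        for (int i = 0; i < planCount; i++) {
            plan.add(new PlanItem(
                1 + rng.nextInt(resourceCount),   // random resource ID
                100 + rng.nextInt(401),           // pause time in 100..500 ms
                100 + rng.nextInt(401)));         // hold time in 100..500 ms
        }
        return plan;
    }
}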
The idea, then, is to start a group of threads, have each run through its plan list, and avoid deadlock by checking whether acquiring the next resource would create a problem. If it is determined that a deadlock could occur, the current request must be denied. In this case, the thread making the request must wait for a random amount of time (again, between 1/10 and 1/2 of a second) and try again. This process continues until the resource is granted, which should always happen eventually if the deadlock-avoidance algorithm is working properly.
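As a rough illustration of this request/retry cycle, here is one possible worker loop, reusing the PlanItem sketch above. The ResourceManager interface and its requestResource() method are assumptions for illustration only: requestResource() stands in for whatever deadlock-avoidance test you implement (for example, a Banker's-style safety check) and should return false whenever granting the request would be unsafe.

import java.util.List;
import java.util.Random;

// Placeholder for the shared manager that applies your avoidance test.
interface ResourceManager {
    // Returns true and records the grant (with its hold time) if granting is safe;
    // returns false if granting could lead to deadlock.
    boolean requestResource(int threadId, int resourceId, int holdMillis);
}

// One worker thread walking its plan list, retrying rejected requests
// after a random back-off of 100..500 ms.
class Worker implements Runnable {
    private final int id;
    private final List<PlanItem> plan;
    private final ResourceManager manager;
    private final Random rng = new Random();

    Worker(int id, List<PlanItem> plan, ResourceManager manager) {
        this.id = id;
        this.plan = plan;
        this.manager = manager;
    }

    @Override
    public void run() {
        try {
            for (PlanItem item : plan) {
                Thread.sleep(item.pauseMillis);  // pause before asking
                while (!manager.requestResource(id, item.resourceId, item.holdMillis)) {
                    System.out.printf("Request [Thread %d]: R%d -> rejected, deadlock prevented%n",
                                      id, item.resourceId);
                    Thread.sleep(100 + rng.nextInt(401));  // back off, then retry
                }
                System.out.printf("Request [Thread %d]: R%d -> granted%n", id, item.resourceId);
            }
            System.out.printf("Thread %d has completed its plan list%n", id);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}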
One important point that might not be obvious is that you need a mechanism for releasing resources once their usage period is finished. To do this, you will need a monitor thread that runs every 50 milliseconds. Its job is to run through the list of acquired resources and release any whose usage is finished. To support this, you will want to store the time when a request is first granted, along with the hold time for that resource. When the monitor runs, it can simply check whether the required usage period has elapsed.
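A possible shape for the monitor is sketched below; the Grant record and the record() method are placeholder names. The fixed parts are the 50-millisecond period and the comparison of the current time against the stored grant time plus hold time.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of the monitor thread: wake every 50 ms and free any grant
// whose hold time has elapsed.
class Monitor implements Runnable {
    // One record per currently granted resource instance.
    static class Grant {
        final int threadId;
        final int resourceId;
        final long grantedAt;    // System.currentTimeMillis() at grant time
        final int holdMillis;    // how long the thread keeps the instance

        Grant(int threadId, int resourceId, long grantedAt, int holdMillis) {
            this.threadId = threadId;
            this.resourceId = resourceId;
            this.grantedAt = grantedAt;
            this.holdMillis = holdMillis;
        }
    }

    private final List<Grant> grants = new ArrayList<>();

    // Called by the resource manager when a request is granted.
    synchronized void record(Grant g) { grants.add(g); }

    @Override
    public void run() {
        while (true) {
            try {
                Thread.sleep(50);  // monitor period
            } catch (InterruptedException e) {
                return;
            }
            long now = System.currentTimeMillis();
            synchronized (this) {
                for (Iterator<Grant> it = grants.iterator(); it.hasNext(); ) {
                    Grant g = it.next();
                    if (now - g.grantedAt >= g.holdMillis) {
                        System.out.printf("R%d instance [Thread %d] released%n",
                                          g.resourceId, g.threadId);
                        // ... return the instance to the shared resource pool here
                        it.remove();
                    }
                }
            }
        }
    }
}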
Sample output:
** Deadlock Detection System **
Thread Count: 10
Resource Count: 5
Plan Count: 20
Resource counts:
R1: 3
R2: 2
…
Plan Lists
Thread 1: [R4][R4][R3]…[R1]
Thread 2: [R2][R4][R2]…[R4]
…
Request [Thread 2]: R2 -> granted
Request [Thread 2]: R4 -> granted
Request [Thread 2]: R2 -> granted
Request [Thread 1]: R4 -> granted
Request [Thread 1]: R4 -> rejected, deadlock prevented
Request [Thread 3]: R5 -> granted
Request [Thread 3]: R2 -> granted
Request [Thread 3]: R4 -> rejected, deadlock prevented
R4 instance [Thread 2] released
Request [Thread 1]: R4 -> granted on retry
…
Thread 3 has completed its plan list
Thread 1 has completed its plan list
…
All processes have finished without deadlocking!
That’s it. Make sure you play around with various parameter combinations to ensure that things work in every situation. Note, for example, how Thread 1 is rejected above on its first attempt to get its second instance of R4, then goes back to sleep and later wakes up to make a successful retry on the same resource. The retry succeeds because an instance of R4 has just been released by Thread 2.