Deadlock Prevention Homework: Is a Deadlock Possible?

In summary, the code described in this thread does not appear to allow a deadlock. A deadlock requires a cycle of threads, each holding a resource that the next one is waiting for; here a monitor thread forcibly frees resources once their usage time is up, which breaks any such cycle.
  • #1
twoski

Homework Statement



Here is the relevant info from the assignment handout:

http://pastebin.com/nfXcF88u

The Attempt at a Solution



I have written the code; however, I do not see how deadlocks are even possible with this assignment. Since there is a monitor thread that automatically frees resources after their usage time has ended, there is no way a deadlock could occur. I thought one of the conditions of deadlock was that a thread would not give up the resource it holds until it is able to access the resource it wants. In this assignment, the monitor forces threads to release resources, so even if a cycle exists, the monitor thread will break it very quickly.

It seems like the Banker's Algorithm would be overkill for this assignment. Can someone explain how exactly a deadlock would be possible?

Would it not be most efficient to just let threads claim the resources they need (unless, of course, no instances of a needed resource are available) until they are all done?
 
  • #2
Unfortunately, I cannot access pastebin.com.

If there are two threads and each can claim the same two resources and they might claim them in reverse order, then you have the potential for a deadlock.

When you say "access the resource they need", I hope you mean once they have successfully claimed the resource. Deadlocks aside, two threads should not attempt to access the same resource at the same time.
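For example, a minimal sketch of that situation in Java might look like this (the lock names and the sleep are placeholders of mine, just to force the bad interleaving):

Java:
public class ReverseOrderDeadlock {
    private static final Object resourceA = new Object();
    private static final Object resourceB = new Object();

    public static void main(String[] args) {
        // Thread 1 claims A first, then B.
        Thread t1 = new Thread(() -> {
            synchronized (resourceA) {
                pause(100);                      // give the other thread time to grab B
                synchronized (resourceB) {
                    System.out.println("Thread 1 holds A and B");
                }
            }
        });
        // Thread 2 claims the same two resources in reverse order: B first, then A.
        Thread t2 = new Thread(() -> {
            synchronized (resourceB) {
                pause(100);
                synchronized (resourceA) {
                    System.out.println("Thread 2 holds B and A");
                }
            }
        });
        t1.start();
        t2.start();
        // Each thread now holds one lock and waits forever for the other: a deadlock.
    }

    private static void pause(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}

Neither thread ever prints its message, because each one is blocked inside the other's outer synchronized block.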
 
  • #3
In this assignment, threads 'claim' resources. A resource cannot be accessed by more than 1 thread at a time.

The issue is, even if a deadlock were to occur, the monitor would free a resource and break the loop.

We have learned in class that there are four conditions that must all hold for a deadlock to actually happen. One of those conditions is "hold and wait", i.e., a process holding at least one resource is waiting to acquire additional resources held by other processes. Since the monitor makes it impossible for threads to hold a resource indefinitely, there technically can't be deadlocks, right?
 
  • #4
I'm not sure what you mean by "breaking the loop". Do you mean that the threads sit in a wait loop for resources?

How does the monitor keep a thread from holding a resource too long? Does it simply kill that thread?
 
  • #5
When I say loop I mean a deadlock cycle, i.e.:

[Attached image: resource-allocation graph showing a deadlock cycle]


This deadlock cycle can occur with the code I have; however, since there is a monitor thread that frees resources from processes, the deadlock will be undone.

If the monitor did not exist, then the deadlocks would be permanent.
 
  • #6
A monitoring function that simply frees the resources after a time limit is a pretty useless monitor. What happens to the thread that is still using that resource? Let's say it's a block of memory. What happens when the thread tries to write to that block? Does it succeed - which means that it could interfere with other threads? Or does it fail, perhaps generating an exception, which that thread would have to deal with?

More importantly, how does that thread know that it has lost its first resource if it isn't accessing it any more because it is trying to get a second resource?

Is there some other way I could see this assignment? Pastebin.com is blocked from where I am now.
 
  • #7
Here it is

Description: This assignment looks at the issue of deadlocks. In short, your job will be to simulate a running OS, in which multiple processes and/or threads make various requests for resources. The resources are shared and, depending on the sequence of requests and the time that each is held, deadlocks may occur. Your job is to prevent such deadlocks from occurring so that each process can eventually finish.

The simulation is fairly easy to describe. The main program will take 3 command line arguments:
1. Thread count: The number of cooperating processes/threads
2. Resource count: The number of shared resources (e.g., a shared data structure)
3. Plan Count: The total number of resources that need to be acquired by a running process during its lifetime

The resources themselves are virtual, in the sense that you don’t actually have to create a data structure to be shared. Instead, you just need to provide an ID for each resource (1, 2, …n), along with a count of the number of instances of each such resource that are available. For example, there may be three instances of Resource 1, two instances of Resource 2, etc. The number of instances of each resource will be a randomly generated number between 1 and 3 (i.e., each resource can have a different instance count in this range).
The plan count represents a list of resources that a given thread must acquire during its lifetime. All threads will use a plan count of the same size. Each plan element will be associated with three pieces of information. First, each element will identify a resource. You will identify the resources by randomly picking a resource ID in the appropriate range (e.g., if the resource count is 5, then the ID can be between 1 and 5).

The second item of info is the amount of time the thread will wait before trying to acquire this resource. In practice, this is the number of milliseconds that the current thread must pause before asking for this resource. The pause time will be a randomly generated number in the range 100 to 500 (1/10 to ½ of a second). The randomness will simulate a running system in that the results will be different each time you run the program.

The third item is the amount of time that the thread will hold the requested resource, if the resource is granted. This will also be a random time period in the range of 1/10 to ½ second.
So each thread will have a list or array of these plan objects. Graphically, this might look like this:

Thread 1: [R3, 126, 419], [R2, 333, 54], …[R3, 432, 456]
Thread 2: [R1, 222, 123], [R1, 299, 231], …[R2, 243, 209]


The idea then is to start a group of threads, run through the plan lists, and avoid deadlock if acquiring the next resource would create a problem. If it is determined that a deadlock would occur, then the current request must be denied. In this case, the thread making the request must wait for a random amount of time (again, between 1/10 to ½ of a second) and try again. This process would continue until the resource is granted. This should always happen eventually if the deadlock algorithm is working properly.
One important point that might not be obvious is that you need a mechanism for releasing resources once their usage period is finished. To do this, you will need a monitor thread that will run every 50 milliseconds. Its job is to run through the list of acquired resources and release any that are finished. To do this, you will want to store the time when a request is first granted, along with the time required for this resource. When the monitor runs, it can simply check to see if the required usage period is now finished.

Sample output:

** Deadlock Detection System **
Thread Count: 10
Resource Count: 5
Plan Count: 20
Resource counts:
R1: 3
R2: 2

Plan Lists
Thread 1: [R4][R4][R3]…[R1]
Thread 2: [R2][R4][R2]…[R4]

Request [Thread 2]: R2 -> granted
Request [Thread 2]: R4 -> granted
Request [Thread 2]: R2 -> granted
Request [Thread 1]: R4 -> granted
Request [Thread 1]: R4 -> rejected, deadlock prevented
Request [Thread 3]: R5 -> granted
Request [Thread 3]: R2 -> granted
Request [Thread 3]: R4 -> rejected, deadlock prevented
R4 instance [Thread 2] released
Request [Thread 1]: R4 -> granted on retry

Thread 3 has completed its plan list
Thread 1 has completed its plan list

All process have finished without deadlocking!

That’s it. Make sure you play around with various parameter combinations to ensure that things work in any situation. Note, for example, how Thread 1 is rejected above on its first attempt to get its second instance of R4, then goes back to sleep and later wakes up to make a successful retry on the same resource. It works in the second case because an instance of R4 has just been released by Thread 2.

Currently my code works, but I have a feeling I'm not actually doing deadlock prevention the way they want us to. I don't even have any deadlock prevention in my code, since it never gets stuck.
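
For reference, a rough sketch in Java of the monitor mechanism the handout describes might look like this (the Grant/Monitor names and the free-instance array are my own placeholders, not anything from the assignment):

Java:
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// One granted resource instance: which resource it is, when it was granted,
// and how long the owning thread is allowed to hold it.
class Grant {
    final int resourceId;
    final long grantedAtMillis;
    final long holdMillis;

    Grant(int resourceId, long grantedAtMillis, long holdMillis) {
        this.resourceId = resourceId;
        this.grantedAtMillis = grantedAtMillis;
        this.holdMillis = holdMillis;
    }

    boolean expired(long now) {
        return now - grantedAtMillis >= holdMillis;
    }
}

// Monitor thread: wakes up every 50 ms and releases any grant whose usage
// period has finished, returning that instance to the free pool.
class Monitor extends Thread {
    private final List<Grant> grants;      // currently held instances (concurrent list)
    private final int[] freeInstances;     // free instance count per resource id

    Monitor(CopyOnWriteArrayList<Grant> grants, int[] freeInstances) {
        this.grants = grants;
        this.freeInstances = freeInstances;
        setDaemon(true);
    }

    @Override
    public void run() {
        while (!isInterrupted()) {
            long now = System.currentTimeMillis();
            for (Grant g : grants) {           // iterates a snapshot, so removal is safe
                if (g.expired(now)) {
                    grants.remove(g);
                    synchronized (freeInstances) {
                        freeInstances[g.resourceId]++;
                    }
                    System.out.println("R" + g.resourceId + " instance released");
                }
            }
            try {
                Thread.sleep(50);
            } catch (InterruptedException e) {
                return;                        // stop cleanly if interrupted
            }
        }
    }
}

The worker threads would add a Grant to the shared list whenever a request is granted, and the monitor hands the instance back to the pool once the hold time is up.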
 
  • #8
You need a new instructor.
Not only is that assignment broken, the notes that say "rejected, deadlock prevented" are misplaced and make me wonder if your prof understands the subject matter. What they should say is simply "rejected, resource unavailable" - which is almost the opposite.

What I would do is change the simulated OS so that a resource's usage time elapses only while the thread holding it is not waiting.
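
Here is a minimal sketch of what I mean, with names of my own invention: each worker flags when it is blocked on a resource request, and the monitor advances a grant's usage clock only while its owner is not waiting. A thread caught in a deadlock then never finishes using what it holds, so the deadlock persists and you can actually observe it.

Java:
// Sketch only; Worker and PausableGrant are illustrative names, not the
// assignment's classes.
class Worker {
    volatile boolean waitingForResource = false;  // set around each blocked request
}

class PausableGrant {
    final int resourceId;
    final Worker owner;
    final long holdMillis;
    private long usedMillis = 0;

    PausableGrant(int resourceId, Worker owner, long holdMillis) {
        this.resourceId = resourceId;
        this.owner = owner;
        this.holdMillis = holdMillis;
    }

    // Called by the monitor on every 50 ms tick.
    void tick(long tickMillis) {
        if (!owner.waitingForResource) {
            usedMillis += tickMillis;   // usage clock runs only while the owner runs
        }
    }

    boolean expired() {
        return usedMillis >= holdMillis;
    }
}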
 
  • #9
I'd like to make a lot of changes to the assignment; however, I doubt they would take kindly to that.

Given the description of the assignment, my best guess at what they mean by "deadlock prevented" is that no instance of the resource is available when the thread tries to acquire one, so it instead waits and tries again once the monitor has freed an instance.

But the solution to this seemingly goes against the concept of deadlocks. There can't be a deadlock if there's an external monitor thread with the ability to free resources from all of the threads.
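
Under that interpretation, the requesting side is just a back-off-and-retry loop, roughly like this sketch (the freeInstances array and the acquire() helper are placeholders of mine, not the assignment's API):

Java:
import java.util.Random;

// Requesting side under that reading: if no instance of the resource is free,
// back off for a random 100-500 ms and try again until one is granted.
class RequestLoop {
    private static final Random rng = new Random();

    static void acquire(int[] freeInstances, int resourceId) throws InterruptedException {
        while (true) {
            synchronized (freeInstances) {
                if (freeInstances[resourceId] > 0) {
                    freeInstances[resourceId]--;    // request granted
                    return;
                }
            }
            // "rejected" -> sleep a random 100 to 500 ms, then retry
            Thread.sleep(100 + rng.nextInt(401));
        }
    }
}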
 
  • #10
I can't find a way to interpret this assignment in a useful way. There is one term: "cooperating". In what way are these threads supposed to be cooperating? Perhaps that is the key.

In the example, it certainly looks as though they are granting the requests whenever the resource is available.

A null test case is one that you try for the purpose of demonstrating a failure - in this case, a deadlock. As you have noted, in this assignment there is no null test case - you can't lose.

What you can do is demonstrate an algorithm for yourself using a model that does not advance the time - and therefore can be broken. Then submit that with the broken model (as assigned) - perhaps with a note indicating that it had been tested under conditions where the null test case failed.

I would also report two types of resource request rejection - one for the resource being unavailable, the other for the purpose of avoiding a deadlock.

--- edit ---

What level course is this? Is it a pretty difficult engineering college?
 
  • #11
This is a computer science course about operating systems, synchronization, parallelism, CPUs etc.

Since there can be pseudo-deadlocks, should I attempt to trace a cycle from one thread back to itself when it tries to acquire a resource?

i.e.,

Thread 1 has already acquired Resource 1.
Thread 2 has already acquired Resource 2.
T1 then requests R2 and must wait, since T2 holds it.
T2 tries to acquire R1, but since that would close the cycle R1 -> T1 -> R2 -> T2 -> R1, it is disallowed, using the logic from the diagram I posted.
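
A minimal sketch of that check on a wait-for graph could look like this (the map representation is my own assumption and only covers the single-instance case, where each thread waits on at most one other thread):

Java:
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Cycle check on a wait-for graph: waitsFor maps a thread id to the id of the
// thread currently holding the resource it wants.
class WaitForGraph {

    // Would adding the edge "requester waits on holder" close a cycle?
    static boolean wouldCreateCycle(Map<Integer, Integer> waitsFor,
                                    int requester, int holder) {
        Set<Integer> visited = new HashSet<>();
        Integer current = holder;
        while (current != null && visited.add(current)) {
            if (current == requester) {
                return true;   // walked from the holder back to the requester: a cycle
            }
            current = waitsFor.get(current);
        }
        return false;          // chain ended without reaching the requester
    }
}

For the example above, waitsFor would contain just {T1 -> T2} (T1 waits on T2 for R2), and wouldCreateCycle(waitsFor, 2, 1) returns true, so T2's request for R1 would be denied.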
 
  • #12
When you say you want to "prevent deadlocks", you mean one of two things:
1) When a deadlock occurs, you don't want the system to hang - but it's okay if the application crashes.
2) You want to avoid a possible deadlock before it happens - by declining a claim.

The first case is pretty simple. All you have to do is detect a deadly embrace (each thread requesting the resource the other has), kill one of those two threads, and take its resources.

To do the second, you need to order the resources so that whenever a thread claims one resource, it also ties up all other resources that precede it in the order. If you know ahead of time which thread may need which resources, you can be loose with resources that are not potentially claimed by more than one thread.

For example:

Thread 1 will need A, C, E, and G
Thread 2 will need B, E, F, and G

Resources are ordered E, G.
Resources A, B, C, D, and F are unordered because they are uncontested.

Thread 1 claims A
Thread 2 claims F
Thread 2 claims G - this ties up E which precedes G in the order
Thread 1 attempts to claim E - but must wait.

The potential deadlock is thus avoided. Thread 2 is unencumbered and thread 1 has nothing that thread 2 might want. So eventually thread 2 will complete or reach another state that will allow E to be claimed by thread 1.
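
Here is a minimal sketch of that ordering scheme (class and method names are my own, and uncontested resources are left out entirely since they need no ordering):

Java:
import java.util.List;

// Contested resources get a fixed order; claiming one of them also ties up
// every contested resource that precedes it in the order.
class OrderedResources {
    private final List<String> order;   // e.g. List.of("E", "G")
    private final String[] heldBy;      // heldBy[i] = owner of order.get(i), or null

    OrderedResources(List<String> order) {
        this.order = order;
        this.heldBy = new String[order.size()];
    }

    // Blocks until `resource` and everything before it in the order is free
    // (or already ours), then reserves that whole prefix for `owner`.
    synchronized void claim(String owner, String resource) throws InterruptedException {
        int idx = order.indexOf(resource);
        while (!prefixFree(idx, owner)) {
            wait();
        }
        for (int i = 0; i <= idx; i++) {
            heldBy[i] = owner;
        }
    }

    // Releases everything this owner has tied up and wakes any waiters.
    synchronized void release(String owner) {
        for (int i = 0; i < heldBy.length; i++) {
            if (owner.equals(heldBy[i])) {
                heldBy[i] = null;
            }
        }
        notifyAll();
    }

    private boolean prefixFree(int idx, String owner) {
        for (int i = 0; i <= idx; i++) {
            if (heldBy[i] != null && !heldBy[i].equals(owner)) {
                return false;
            }
        }
        return true;
    }
}

With order = [E, G], Thread 2's claim of G also reserves E, so Thread 1's later claim of E blocks until Thread 2 releases, which is exactly the sequence above.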

You can implement this, but the model provided in the exercise will not allow you to test it. To test it, you will need to change the model. After it is tested, you can change the model back - and, of course, it will still work, and it will meet the requirements of your assignment.
 

1. Can a deadlock actually occur in real life situations?

Yes, deadlocks can occur in real life situations where there are shared resources and multiple processes competing for them. For example, in a computer system, if two processes are waiting for each other to release a resource, a deadlock can occur.

2. How can we prevent deadlocks from happening?

Deadlocks can be prevented by using various strategies such as resource allocation, resource ordering, and deadlock detection and recovery. These strategies involve careful management of resources and ensuring that processes do not enter into a circular wait.

3. What is a circular wait and how does it lead to a deadlock?

A circular wait is a situation where each process in a set is waiting for a resource held by the next process in the cycle. Since every process in the cycle is both holding something and waiting on its neighbor, none of them can ever proceed, which is a deadlock.

4. Can a deadlock be resolved once it has occurred?

Yes, deadlocks can be resolved by using various techniques such as resource preemption, where a resource is forcefully taken from one process to be used by another, or by killing one of the processes involved in the deadlock. However, these techniques can lead to system instability and should be used with caution.

5. Is deadlock prevention always necessary?

No, deadlock prevention is not always necessary in every system. In some cases, the cost of implementing deadlock prevention strategies may outweigh the cost of dealing with occasional deadlocks. It is important to analyze the system and determine the level of risk and potential impact of deadlocks before deciding whether deadlock prevention is necessary.
