P1 is holding an instance of R1.
P1 may request an instance of R2.
P2 is holding the instance of R2.
P2 is requesting an instance of R1.
There is no deadlock, because the graph contains no cycle.
Request edge – directed edge Pi → Rj
Assignment edge – directed edge Rj → Pi
If the graph contains no cycle, there is no deadlock.
If the graph contains a cycle: with only one instance per resource type, there is a deadlock; with several instances per resource type, there is the possibility of a deadlock.
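The cycle condition can be seen directly in code. Below is a minimal Java sketch (class, thread, and variable names are our own, purely for illustration): if P1 actually makes its potential request for R2 while P2 holds R2 and is requesting R1, the graph R1 → P1 → R2 → P2 → R1 becomes a cycle and both threads deadlock.

    public class DeadlockDemo {
        private static final Object r1 = new Object();
        private static final Object r2 = new Object();

        public static void main(String[] args) {
            // P1 holds r1 (assignment edge r1 -> P1), then requests r2 (request edge P1 -> r2)
            Thread p1 = new Thread(() -> {
                synchronized (r1) {
                    pause();                 // give p2 time to acquire r2
                    synchronized (r2) {
                        System.out.println("p1 acquired both");
                    }
                }
            });
            // P2 holds r2, then requests r1 -- this closes the cycle
            Thread p2 = new Thread(() -> {
                synchronized (r2) {
                    pause();
                    synchronized (r1) {
                        System.out.println("p2 acquired both");
                    }
                }
            });
            p1.start();
            p2.start();                      // both threads now block forever
        }

        private static void pause() {
            try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }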
Resource-Allocation Graph Algorithm
Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a dashed line.
A claim edge converts to a request edge when the process requests the resource.
When a resource is released by a process, the assignment edge reconverts to a claim edge.
Resources must be claimed a priori in the system.
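A sketch of how the check might be implemented follows, assuming all edges (claim, request, and assignment) live in one directed graph; the class and method names are invented for illustration, not a standard API. A request Pi → Rj is granted only if converting it into the assignment edge Rj → Pi leaves the graph acyclic:

    import java.util.*;

    /** Illustrative sketch of the claim-edge safety check. */
    class ResourceAllocationGraph {
        // One adjacency list for all edge kinds; claim edges are included
        // in the cycle check because they may later become request edges.
        private final Map<String, Set<String>> edges = new HashMap<>();

        void addEdge(String from, String to) {
            edges.computeIfAbsent(from, k -> new HashSet<>()).add(to);
        }

        void removeEdge(String from, String to) {
            Set<String> out = edges.get(from);
            if (out != null) out.remove(to);
        }

        /** Grant the request process -> resource only if the tentative
         *  assignment edge resource -> process leaves the graph acyclic. */
        boolean tryGrant(String process, String resource) {
            removeEdge(process, resource);   // drop the request edge
            addEdge(resource, process);      // tentatively assign the resource
            if (hasCycle()) {                // unsafe: roll back and refuse
                removeEdge(resource, process);
                addEdge(process, resource);
                return false;
            }
            return true;
        }

        private boolean hasCycle() {
            Set<String> visited = new HashSet<>(), onStack = new HashSet<>();
            for (String node : new ArrayList<>(edges.keySet()))
                if (dfs(node, visited, onStack)) return true;
            return false;
        }

        private boolean dfs(String node, Set<String> visited, Set<String> onStack) {
            if (onStack.contains(node)) return true;   // back edge => cycle
            if (!visited.add(node)) return false;      // already fully explored
            onStack.add(node);
            for (String next : edges.getOrDefault(node, Set.of()))
                if (dfs(next, visited, onStack)) return true;
            onStack.remove(node);
            return false;
        }
    }

Rolling back the tentative assignment when a cycle is found simply makes the requesting process wait, which is what makes this an avoidance scheme rather than detection.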
2. A reasonable alternative is to invoke the detection algorithm periodically. But what period should be used?
There is no solution that is satisfactory in all cases.
Some standard OSes do not deal with deadlocks at all (e.g., Sun's OS), arguing that deadlocks occur only rarely.
“A set of processes is in a deadlock state when every process in the set is waiting for an event that can be caused only by another process in the set.” – Silberschatz et al., Operating System Concepts
There is no efficient solution
Regressive deadlock recovery
Thread Scheduling
In our introduction to how threads work, we introduced the thread scheduler, the part of the OS (usually) that is responsible for sharing the available CPUs among the various threads. How exactly the scheduler works depends on the individual platform, but various modern operating systems (notably Windows and Linux) use largely similar techniques that we'll describe here. We'll also mention some key variations between the platforms.
Note that we'll continue to talk about a single thread scheduler. On multiprocessor systems, there is generally a scheduler per processor, and these per-processor schedulers then need to be coordinated in some way. (On some systems, switching on different processors is staggered to avoid contention on shared scheduling tables.) Unless otherwise specified, we'll use the term thread scheduler to refer to this overall system of coordinated per-CPU schedulers.
Most systems use what we might dub priority-based round-robin scheduling to some extent. The general principles (modelled in the sketch after this list) are:
• a thread of higher priority will run in preference to, and will generally preempt, a thread of lower priority;
• threads of equal priority essentially take turns at the CPU, each receiving a slice (quantum) of processor time.
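As a concrete, purely illustrative model of these two principles, the toy scheduler below keeps one FIFO queue per priority level: the highest non-empty level always wins, and tasks within a level take turns. None of the names here correspond to a real OS API.

    import java.util.*;

    /** Toy model of priority-based round-robin scheduling. */
    class ToyScheduler {
        record Task(String name, int priority) {}

        // One FIFO queue per priority level: round-robin within a level,
        // strict preference across levels.
        private final NavigableMap<Integer, Deque<Task>> runQueues = new TreeMap<>();

        void makeRunnable(Task t) {
            runQueues.computeIfAbsent(t.priority(), k -> new ArrayDeque<>()).addLast(t);
        }

        /** Pick the next task: highest priority wins; equal priorities take turns. */
        Task pickNext() {
            Map.Entry<Integer, Deque<Task>> top = runQueues.lastEntry();
            if (top == null) return null;          // nothing runnable
            Task t = top.getValue().pollFirst();
            if (top.getValue().isEmpty()) runQueues.remove(top.getKey());
            return t;
        }

        public static void main(String[] args) {
            ToyScheduler s = new ToyScheduler();
            s.makeRunnable(new Task("ui", 10));
            s.makeRunnable(new Task("worker-a", 5));
            s.makeRunnable(new Task("worker-b", 5));
            // Prints "ui" first; the two workers would then alternate if each
            // were re-queued after its quantum expired.
            Task t;
            while ((t = s.pickNext()) != null) System.out.println(t.name());
        }
    }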
States
Depending on the system, there are various states that a thread can be in. Probably the two most interesting are:
• runnable, meaning that the thread is eligible to be given CPU time (and may actually be running at this instant);
• waiting (or blocked), meaning that the thread currently cannot continue, typically because it is waiting for I/O, for a lock, or for some other event, and so is not eligible for CPU time.
Other states include terminated, which means the thread's code has finished running but not all of the thread's resources have yet been cleared up, and new, in which the thread has been created but not all of the resources necessary for it to be runnable have been allocated. Internally, the OS may distinguish between various different types of wait state (for example, "waiting for a signal" vs "waiting for the stack to be paged in"), but this level of granularity is generally neither available nor important to Java programs. (On the other hand, Java generally exposes to the programmer the things that the JVM can reasonably know about, for example whether a thread is waiting to acquire the lock on a Java object, roughly speaking, whether it is "entering a synchronized block".)
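For example, Java exposes this coarse-grained view through Thread.getState(), which reports values such as NEW, RUNNABLE, BLOCKED, WAITING, and TERMINATED:

    public class ThreadStates {
        public static void main(String[] args) throws InterruptedException {
            Object lock = new Object();
            Thread t = new Thread(() -> {
                synchronized (lock) { /* body intentionally empty */ }
            });

            System.out.println(t.getState());     // NEW: created but not started

            synchronized (lock) {
                t.start();
                Thread.sleep(100);                // give t time to reach the lock
                System.out.println(t.getState()); // BLOCKED: waiting for 'lock',
                                                  // i.e. entering a synchronized block
            }

            t.join();
            System.out.println(t.getState());     // TERMINATED: run() has finished
        }
    }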
Summary:
• Deadlines are met most of the time, but meeting them is not mandatory (soft real-time).
Admission control
The FCFS (first-come, first-served) algorithm is nonpreemptive, while the RR (round-robin) algorithm is preemptive. The SJF (shortest-job-first) and priority algorithms may be either preemptive or nonpreemptive.
Because such a wide variety of scheduling algorithms are available, we need methods to select among them. Analytic methods use mathematical analysis to determine the performance of an algorithm. Simulation methods determine performance by imitating the scheduling algorithm on a “representative” sample of processes, and computing the resulting performance.
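As a small illustration of the simulation method, the sketch below imitates FCFS and RR on a sample of three CPU bursts (the burst lengths are chosen for illustration; all processes are assumed to arrive at time 0) and computes the average waiting time for each:

    import java.util.*;

    /** Simulating two scheduling algorithms on a sample of CPU bursts. */
    public class ScheduleSim {
        // FCFS: each process waits for the total burst time of those before it.
        static double fcfsAvgWait(int[] bursts) {
            long wait = 0, elapsed = 0;
            for (int b : bursts) { wait += elapsed; elapsed += b; }
            return (double) wait / bursts.length;
        }

        // RR: run each process for up to 'quantum', re-queue it if unfinished;
        // waiting time = completion time - burst time (all arrive at t = 0).
        static double rrAvgWait(int[] bursts, int quantum) {
            int n = bursts.length;
            int[] remaining = bursts.clone();
            long[] completion = new long[n];
            Deque<Integer> queue = new ArrayDeque<>();
            for (int i = 0; i < n; i++) queue.addLast(i);
            long time = 0;
            while (!queue.isEmpty()) {
                int i = queue.pollFirst();
                int slice = Math.min(quantum, remaining[i]);
                time += slice;
                remaining[i] -= slice;
                if (remaining[i] > 0) queue.addLast(i);
                else completion[i] = time;
            }
            long wait = 0;
            for (int i = 0; i < n; i++) wait += completion[i] - bursts[i];
            return (double) wait / n;
        }

        public static void main(String[] args) {
            int[] bursts = {24, 3, 3};
            System.out.printf("FCFS avg wait: %.2f%n", fcfsAvgWait(bursts));     // 17.00
            System.out.printf("RR(q=4) avg wait: %.2f%n", rrAvgWait(bursts, 4)); // 5.67
        }
    }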