Thursday, August 27, 2009

graphs


P1 is holding an instance of R1.
P1 may request an instance of R2.
P2 is holding the instance of R2.
P2 is requesting an instance of R1.
There is no deadlock because there is no cycle.




P2 is requesting an instance of R1.
P1 is holding the instance of R1.
Both P1 and P2 may request an instance of R2.
But there is no deadlock because there is no cycle.



P2 and P3 each hold an instance of R1.
P1 and P4 each hold an instance of R2.
Although P1 and P3 lie on a cycle, P2 and P4 are outside it and can release their instances, breaking the cycle.
Therefore there is no deadlock.




P1 and P2 each hold an instance of R2. P1 requests an instance of R1, but P2 is holding it; P2 in turn requests an instance of R3, but P3 is holding it; P3 then requests an instance of R2, but both instances of R2 are held by P1 and P2.
R4 is undisturbed.
There is a deadlock, because the requests close a cycle.




P1 and P2 each hold an instance of R2. P1 requests an instance of R1, but P2 is holding it; P2 in turn requests an instance of R3, but P3 is holding it.
R4 is undisturbed.
Because P3 requests nothing further, no cycle forms: P3 can finish and release R3, letting P2 and then P1 proceed, so there is no deadlock.



Resource-Allocation Graph

A set of vertices V and a set of edges E.


 V is partitioned into two types:




  • P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.

  • R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.



 request edge – directed edge Pi → Rj




 assignment edge – directed edge Rj → Pi

 If graph contains no cycles ⇒ no deadlock.

 If graph contains a cycle ⇒

  • if only one instance per resource type, then deadlock.
  • if several instances per resource type, possibility of deadlock.
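
As a concrete illustration of the single-instance case, here is a minimal sketch of cycle detection over such a graph. The node layout, the edges[] adjacency matrix and the example in main() are all invented for illustration; a real system would keep this state in its resource manager.

    /* Minimal sketch: cycle detection in a resource-allocation graph
       with one instance per resource type. Nodes 0..N-1 cover both
       processes and resources; edges[][] is an invented adjacency
       matrix holding request edges (P -> R) and assignment edges
       (R -> P). */
    #include <stdio.h>

    #define N 8                 /* total nodes: processes + resources */

    int edges[N][N];            /* edges[u][v] == 1 iff u -> v */
    int state[N];               /* 0 = unvisited, 1 = on DFS stack, 2 = done */

    /* Returns 1 if a cycle is reachable from node u. */
    int dfs(int u) {
        state[u] = 1;
        for (int v = 0; v < N; v++) {
            if (!edges[u][v]) continue;
            if (state[v] == 1) return 1;           /* back edge: cycle */
            if (state[v] == 0 && dfs(v)) return 1;
        }
        state[u] = 2;
        return 0;
    }

    int has_cycle(void) {
        for (int u = 0; u < N; u++)
            if (state[u] == 0 && dfs(u))
                return 1;
        return 0;
    }

    int main(void) {
        /* P1 = node 0, P2 = node 1, R1 = node 4, R2 = node 5.
           P1 holds R1 (4 -> 0) and requests R2 (0 -> 5);
           P2 holds R2 (5 -> 1) and requests R1 (1 -> 4): a cycle. */
        edges[4][0] = edges[0][5] = edges[5][1] = edges[1][4] = 1;
        printf(has_cycle() ? "deadlock\n" : "no deadlock\n");
        return 0;
    }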

Resource-Allocation Graph Algorithm

 Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a dashed line.

 Claim edge converts to a request edge when the process requests the resource.

 When a resource is released by a process, the assignment edge reconverts to a claim edge.

 Resources must be claimed a priori in the system.
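
Continuing the cycle-detection sketch above (and reusing its hypothetical edges[], state[] and has_cycle(), so this must live in the same file), the avoidance check might look like this: tentatively convert the request into an assignment edge and grant it only if the graph, claim edges included, stays acyclic.

    #include <string.h>   /* memset */

    /* Sketch only: assumes claim edges are stored in edges[][]
       alongside real request/assignment edges. */
    int safe_to_grant(int pi, int rj) {
        edges[pi][rj] = 0;                /* request edge consumed         */
        edges[rj][pi] = 1;                /* tentative assignment Rj -> Pi */
        memset(state, 0, sizeof state);   /* reset DFS bookkeeping         */
        int cycle = has_cycle();
        if (cycle) {                      /* would close a cycle: deny and */
            edges[rj][pi] = 0;            /* restore the request edge      */
            edges[pi][rj] = 1;
        }
        return !cycle;
    }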

Thursday, August 20, 2009

deadlock handler

There is a certain level of deadlock detection/avoidance built into the record locking facility. This deadlock handling provides the same level of protection granted by the /usr/group standard lockf call. This deadlock detection is only valid for processes that are locking files or records on a single system. Deadlocks can only potentially occur when the system is about to put a record locking system call to sleep. A search is made for constraint loops of processes that would cause the system call to sleep indefinitely. If such a situation is found, the locking system call will fail and set errno to the deadlock error number. If a process wishes to avoid the use of the system's deadlock detection, it should set its locks using F_SETLK instead of F_SETLKW.
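
For example, a process can opt out of sleeping (and thus out of any deadlock risk) by using F_SETLK, which fails immediately instead of blocking. A small fcntl sketch; the file name and byte range are made up:

    /* Non-blocking record lock with F_SETLK: instead of sleeping (as
       F_SETLKW would, possibly triggering the kernel's deadlock
       detection), the call fails immediately if the lock is taken. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <errno.h>

    int main(void) {
        int fd = open("datafile", O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        struct flock fl = {0};
        fl.l_type   = F_WRLCK;   /* exclusive (write) lock   */
        fl.l_whence = SEEK_SET;
        fl.l_start  = 0;         /* lock the first 100 bytes */
        fl.l_len    = 100;

        if (fcntl(fd, F_SETLK, &fl) == -1) {
            if (errno == EACCES || errno == EAGAIN)
                fprintf(stderr, "record already locked; try later\n");
            else
                perror("fcntl");
            /* With F_SETLKW we would sleep here instead, and the kernel
               could fail the call with errno set to EDEADLK if sleeping
               would complete a cycle of waiting processes. */
            close(fd);
            return 1;
        }

        /* ... operate on the locked record ... */

        fl.l_type = F_UNLCK;     /* release the lock */
        fcntl(fd, F_SETLK, &fl);
        close(fd);
        return 0;
    }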

deadlock detection

  • Allow the system to enter a deadlock state

  • Detection algorithm (sketched below)

  • Recovery scheme

When should the detection algorithm be invoked?

  1. In the extreme case, every time a request for a resource cannot be granted immediately.
  • This comes with enormous cost (think about the complexity in real systems).

  2. A reasonable alternative is to invoke it periodically. But what period to use?

  • How long after a deadlock occurs are we willing to wait for it to be detected?
  • How many system resources are we willing to commit to the detection algorithm?
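
For multiple instances per resource type, the textbook detection algorithm works on Allocation/Request/Available matrices rather than a graph: repeatedly find a process whose outstanding request can be met by the currently available resources, assume it runs to completion and releases what it holds, and mark as deadlocked any process that can never be chosen. A minimal sketch with made-up matrix sizes and values:

    #include <stdio.h>

    #define NPROC 3
    #define NRES  2

    /* Made-up example: P0 holds R0 and wants R1; P1 holds R1 and
       wants R0; P2 holds and wants nothing. */
    int Allocation[NPROC][NRES] = {{1,0},{0,1},{0,0}};
    int Request[NPROC][NRES]    = {{0,1},{1,0},{0,0}};
    int Available[NRES]         = {0,0};

    int main(void) {
        int Work[NRES], Finish[NPROC] = {0};
        for (int j = 0; j < NRES; j++) Work[j] = Available[j];

        int progress = 1;
        while (progress) {
            progress = 0;
            for (int i = 0; i < NPROC; i++) {
                if (Finish[i]) continue;
                int ok = 1;
                for (int j = 0; j < NRES; j++)
                    if (Request[i][j] > Work[j]) { ok = 0; break; }
                if (ok) {   /* pretend Pi finishes and frees everything */
                    for (int j = 0; j < NRES; j++) Work[j] += Allocation[i][j];
                    Finish[i] = 1;
                    progress = 1;
                }
            }
        }
        for (int i = 0; i < NPROC; i++)
            if (!Finish[i]) printf("P%d is deadlocked\n", i);
        return 0;
    }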

deadlock characterization

Continuous mutual blocking of a “set of threads” either competing for resources or interacting with each other (via synchronization, cooperation or communication)

  • Conflicting requests by at least 2 threads

No satisfying solution in all cases

Some standard OSes don’t handle deadlocks at all (e.g. Sun’s OS), arguing that deadlocks occur only rarely

“a set of processes is in a deadlock state when every process in the set is waiting for an event that can be caused only by another process in the set”, Silberschatz et al., “Operating System Concepts”

There is no efficient solution
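
The smallest concrete instance of this definition is two threads taking two locks in opposite orders: each ends up waiting for an unlock that only the other, already-blocked thread could perform. A hypothetical pthread sketch (compile with -pthread); running it will usually hang:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

    void *t1(void *arg) {
        pthread_mutex_lock(&a);
        sleep(1);                 /* widen the window for the race */
        pthread_mutex_lock(&b);   /* blocks: t2 holds b            */
        pthread_mutex_unlock(&b);
        pthread_mutex_unlock(&a);
        return NULL;
    }

    void *t2(void *arg) {
        pthread_mutex_lock(&b);
        sleep(1);
        pthread_mutex_lock(&a);   /* blocks: t1 holds a            */
        pthread_mutex_unlock(&a);
        pthread_mutex_unlock(&b);
        return NULL;
    }

    int main(void) {
        pthread_t x, y;
        pthread_create(&x, NULL, t1, NULL);
        pthread_create(&y, NULL, t2, NULL);
        pthread_join(x, NULL);    /* never returns once deadlocked */
        pthread_join(y, NULL);
        return 0;
    }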

deadlock recovery

Progressive deadlock recovery
  • Robin Hood approach – de-allocate resources from normal packets and assign them to the recovery packet
  1. Remove a message from the deadlocked cycle
  2. Ensure its progress towards the destination

Regressive deadlock recovery

  • De-allocate resources from deadlocked packets
  • Typically destroy packets
  • Need some recovery mechanism, for example, end-to-end flow control

deadlock prevention

At least one necessary condition must not hold:
  • Mutual Exclusion: not required for sharable resources; must hold for non-sharable resources.
  • Hold-and-Wait: a process may not request new resources while holding others.
  1. Protocol 1: request all resources before it begins execution.
  2. Protocol 2: request resources only when the process has none.
  3. Both give low resource utilization; starvation is possible.
  • No Preemption: preempt resources from processes.
  1. Protocol 1: if a request cannot be satisfied, then preempt all resources held and block.
  2. Protocol 2: preempt only if the resources are needed by another running process.
  3. Requires resource state to be easily restored.
  • Circular Wait: block any request that results in a cycle.
  1. Impose a total ordering of all resource types, and require that each process requests resources in increasing order (see the lock-ordering sketch below).
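
Here is the promised lock-ordering sketch, applying the total-ordering protocol to the two-mutex deadlock shown earlier: every thread takes the locks in one agreed order (a before b), so no cycle of waiters can form. The helper names are invented; compile with -pthread:

    #include <pthread.h>

    pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;  /* rank 1 */
    pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;  /* rank 2 */

    /* Acquire both locks in increasing rank order, regardless of
       which one the caller conceptually "needs first". */
    void lock_both(void) {
        pthread_mutex_lock(&a);   /* always rank 1 first */
        pthread_mutex_lock(&b);   /* then rank 2         */
    }

    void unlock_both(void) {
        pthread_mutex_unlock(&b); /* release in reverse order */
        pthread_mutex_unlock(&a);
    }

    void *worker(void *arg) {
        lock_both();
        /* ... critical section using both resources ... */
        unlock_both();
        return NULL;
    }

    int main(void) {
        pthread_t x, y;
        pthread_create(&x, NULL, worker, NULL);
        pthread_create(&y, NULL, worker, NULL);
        pthread_join(x, NULL);    /* always terminates: no cycle possible */
        pthread_join(y, NULL);
        return 0;
    }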

Thursday, August 13, 2009

thread scheduling

Thread Scheduling


In our introduction to how threads work, we introduced the thread scheduler, the part of the OS (usually) that is responsible for sharing the available CPUs out between the various threads. How exactly the scheduler works depends on the individual platform, but various modern operating systems (notably Windows and Linux) use largely similar techniques that we'll describe here. We'll also mention some key variations between the platforms.


Note that we'll continue to talk about a single thread scheduler. On multiprocessor systems, there is generally some kind of scheduler per processor, and these then need to be coordinated in some way. (On some systems, switching on different processors is staggered to avoid contention on shared scheduling tables.) Unless otherwise specified, we'll use the term thread scheduler to refer to this overall system of coordinated per-CPU schedulers.

Across platforms, thread scheduling tends to be based on at least the following criteria:

  • a priority, or in fact usually multiple "priority" settings that we'll discuss below;
  • a quantum, or number of allocated timeslices of CPU, which essentially determines the amount of CPU time a thread is allotted before it is forced to yield the CPU to another thread of the same or lower priority (the system will keep track of the remaining quantum at any given time, plus its default quantum, which could depend on thread type and/or system configuration);
  • a state, notably "runnable" vs "waiting";
  • metrics about the behaviour of threads, such as recent CPU usage, the time since a thread last ran (i.e. had a share of CPU), or the fact that it has "just received an event it was waiting for".

Most systems use what we might dub priority-based round-robin scheduling to some extent. The general principles (sketched in code after this list) are:

  • a thread of higher priority (which is a function of base and local priorities) will preempt a thread of lower priority;
  • otherwise, threads of equal priority will essentially take turns at getting an allocated slice or quantum of CPU;
  • there are a few extra "tweaks" to make things work.
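
A toy model of that selection rule, with one FIFO run queue per priority level; every structure and value here is invented purely for illustration:

    /* Toy priority-based round-robin selection: the scheduler picks
       the head of the highest non-empty queue; a preempted or
       expired thread would be re-queued at the tail of its level. */
    #include <stdio.h>

    #define LEVELS 4    /* priority 3 = highest */
    #define QLEN  16

    typedef struct { int ids[QLEN]; int head, tail, count; } Queue;
    Queue runq[LEVELS];

    void enqueue(int prio, int tid) {
        Queue *q = &runq[prio];
        q->ids[q->tail] = tid;
        q->tail = (q->tail + 1) % QLEN;
        q->count++;
    }

    /* Returns the next thread to run, or -1 if nothing is runnable. */
    int pick_next(void) {
        for (int p = LEVELS - 1; p >= 0; p--) {   /* highest prio first */
            Queue *q = &runq[p];
            if (q->count > 0) {
                int tid = q->ids[q->head];
                q->head = (q->head + 1) % QLEN;
                q->count--;
                return tid;
            }
        }
        return -1;
    }

    int main(void) {
        enqueue(1, 10); enqueue(1, 11);   /* two equal-priority threads */
        enqueue(3, 42);                   /* one high-priority thread   */
        /* 42 preempts the others; 10 and 11 then take turns. */
        for (int i = 0; i < 3; i++) printf("run thread %d\n", pick_next());
        return 0;
    }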

States

Depending on the system, there are various states that a thread can be in. Probably the two most interesting are:

  • runnable, which essentially means "ready to consume CPU"; being runnable is generally the minimum requirement for a thread to actually be scheduled on to a CPU;
  • waiting, meaning that the thread currently cannot continue as it is waiting for a resource such as a lock or I/O, for memory to be paged in, for a signal from another thread, or simply for a period of time to elapse (sleep).

Other states include terminated, which means the thread's code has finished running but not all of the thread's resources have been cleared up, and new, in which the thread has been created but not all the resources necessary for it to be runnable have been created. Internally, the OS may distinguish between various different types of wait state (for example "waiting for a signal" vs "waiting for the stack to be paged in"), but this level of granularity is generally not available or so important to Java programs. (On the other hand, Java generally exposes to the programmer things the JVM can reasonably know about, for example whether a thread is waiting to acquire the lock on a Java object, roughly speaking, "entering a synchronized block".)

summary:


  • Executes separately from the rest of the process

  • An application can be a set of threads that cooperate and execute concurrently in the same address space

real-time scheduling

Real-Time Scheduling

Two types of real-time:
  • Hard deadline: must be met, otherwise can cause a fatal error
  • Soft deadline: met most of the time, but not mandatory

Admission control

  • Take a real-time process only if the system can guarantee the “real-time” behavior of all processes
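
One common way to implement such an admission test is a utilization bound. Under rate-monotonic scheduling, for instance, Liu and Layland's classic result admits a set of n periodic tasks if the total CPU utilization sum(Ci/Ti) does not exceed n(2^(1/n) - 1). A sketch with invented task parameters (compile with -lm):

    /* Utilization-based admission control: Ci is worst-case compute
       time, Ti the period. Task values below are made up. */
    #include <stdio.h>
    #include <math.h>

    typedef struct { double c, t; } Task;   /* compute time, period */

    /* Returns 1 if the task set passes the rate-monotonic bound. */
    int admit(const Task *tasks, int n) {
        double u = 0.0;
        for (int i = 0; i < n; i++)
            u += tasks[i].c / tasks[i].t;
        double bound = n * (pow(2.0, 1.0 / n) - 1.0);
        printf("utilization %.3f, bound %.3f\n", u, bound);
        return u <= bound;
    }

    int main(void) {
        Task set[] = { {1, 4}, {2, 8}, {1, 10} };   /* U = 0.6 */
        printf(admit(set, 3) ? "admitted\n" : "rejected\n");
        return 0;
    }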

multiprocessor scheduling

Multiprocessor/Cluster Scheduling

Design issue
  • Process/thread to processor assignment

Gang scheduling (co-scheduling)
  • Threads of the same process run together (on a multiprocessor)
  • Processes of the same application run together (on a cluster)

Dedicated processor assignment
  • Threads are assigned to specific processors and run there to completion
  • Is this a good idea?

Monday, August 10, 2009

Scheduling Algorithms

  • First-come, First-served (FCFS) scheduling is the simplest scheduling algorithm, but it can cause short processes to wait for very long processes.
  • Shortest-job-first (SJF) scheduling is provably optimal, providing the shortest average waiting time. Implementing SJF scheduling is difficult because predicting the length of the next CPU burst is difficult. The SJF algorithm is a special case of the general priority-scheduling algorithm, which simply allocates the CPU to the highest-priority process. Both priority and SJF scheduling may suffer from starvation. Aging is a technique to prevent starvation.
  • Round-robin (RR) scheduling is more appropriate for a time-shared (interactive) system. RR scheduling allocates the CPU to the first process in the ready queue for q time units, where q is the time quantum. After q time units, if the process has not relinquished the CPU, it is preempted and the process is put at the tail of the ready queue. The major problem is the selection of the time quantum. If the quantum is too large, RR scheduling degenerates to FCFS scheduling; if the quantum is too small, scheduling overhead in the form of context-switch time becomes excessive.

The FCFS algorithm is nonpreemptive; the RR algorithm is preemptive. The SJF and priority algorithms may be either preemptive or nonpreemptive.
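
The quantum trade-off is easy to see in a small simulation. The sketch below replays the classic three-burst example (24, 3, 3) under RR with q = 4; raising q toward the longest burst reproduces FCFS order, while shrinking it multiplies the number of dispatches (context switches):

    /* Round-robin simulation over sample CPU bursts: reports the
       average waiting time and the number of dispatches for a given
       quantum q. A very large q reproduces FCFS behaviour. */
    #include <stdio.h>

    #define N 3

    int main(void) {
        int burst[N] = {24, 3, 3};
        int left[N], wait[N] = {0};
        int q = 4, done = 0, dispatches = 0;

        for (int i = 0; i < N; i++) left[i] = burst[i];

        while (done < N) {
            for (int i = 0; i < N; i++) {
                if (left[i] == 0) continue;
                int run = left[i] < q ? left[i] : q;
                /* every other unfinished process waits while i runs */
                for (int j = 0; j < N; j++)
                    if (j != i && left[j] > 0) wait[j] += run;
                left[i] -= run;
                dispatches++;
                if (left[i] == 0) done++;
            }
        }

        double avg = 0;
        for (int i = 0; i < N; i++) avg += wait[i];
        printf("quantum %d: average wait %.2f, %d dispatches\n",
               q, avg / N, dispatches);   /* q=4: 5.67, 8 dispatches */
        return 0;
    }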


  • Multilevel queue algorithms allow different algorithms to be used for various classes of processes. The most common arrangement is a foreground interactive queue, which uses RR scheduling, and a background batch queue, which uses FCFS scheduling.
  • Multilevel feedback queues additionally allow processes to move from one queue to another (a small sketch of one movement rule follows).
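
A minimal sketch of one such movement rule, with invented types: a thread that burns its whole quantum looks CPU-bound and is demoted; one that blocks early looks interactive and is promoted:

    /* Multilevel feedback sketch: adjust a thread's queue level
       based on how it used its last timeslice. The Thread type and
       helper are invented for illustration. */
    #define LEVELS 4

    typedef struct { int prio; } Thread;  /* 0 = lowest, LEVELS-1 = highest */

    /* Called when a thread leaves the CPU. used_full_quantum is 1 if
       it exhausted its timeslice (CPU-bound) and 0 if it blocked
       early (interactive). */
    void requeue_feedback(Thread *t, int used_full_quantum) {
        if (used_full_quantum && t->prio > 0)
            t->prio--;                    /* demote CPU-bound thread  */
        else if (!used_full_quantum && t->prio < LEVELS - 1)
            t->prio++;                    /* promote interactive one  */
        /* then place t on the run queue for level t->prio (not shown) */
    }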

Because such a wide variety of scheduling algorithms are available, we need methods to select among them. Analytic methods use mathematical analysis to determine the performance of an algorithm. Simulation methods determine performance by imitating the scheduling algorithm on a “representative” sample of processes, and computing the resulting performance.

Operating Systems supporting threads at the kernel level must schedule threads - not processes - for execution. This is the case with Solaris 2 and Windows 2000 where both systems schedule threads using preemptive priority based on scheduling algorithm including support for real-time threads. The Linux process scheduler also uses a priority-based algorithm with real-time supports as well. The scheduling algorithms for these three operating systems typically favor interactive over batch and CPU-bound processes.systems typically favor interactive over batch and CPU-bound processes.