Priority inversion

In scheduling, priority inversion is the scenario where a low priority task holds a shared resource that is required by a high priority task. This causes the execution of the high priority task to be blocked until the low priority task has released the resource, effectively "inverting" the relative priorities of the two tasks. If some other medium priority task, one that does not depend on the shared resource, attempts to run in the interim, it will take precedence over both the low priority task and the high priority task.
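The scenario can be made concrete with a minimal sketch, assuming a single-CPU POSIX system with SCHED_FIFO scheduling and sufficient privileges to set real-time priorities; the priority values and task bodies are purely illustrative, not a definitive implementation:

 #include <pthread.h>
 #include <sched.h>
 #include <unistd.h>
 
 static pthread_mutex_t shared_resource = PTHREAD_MUTEX_INITIALIZER;
 
 static void *low_task(void *arg)      /* priority 10: holds the resource     */
 {
     pthread_mutex_lock(&shared_resource);
     sleep(5);                         /* long critical section               */
     pthread_mutex_unlock(&shared_resource);
     return NULL;
 }
 
 static void *medium_task(void *arg)   /* priority 20: pure CPU work          */
 {
     for (;;)
         ;                             /* preempts low_task indefinitely, so
                                          the mutex is never released         */
 }
 
 static void *high_task(void *arg)     /* priority 30: needs the resource     */
 {
     pthread_mutex_lock(&shared_resource);   /* blocks behind low_task        */
     pthread_mutex_unlock(&shared_resource);
     return NULL;
 }
 
 static void spawn(void *(*fn)(void *), int prio)
 {
     pthread_attr_t attr;
     struct sched_param sp = { .sched_priority = prio };
     pthread_t tid;
 
     pthread_attr_init(&attr);
     pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
     pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
     pthread_attr_setschedparam(&attr, &sp);
     pthread_create(&tid, &attr, fn, NULL);
 }
 
 int main(void)
 {
     spawn(low_task, 10);
     sleep(1);               /* let low_task acquire the mutex first          */
     spawn(high_task, 30);
     spawn(medium_task, 20);
     pause();                /* high_task now waits on low_task, which
                                medium_task starves of CPU time               */
 }

Here low_task acquires the mutex first, high_task then blocks on it, and medium_task, which never touches the mutex, monopolizes the CPU and prevents low_task from ever releasing the lock.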

In some cases, priority inversion can occur without causing immediate harm: the delayed execution of the high priority task goes unnoticed, and eventually the low priority task releases the shared resource. However, there are also many situations in which priority inversion can cause serious problems. If the high priority task is left starved of the resource, it might lead to a system malfunction or the triggering of pre-defined corrective measures, such as a watchdog timer resetting the entire system. The trouble experienced by the Mars lander Mars Pathfinder ([http://research.microsoft.com/~mbj/Mars_Pathfinder/Authoritative_Account.html What Really Happened on Mars] by Glenn Reeves of the JPL Pathfinder team; [http://research.microsoft.com/~mbj/Mars_Pathfinder/ Explanation of priority inversion problem experienced by Mars Pathfinder]) is a classic example of problems caused by priority inversion in realtime systems.

Priority inversion can also reduce the perceived performance of the system. Low priority tasks usually have a low priority because it is not important for them to finish promptly (for example, they might be a batch job or another non-interactive activity). Similarly, a high priority task has a high priority because it is more likely to be subject to strict time constraints—it may be providing data to an interactive user, or acting subject to realtime response guarantees. Because priority inversion results in the execution of the low priority task blocking the high priority task, it can lead to reduced system responsiveness, or even the violation of response time guarantees.

Solutions

The existence of this problem has been known since the 1970s, but there is no foolproof method to predict when it will occur. There are, however, many existing solutions, of which the most common ones are:

;Disabling all interrupts to protect critical sections:When disabled interrupts are used to prevent priority inversion, there are only two priorities: "preemptible" and "interrupts disabled." With no third priority, inversion is impossible. Since there is only one piece of lock data (the interrupt-enable bit), misordered locking is impossible, so deadlocks cannot occur; and since critical regions always run to completion, hangs do not occur. Note that this only works if all interrupts are disabled: if only a particular hardware device's interrupt is disabled, priority inversion is reintroduced by the hardware's prioritization of interrupts.
:A simple variation, "single shared-flag locking", is used on some systems with multiple CPUs. This scheme provides a single flag in shared memory that all CPUs use to lock every inter-processor critical section with a busy-wait. Interprocessor communications are expensive and slow on most multiple-CPU systems, so most such systems are designed to minimize shared resources; as a result, this scheme actually works well on many practical systems.
:These methods are widely used in simple embedded systems, where they are prized for their reliability, simplicity and low resource use. They require careful programming to keep critical sections very brief (under 100 microseconds in practical systems), and many software engineers consider them impractical in general-purpose computers.
:Arguably, these methods are similar to priority ceilings.
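A minimal sketch of the single shared-flag scheme follows, using C11 atomics for the busy-wait; disable_local_interrupts() and enable_local_interrupts() are hypothetical placeholders for whatever platform-specific primitives actually mask interrupts on the local CPU:

 #include <stdatomic.h>
 
 extern void disable_local_interrupts(void);   /* hypothetical, platform-specific */
 extern void enable_local_interrupts(void);    /* hypothetical, platform-specific */
 
 static atomic_flag global_lock = ATOMIC_FLAG_INIT;   /* single flag in shared memory */
 
 void enter_critical(void)
 {
     disable_local_interrupts();               /* no preemption on this CPU          */
     while (atomic_flag_test_and_set_explicit(&global_lock,
                                              memory_order_acquire))
         ;                                     /* busy-wait against other CPUs       */
 }
 
 void exit_critical(void)
 {
     atomic_flag_clear_explicit(&global_lock, memory_order_release);
     enable_local_interrupts();                /* critical section ran to completion */
 }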

;A priority ceiling:With priority ceilings, the shared mutex process (that runs the operating system code) has a characteristic (high) priority of its own, which is assigned to the task locking the mutex. This works well, provided the high priority task(s) that try to access the mutex do not have a priority higher than the ceiling priority.
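On POSIX systems that provide the _POSIX_THREAD_PRIO_PROTECT option, a ceiling can be attached to a mutex directly; the sketch below uses a hypothetical ceiling value of 30:

 #include <pthread.h>
 
 pthread_mutex_t shared_resource;
 
 void init_ceiling_mutex(void)
 {
     pthread_mutexattr_t attr;
 
     pthread_mutexattr_init(&attr);
     pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
     pthread_mutexattr_setprioceiling(&attr, 30);   /* holder runs at priority 30,
                                                       which must be at least the
                                                       highest locking task's priority */
     pthread_mutex_init(&shared_resource, &attr);
     pthread_mutexattr_destroy(&attr);
 }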

;Priority inheritance:Under the policy of priority inheritance, whenever a high priority task has to wait for a resource shared with an executing low priority task, the low priority task is temporarily assigned the priority of the highest-priority waiting task for the duration of its own use of the shared resource. This keeps medium priority tasks from pre-empting the (originally) low priority task, and thereby effectively keeps them from delaying the waiting high priority task as well.
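POSIX systems that provide the _POSIX_THREAD_PRIO_INHERIT option expose this policy through the PTHREAD_PRIO_INHERIT mutex protocol; a minimal sketch:

 #include <pthread.h>
 
 pthread_mutex_t shared_resource;
 
 void init_inheritance_mutex(void)
 {
     pthread_mutexattr_t attr;
 
     pthread_mutexattr_init(&attr);
     pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);  /* while a higher
                                              priority thread is blocked on the mutex,
                                              the holder inherits that priority        */
     pthread_mutex_init(&shared_resource, &attr);
     pthread_mutexattr_destroy(&attr);
 }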

See also

*Starvation
*Pre-emptive multitasking
*Lock-free and wait-free algorithms
*Non-blocking synchronization
*Read-copy-update
*Priority inheritance
*Priority ceiling
*Nice (Unix)

Notes

References

* [http://portal.acm.org/citation.cfm?id=358824 "Experience with Processes and Monitors in Mesa"] by Butler W. Lampson and David D. Redell, "CACM" 23(2):105-117 (Feb 1980) - One of the first (if not the first) papers to point out the priority inversion problem. It also suggested disabling interrupts and the priority ceiling protocol as solutions, noting that the former of the two cannot tolerate page faults while in use.

External links

* [http://wombat.doc.ic.ac.uk/foldoc/foldoc.cgi?priority+inversion Description from FOLDOC]
* [http://citeseer.org/cs?q=priority+inversion Citations from CiteSeer]
* [http://portal.acm.org/citation.cfm?coll=GUIDE&dl=GUIDE&id=626613 IEEE Priority Inheritance Paper by Sha, Rajkumar, Lehoczky]

