Lock-free and wait-free algorithms

In contrast to algorithms that protect access to shared data with locks, lock-free and wait-free algorithms are specially designed to allow multiple threads to read and write shared data concurrently without corrupting it. "Lock-free" means that the system as a whole cannot lock up: at every step, at least one thread makes progress. Consequently, no conventional synchronization primitives such as mutexes or semaphores can be involved, because a lock-holding thread could prevent global progress if it were switched out. "Wait-free" means that every thread can complete any operation in a finite number of its own steps, regardless of the actions of other threads. All wait-free algorithms are lock-free, but the reverse is not necessarily true. An intuitive way of understanding the difference is this: in a lock-free algorithm, a thread cannot be impeded by another thread whose execution is prematurely halted; in a wait-free algorithm, a thread cannot be impeded even by another thread that is aggressively interfering with the shared state.

Lock-free algorithms are one kind of non-blocking synchronization.

Motivation

The traditional approach to multi-threaded programming is to use locks to synchronize access to shared resources. Synchronization primitives such as mutexes, semaphores, and critical sections are all mechanisms by which a programmer can ensure that certain sections of code do not execute concurrently if doing so would corrupt shared memory structures. If one thread attempts to acquire a lock that is already held by another thread, the thread will block until the lock is free.

Blocking a thread is undesirable for many reasons. An obvious one is that while the thread is blocked, it cannot accomplish anything. If the blocked thread is performing a high-priority or real-time task, it is highly undesirable to halt its progress. Other problems are less obvious: certain interactions between locks can lead to error conditions such as deadlock, livelock, and priority inversion. Using locks also involves a trade-off between coarse-grained locking, which can significantly reduce opportunities for parallelism, and fine-grained locking, which requires more careful design and is more prone to bugs.

The lock-free approach

Writing a program that uses lock-free data structures is not simply a matter of rewriting the algorithms you would normally protect with a mutex to be lock-free. Because lock-free algorithms are so difficult to write correctly, researchers focus on writing lock-free versions of basic data structures such as stacks, queues, sets, and hash tables. These allow programs to easily exchange data between threads asynchronously.
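
To make this concrete, here is a minimal sketch of one such basic structure: a lock-free stack (a Treiber stack) written with C11 atomics. This is an illustrative sketch, not production code: the names are invented for this example, and it deliberately ignores the ABA problem and safe memory reclamation, both of which a real implementation must address.

```c
#include <stdatomic.h>
#include <stdlib.h>

/* A node in the stack; each push allocates one. */
typedef struct Node {
    int value;
    struct Node *next;
} Node;

typedef struct {
    _Atomic(Node *) top;
} Stack;

/* Push: build the node privately, then CAS it onto the top.  On a
   failed CAS, `old` is reloaded with the current top, so the loop
   simply relinks and retries. */
void stack_push(Stack *s, int value) {
    Node *n = malloc(sizeof *n);
    n->value = value;
    Node *old = atomic_load(&s->top);
    do {
        n->next = old;
    } while (!atomic_compare_exchange_weak(&s->top, &old, n));
}

/* Pop: CAS the top pointer past the first node.  Returns 0 and
   leaves *out untouched if the stack is empty.  (Freeing the node
   here is unsafe in general -- see the ABA problem.) */
int stack_pop(Stack *s, int *out) {
    Node *old = atomic_load(&s->top);
    do {
        if (old == NULL)
            return 0;
    } while (!atomic_compare_exchange_weak(&s->top, &old, old->next));
    *out = old->value;
    free(old);
    return 1;
}
```

Note that neither operation ever waits: a thread that loses the CAS race retries immediately, and a thread that is suspended mid-operation cannot block the others.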

For example, consider a banking program where each thread represents a virtual teller. A lock-based approach to making a deposit could be to have one teller lock an account before depositing into it, so that two tellers don't try to deposit into the same account simultaneously. To make the process lock-free, rather than designing a lock-free "deposit" algorithm you might have the teller submit a "deposit" request asynchronously to a centralized thread that handles all deposits.

The previous example is a little misleading. There are formal definitions of the term "lock-free" which try to disallow anything that "looks" like a lock. The idea is that even if a thread crashes (or gets held up by something like priority inversion), the rest of the threads can still carry on in some way. Having a centralized thread would probably fail this definition (depending on how you define things: does a thread that submits deposits that never get processed make progress?). A common approach to satisfying the formal definition is recursive helping. In the banking program example, a recursive helping approach would allow other threads to complete the deposit of a stopped or slow thread if it was getting in the way of their actions. Recursive helping in a way emphasizes the fault-tolerance aspect of lock-freedom over the performance aspect.

While some might say that the formal definition does not really capture the right idea (or should have a different name), it is something that should be kept in mind when claiming that an algorithm is lock-free.

Implementation

With few exceptions (discussed later), lock-free and wait-free algorithms can only be implemented on hardware that supports special atomic primitives, such as compare-and-swap (often notated "CAS") or load-link/store-conditional. [http://www.audiomulch.com/~rossb/code/lockfree/ "Some notes on lock-free and wait-free algorithms"] by Ross Bencina

 CAS(addr, old, new) =
     atomic
         if *addr = old then
             *addr := new
             return true
         else
             return false
         endif
     endatomic

The CAS takes three arguments: a memory address, an old value, and a new value. If the address contains the old value, it is replaced with the new value, otherwise it is unchanged. Critically, the hardware guarantees that this "comparison and swap" operation is executed "atomically". The success of this operation is then reported back to the program. This allows an algorithm to read a value from memory, modify it, and write it back only if no other thread modified it in the meantime.

For example, consider a different implementation of the banking program where each thread represents a virtual teller. The teller reads the current value of the account (the old value), adds an amount, and uses CAS to attempt to update the account balance. If no other thread has modified the account balance in the intervening period, the CAS will succeed and the account balance will be updated. However, if a concurrent modification has occurred, the CAS will fail, and the teller will retry the update (by first fetching the new account balance). Each teller performs this CAS operation in a loop, retrying until it succeeds. This algorithm is lock-free but not wait-free, since other threads may keep writing new values and make the failing teller try again indefinitely.
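
The teller's retry loop can be sketched directly with C11 atomics. This is a minimal illustration: `balance` and `deposit` are invented names for this example, not part of any real API.

```c
#include <stdatomic.h>

/* The shared account balance, updated concurrently by teller threads. */
static _Atomic long balance;

/* Lock-free deposit: read the balance, compute the new value, and try
   to CAS it in.  On failure, atomic_compare_exchange_weak reloads
   `observed` with the current balance, so the loop retries with fresh
   data.  The loop may in principle run forever under contention, which
   is exactly why this is lock-free but not wait-free. */
void deposit(long amount) {
    long observed = atomic_load(&balance);
    while (!atomic_compare_exchange_weak(&balance, &observed,
                                         observed + amount))
        ;  /* another teller won the race; retry with the new balance */
}
```

(On most hardware the same effect could be had wait-free with a single atomic fetch-and-add; the CAS loop is shown because it generalizes to arbitrary read-modify-write updates.)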

This approach can be extended to any data structure using a universal construction due to Herlihy. [http://cs.brown.edu/people/mph/Herlihy91/p124-herlihy.pdf "Wait-Free Synchronization"] In this approach, the data structure is updated in a purely functional way to produce a new version, and compare-and-swap is then used to swing a shared pointer over to the new version. This is mostly a theoretical construct.
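
A minimal sketch of the pointer-swinging idea, assuming a tiny two-field structure standing in for the whole data structure (all names are invented for the example). Reclaiming superseded versions is deliberately omitted here, so old versions leak; the cost of copying the entire structure on every update, plus the reclamation problem, is part of why the construction is mostly theoretical.

```c
#include <stdatomic.h>
#include <stdlib.h>

/* An immutable snapshot of the whole data structure.  A published
   version is never modified in place. */
typedef struct {
    long deposits;  /* number of deposits recorded */
    long total;     /* running total */
} Ledger;

static _Atomic(Ledger *) current;

/* Apply an update functionally: copy the current version, modify the
   copy, then CAS the shared pointer over to it.  If another thread
   swung the pointer first, discard the copy and retry against the
   newly published version. */
void record_deposit(long amount) {
    for (;;) {
        Ledger *prev = atomic_load(&current);
        Ledger *next = malloc(sizeof *next);
        *next = *prev;              /* copy the entire structure */
        next->deposits += 1;
        next->total += amount;
        if (atomic_compare_exchange_strong(&current, &prev, next))
            return;                 /* our version is now published */
        free(next);                 /* lost the race; retry */
    }
}
```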

There are few lock-free and wait-free algorithms that can be implemented without special atomic primitives. These exceptions include:
* single-reader single-writer ring buffer FIFO on a uniprocessor
* Read-copy-update with a single writer and any number of readers. (The readers are lock-free and wait-free; the writer is usually lock-free and wait-free, until it needs to reclaim memory).
* Dekker's algorithm for two threads is lock-free but not wait-free.
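The first case above can be sketched without any CAS or other read-modify-write primitive: with a single producer and a single consumer, each index is written by only one thread, so plain atomic loads and stores suffice. The names and fixed capacity below are illustrative choices for this sketch.

```c
#include <stdatomic.h>
#include <stdbool.h>

#define CAP 8  /* capacity; one slot stays empty to tell full from empty */

typedef struct {
    int buf[CAP];
    _Atomic unsigned head;  /* next slot to read; written only by the consumer */
    _Atomic unsigned tail;  /* next slot to write; written only by the producer */
} Ring;

/* Producer side: returns false when the buffer is full.  The element
   is written before the tail index is published. */
bool ring_push(Ring *r, int v) {
    unsigned tail = atomic_load(&r->tail);
    unsigned next = (tail + 1) % CAP;
    if (next == atomic_load(&r->head))
        return false;               /* full */
    r->buf[tail] = v;
    atomic_store(&r->tail, next);   /* publish after the write */
    return true;
}

/* Consumer side: returns false when the buffer is empty. */
bool ring_pop(Ring *r, int *out) {
    unsigned head = atomic_load(&r->head);
    if (head == atomic_load(&r->tail))
        return false;               /* empty */
    *out = r->buf[head];
    atomic_store(&r->head, (head + 1) % CAP);
    return true;
}
```

Both operations complete in a bounded number of steps regardless of the other thread, so this structure is wait-free for its one reader and one writer.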

See also

*ABA problem
*Concurrency control
*Deadlock
*Lock (software engineering)
*Memory barrier
*Mutual exclusion
*Non-blocking synchronization
*Pre-emptive multitasking
*Priority inversion
*Read-copy-update
*Resource starvation
*Room synchronization
*Software transactional memory

External links

*Survey " [http://www.audiomulch.com/~rossb/code/lockfree/ Some Notes on Lock-Free and Wait-Free Algorithms] " by Ross Bencina
*java.util.concurrent.atomic (Java) – supports lock-free and thread-safe programming on single variables
* [http://msdn2.microsoft.com/en-us/library/system.threading.interlocked.aspx System.Threading.Interlocked] - Provides atomic operations for variables that are shared by multiple threads (.NET Framework)
* [http://jail-ust.sourceforge.net/index.php?section=3&page=1 The Jail-Ust Container Library]
* [http://www.cl.cam.ac.uk/Research/SRG/netos/lock-free/ Practical lock-free data structures]
*Thesis " [http://www.cs.chalmers.se/~phs/phd.pdf Efficient and Practical Non-Blocking Data Structures] " (1414 KB) by Håkan Sundell
* [http://www.cs.chalmers.se/~phs/warp/project.html WARPing - Wait-free techniques for Real-time Processing]
* [http://www.cs.chalmers.se/~yzhang/thesis.pdf Non-blocking Synchronization: Algorithms and Performance Evaluation.] (1926 KB) by Yi Zhang
*" [http://dissertations.ub.rug.nl/faculties/science/2005/h.gao/ Design and verification of lock-free parallel algorithms] " by Hui Gao
*" [http://citeseer.ist.psu.edu/114960.html Asynchronous Data Sharing in Multiprocessor Real-Time Systems Using Process Consensus] " by Jing Chen and Alan Burns
*Discussion " [http://groups.google.com/groups?group=comp.programming.threads&threadm=ec1c3924.0410171103.568fa38a%40posting.google.com lock-free versus lock-based algorithms] "
* [http://atomic-ptr-plus.sourceforge.net/ Atomic Ptr Plus Project] - collection of various lock-free synchronization primitives
* [http://webpages.charter.net/appcore/ AppCore: A Portable High-Performance Thread Synchronization Library] - An Effective Marriage between Lock-Free and Lock-Based Algorithms
* [http://c2.com/cgi/wiki?WaitFreeSynchronization WaitFreeSynchronization] and [http://c2.com/cgi/wiki?LockFreeSynchronization LockFreeSynchronization] at the Portland Pattern Repository
* [http://www.hpl.hp.com/research/linux/atomic_ops/index.php4 Multiplatform library with atomic operations]
* [http://www.mgix.com/snippets/?LockFree A simple lock-free LIFO implementation]

