Viterbi algorithm

The Viterbi algorithm is a dynamic programming algorithm for finding the most likely sequence of hidden states – called the Viterbi path – that results in a sequence of observed events, especially in the context of Markov information sources, and more generally, hidden Markov models. The forward algorithm is a closely related algorithm for computing the probability of a sequence of observed events. These algorithms belong to the realm of information theory.

The algorithm makes a number of assumptions. First, both the observed events and hidden events must be in a sequence. This sequence often corresponds to time. Second, these two sequences need to be aligned, and an instance of an observed event needs to correspond to exactly one instance of a hidden event. Third, computing the most likely hidden sequence up to a certain point "t" must depend only on the observed event at point "t", and the most likely sequence at point "t" − 1. These assumptions are all satisfied in a first-order hidden Markov model.

The terms "Viterbi path" and "Viterbi algorithm" are also applied to related dynamic programming algorithms that discover the single most likely explanation for an observation. For example, in statistical parsing a dynamic programming algorithm can be used to discover the single most likely context-free derivation (parse) of a string, which is sometimes called the "Viterbi parse".

The Viterbi algorithm was conceived by Andrew Viterbi in 1967 as an error-correction scheme for noisy digital communication links, and it has found universal application in decoding the convolutional codes used in CDMA and GSM digital cellular, dial-up modems, satellite and deep-space communications, and 802.11 wireless LANs. It is now also commonly used in speech recognition, keyword spotting, computational linguistics, and bioinformatics. For example, in speech-to-text (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal.

Overview

The assumptions listed above can be elaborated as follows. The Viterbi algorithm operates on a state machine assumption. That is, at any time the system being modeled is in some state. There are a finite number of states, however large, that can be listed. Each state is represented as a node. Multiple sequences of states (paths) can lead to a given state, but one is the most likely path to that state, called the "survivor path". This is a fundamental assumption of the algorithm because the algorithm will examine all possible paths leading to a state and only keep the one most likely. This way the algorithm does not have to keep track of all possible paths, only one per state.

A second key assumption is that a transition from a previous state to a new state is marked by an incremental metric, usually a number. This transition is computed from the event. The third key assumption is that the events are cumulative over a path in some sense, usually additive. So the crux of the algorithm is to keep a number for each state. When an event occurs, the algorithm examines moving forward to a new set of states by combining the metric of a possible previous state with the incremental metric of the transition due to the event, and chooses the best. The incremental metric associated with an event depends on the transition possibility from the old state to the new state. For example, in data communications, it may be possible to transmit only half the symbols from an odd-numbered state and the other half from an even-numbered state. Additionally, in many cases the state transition graph is not fully connected. A simple example is a car that has three states (forward, stop and reverse) where a transition from forward to reverse is not allowed; it must first enter the stop state. After computing the combinations of incremental metric and state metric, only the best survives and all other paths are discarded. There are modifications to the basic algorithm which allow for a forward search in addition to the backward one described here.
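
A single step of this procedure is often called add-compare-select (ACS). The following is a minimal sketch of one such step, assuming additive metrics where smaller is better (as with Hamming-distance branch metrics in a convolutional decoder); the names are illustrative, and the predecessors mapping encodes a transition graph that need not be fully connected:

    def acs_step(state_metrics, predecessors, branch_metric):
        # One add-compare-select step: for each new state, add each allowed
        # predecessor's path metric to the incremental branch metric, compare
        # the totals, and select the best (here: smallest) as the survivor.
        #   state_metrics: {state: best path metric so far}
        #   predecessors:  {new_state: states allowed to transition into it}
        #   branch_metric: function (old_state, new_state) -> incremental metric
        new_metrics = {}
        survivors = {}
        for new_state, sources in predecessors.items():
            candidates = [(state_metrics[old] + branch_metric(old, new_state), old)
                          for old in sources]
            new_metrics[new_state], survivors[new_state] = min(candidates)
        return new_metrics, survivors

In the car example, predecessors['reverse'] would list 'stop' and 'reverse' but not 'forward', so no surviving path ever moves directly from forward to reverse.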

Path history must be stored. In some cases, the search history is complete because the state machine at the encoder starts in a known state and there is sufficient memory to keep all the paths. In other cases, a programmatic solution must be found for limited resources: one example is convolutional encoding, where the decoder must truncate the history at a depth large enough to keep performance to an acceptable level. Although the Viterbi algorithm is very efficient and there are modifications that reduce the computational load, the memory requirements tend to remain constant.

A concrete example

Alice talks to Bob three days in a row and discovers that on the first day he went for a walk, on the second day he went shopping, and on the third day he cleaned his apartment. Bob's choice of activity each day depends on the weather, which is either rainy or sunny, so Alice has two questions: What is the overall probability of this sequence of observations? And what is the most likely sequence of rainy/sunny days that would explain these observations? The first question is answered by the forward algorithm; the second question is answered by the Viterbi algorithm. These two algorithms are structurally so similar (in fact, they are both instances of the same abstract algorithm) that they can be implemented in a single function:

    def forward_viterbi(obs, states, start_p, trans_p, emit_p):
        T = {}
        for state in states:
            ##          prob.       V. path  V. prob.
            T[state] = (start_p[state], [state], start_p[state])
        for output in obs:
            # advance one step in time: build U (time t+1) from T (time t)
            U = {}
            for next_state in states:
                total = 0
                argmax = None
                valmax = 0
                for source_state in states:
                    (prob, v_path, v_prob) = T[source_state]
                    # emission at the source state, then transition
                    p = emit_p[source_state][output] * trans_p[source_state][next_state]
                    prob *= p
                    v_prob *= p
                    total += prob
                    if v_prob > valmax:
                        argmax = v_path + [next_state]
                        valmax = v_prob
                U[next_state] = (total, argmax, valmax)
            T = U
        ## apply sum/max to the final states:
        total = 0
        argmax = None
        valmax = 0
        for state in states:
            (prob, v_path, v_prob) = T[state]
            total += prob
            if v_prob > valmax:
                argmax = v_path
                valmax = v_prob
        return (total, argmax, valmax)

The function forward_viterbi takes the following arguments: obs is the sequence of observations, e.g. ['walk', 'shop', 'clean']; states is the set of hidden states; start_p is the start probability; trans_p are the transition probabilities; and emit_p are the emission probabilities.
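
The model itself is left implicit in this extract. The following parameter values are the ones conventionally used with this rainy/sunny example; they reproduce the probabilities quoted below:

    states = ('Rainy', 'Sunny')
    observations = ('walk', 'shop', 'clean')
    start_probability = {'Rainy': 0.6, 'Sunny': 0.4}
    transition_probability = {
        'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
        'Sunny': {'Rainy': 0.4, 'Sunny': 0.6},
    }
    emission_probability = {
        'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
        'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1},
    }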

The algorithm works on the mappings T and U. Each is a mapping from a state to a triple (prob, v_path, v_prob), where prob is the total probability of all paths from the start to the current state (constrained by the observations), v_path is the Viterbi path up to the current state, and v_prob is the probability of the Viterbi path up to the current state. The mapping T holds this information for a given point "t" in time, and the main loop constructs U, which holds similar information for time "t"+1. Because of the Markov property, information about any point in time prior to "t" is not needed.

The algorithm begins by initializing T to the start probabilities: the total probability for a state is just the start probability of that state; the Viterbi path to a start state is the singleton path consisting only of that state; and the probability of the Viterbi path is the same as the start probability.

The main loop considers the observations from obs in sequence. Its loop invariant is that T contains the correct information up to but excluding the point in time of the current observation. The algorithm then computes the triple (prob, v_path, v_prob) for each possible next state. The total probability of a given next state, total, is obtained by adding up the probabilities of all paths reaching that state. More precisely, the algorithm iterates over all possible source states. For each source state, T holds the total probability of all paths to that state. This probability is then multiplied by the emission probability of the current observation and the transition probability from the source state to the next state. The resulting probability prob is then added to total. The probability of the Viterbi path is computed in a similar fashion, but instead of adding across all paths one performs a discrete maximization. Initially the maximum value valmax is zero. For each source state, the probability of the Viterbi path to that state is known. This too is multiplied by the emission and transition probabilities, and it replaces valmax if it is greater than the current value. The Viterbi path itself is computed as the corresponding argmax of that maximization, by extending the Viterbi path that leads to the maximizing source state with the next state. The triple (prob, v_path, v_prob) computed in this fashion is stored in U, and once U has been computed for all possible next states, it replaces T, thus ensuring that the loop invariant holds at the end of the iteration.

In the end another summation/maximization is performed (this could also be done inside the main loop by adding a pseudo-observation after the last real observation).

In the running example, the forward/Viterbi algorithm is used as follows:

    def example():
        return forward_viterbi(observations,
                               states,
                               start_probability,
                               transition_probability,
                               emission_probability)

    print(example())

This reveals that the total probability of ['walk', 'shop', 'clean'] is 0.033612 and that the Viterbi path is ['Sunny', 'Rainy', 'Rainy', 'Rainy'], with probability 0.009408. The Viterbi path contains four states because the third observation was generated by the third state and a transition to the fourth state. In other words, given the observed activities, it was most likely sunny when Bob went for a walk (a sunny start with a walk has weight 0.4 × 0.6 = 0.24, against 0.6 × 0.1 = 0.06 for a rainy one) and then it started to rain the next day and kept on raining.

When implementing this algorithm, it should be noted that many languages use floating-point arithmetic; as p is small, repeated multiplication may lead to underflow in the results. A common technique to avoid this is to take the logarithm of the probabilities and use logarithms throughout the computation, the same technique used in the Logarithmic Number System. Once the algorithm has terminated, an accurate value can be obtained by performing the appropriate exponentiation.
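
As a sketch of this technique, the max/argmax part of forward_viterbi can be rewritten in log space as follows (the name viterbi_log is illustrative, and all probabilities involved are assumed to be strictly positive so that their logarithms exist):

    from math import log, exp

    def viterbi_log(obs, states, start_p, trans_p, emit_p):
        # Same recurrence as the Viterbi part of forward_viterbi, but adding
        # log probabilities instead of multiplying probabilities, which
        # avoids floating-point underflow on long observation sequences.
        V = {s: (log(start_p[s]), [s]) for s in states}
        for output in obs:
            U = {}
            for next_state in states:
                U[next_state] = max(
                    (V[src][0] + log(emit_p[src][output])
                               + log(trans_p[src][next_state]),
                     V[src][1] + [next_state])
                    for src in states)
            V = U
        log_v_prob, v_path = max(V.values())
        return exp(log_v_prob), v_path   # exponentiate only at the very end

On the example model above this yields the same Viterbi path and, up to rounding, the same probability 0.009408 as forward_viterbi.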

Extensions

With the algorithm called iterative Viterbi decoding, one can find the subsequence of an observation that best matches (on average) a given HMM. Iterative Viterbi decoding works by iteratively invoking a modified Viterbi algorithm, re-estimating the score for a filler until convergence.

An alternative algorithm, the Lazy Viterbi algorithm, has been proposed more recently. It works by not expanding any node until it actually needs to, and it usually manages to do much less work (in software) than the ordinary Viterbi algorithm for the same result; however, it is not as easy to parallelize in hardware.
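
The following is a minimal sketch of the underlying idea, not of the optimized decoder of the reference below: view the trellis as a graph whose edge costs are negated log probabilities and run a best-first (Dijkstra-style) search over it, so that a node is expanded only when it is popped from the priority queue:

    import heapq
    from math import log, exp

    def lazy_viterbi(obs, states, start_p, trans_p, emit_p):
        # Frontier entries are (cost, time, state, path), cost = -log(prob);
        # since all costs are non-negative, the first goal node popped lies
        # on the most likely path.
        frontier = [(-log(start_p[s]), 0, s, [s])
                    for s in states if start_p[s] > 0]
        heapq.heapify(frontier)
        expanded = set()
        while frontier:
            cost, t, state, path = heapq.heappop(frontier)
            if (t, state) in expanded:
                continue
            expanded.add((t, state))
            if t == len(obs):                 # all observations consumed
                return exp(-cost), path
            for next_state in states:
                step = emit_p[state][obs[t]] * trans_p[state][next_state]
                if step > 0:                  # skip impossible transitions
                    heapq.heappush(frontier,
                                   (cost - log(step), t + 1,
                                    next_state, path + [next_state]))
        return 0.0, None                      # no path explains the observations

On the running example this returns the same path and probability as the Viterbi part of forward_viterbi, while charging only for the parts of the trellis that the search actually visits.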

See also

* Baum-Welch algorithm
* Forward-backward algorithm
* Error-correcting code
* Soft output Viterbi algorithm
* Viterbi decoder

References

* Andrew J. Viterbi. [http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1054010 Error bounds for convolutional codes and an asymptotically optimum decoding algorithm], "IEEE Transactions on Information Theory" 13(2):260–269, April 1967. (The Viterbi decoding algorithm is described in section IV.)

* G. D. Forney. [http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1450960 The Viterbi algorithm]. "Proceedings of the IEEE" 61(3):268–278, March 1973.

* L. R. Rabiner. [https://doi.org/10.1109/5.18626 A tutorial on hidden Markov models and selected applications in speech recognition]. "Proceedings of the IEEE" 77(2):257–286, February 1989. (Describes the forward algorithm and Viterbi algorithm for HMMs.)

* J. Feldman, I. Abou-Faycal and M. Frigo. A Fast Maximum-Likelihood Decoder for Convolutional Codes.

External links

* [http://search.cpan.org/~koen/Algorithm-Viterbi-0.01/lib/Algorithm/Viterbi.pm An implementation of the Viterbi algorithm in Perl]
* [http://www.biais.org/blog/index.php/2007/09/05/52-viterbi-algorithm-variant-in-python An implementation of a variant of the Viterbi algorithm in Python]
* [http://pcarvalho.com/forward_viterbi/ An implementation of the demonstrated Viterbi algorithm in C#]

