Markov network

A Markov network, or Markov random field, is a model of the (full) joint probability distribution of a set \mathcal{X} of random variables having the Markov property. A Markov network is similar to a Bayesian network in its representation of dependencies. It can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it cannot represent certain dependencies that a Bayesian network can (such as induced dependencies). The prototypical Markov network is the Ising model; indeed, the Markov network was introduced as the general setting for the Ising model (Kindermann and Snell, 1980).

Formal Definition

Formally, a Markov network consists of:

* an undirected graph G = (V, E), where each vertex v ∈ V represents a random variable in \mathcal{X} and each edge {u, v} ∈ E represents a dependency between the random variables u and v,
* a set of functions f_k (also called "factors" or "clique factors" and sometimes "features"), where each f_k has as its domain some clique (or subclique) k in G. Each f_k is a mapping from possible joint assignments (to the elements of k) to non-negative real numbers.
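
As a concrete illustration, here is a minimal representation sketch in Python. The variable names, graph, and factor values are invented for the example, not taken from any particular library:

```python
# The chain A - B - C as an undirected graph with one factor per
# clique; each factor maps joint assignments of binary variables
# to non-negative reals. All numbers are illustrative.
V = ["A", "B", "C"]
E = [("A", "B"), ("B", "C")]

factors = {
    ("A", "B"): {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0},  # favors A = B
    ("B", "C"): {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 2.0, (1, 1): 1.0},  # favors B != C
}
```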

Joint Distribution Function

The joint distribution (or Gibbs measure) represented by a Markov network is given by:

: P(X=x) = \frac{1}{Z} \prod_{k} f_k(x_{\{k\}})

where x_{\{k\}} is the state of the random variables in the kth clique, and the product runs over all the cliques in the graph. Here, Z is the partition function, so that

: Z = \sum_{x \in \mathcal{X}} \prod_{k} f_k(x_{\{k\}}).
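
For a network small enough to enumerate, both the product of factors and the partition function can be computed by brute force. The sketch below does this for the toy chain above (the factors are redefined here so the snippet stands alone):

```python
import itertools

# Illustrative clique factors for the chain A - B - C.
f_ab = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
f_bc = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 2.0, (1, 1): 1.0}

def unnormalized(a, b, c):
    # Product of clique factors at one joint assignment.
    return f_ab[(a, b)] * f_bc[(b, c)]

# Partition function Z: sum of the factor product over all states.
Z = sum(unnormalized(a, b, c)
        for a, b, c in itertools.product((0, 1), repeat=3))

def prob(a, b, c):
    return unnormalized(a, b, c) / Z

# Sanity check: the Gibbs measure sums to one.
assert abs(sum(prob(a, b, c)
               for a, b, c in itertools.product((0, 1), repeat=3)) - 1.0) < 1e-12
print(prob(1, 1, 0))  # probability of the assignment A=1, B=1, C=0
```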

In practice, a Markov network is often conveniently expressed as a log-linear model, given by

:f_k = \exp\left(w_k \phi_k(x_{\{k\}})\right)

so that

: P(X=x) = \frac{1}{Z} \exp\left(\sum_{k} w_k \phi_k(x_{\{k\}})\right)

with partition function

: Z = \sum_{x \in \mathcal{X}} \exp\left(\sum_{k} w_k \phi_k(x_{\{k\}})\right).

In this context, the w_k are weights and the \phi_k are potential functions from the clique k to the reals. These functions are also sometimes called Gibbs potentials; the term "potential" comes from physics, where it can be interpreted literally as the potential energy between nearest neighbors.

Log-linear models are especially convenient for their interpretation: a log-linear model can provide a much more compact representation for many distributions, especially when variables have large domains. They are also convenient because their negative log-likelihoods are convex. Unfortunately, though the likelihood of a log-linear Markov network is convex, evaluating the likelihood or its gradient requires inference in the model, which is in general computationally infeasible.
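
To connect the two parameterizations, the sketch below rewrites the toy factors from earlier in log-linear form; the weights (log 3, log 2) and indicator features are illustrative choices that reproduce exactly the same distribution:

```python
import itertools
import math

# Log-linear form of the toy model: each clique factor is
# exp(w_k * phi_k(x_k)), with indicator features.
w_ab, w_bc = math.log(3.0), math.log(2.0)

def phi_ab(a, b):  # feature: A and B agree
    return 1.0 if a == b else 0.0

def phi_bc(b, c):  # feature: B and C disagree
    return 1.0 if b != c else 0.0

def score(a, b, c):
    # sum_k w_k * phi_k(x_k): the log of the unnormalized probability.
    return w_ab * phi_ab(a, b) + w_bc * phi_bc(b, c)

Z = sum(math.exp(score(a, b, c))
        for a, b, c in itertools.product((0, 1), repeat=3))

def prob(a, b, c):
    return math.exp(score(a, b, c)) / Z

print(prob(1, 1, 0))  # same value as the factor-product form: 0.25
```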

Markov property

Markov networks have the Markov property: the probability that a vertex u of the graph is in the state x_u depends only on the nearest neighbors of u; given those neighbors, the vertex is conditionally independent of all other vertices in the graph. One writes this as

:P(X_u=x_u \mid X_v,\, v \neq u) = P(X_u=x_u \mid X_v,\, v \in N_u).

The set N_u of the nearest neighbors of u is sometimes referred to as the Markov blanket of u.
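
The property can be checked numerically on the toy chain A - B - C: since C is not a neighbor of A, conditioning on C should not change P(A | B). A brute-force check, using the same illustrative factors as before:

```python
import itertools

# Illustrative clique factors for the chain A - B - C.
f_ab = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
f_bc = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 2.0, (1, 1): 1.0}

def p(a, b, c):
    return f_ab[(a, b)] * f_bc[(b, c)]  # unnormalized; Z cancels below

def cond_a(a, b, c):
    # P(A=a | B=b, C=c), by normalizing over the values of A.
    return p(a, b, c) / sum(p(x, b, c) for x in (0, 1))

# Conditioning on different values of C leaves P(A | B) unchanged,
# because {B} is the Markov blanket of A in this graph.
for b, c1, c2 in itertools.product((0, 1), repeat=3):
    assert abs(cond_a(1, b, c1) - cond_a(1, b, c2)) < 1e-12
```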

Conversely, one may define a Markov network to be a collection of random variables having the Markov property; in this case, it can be proven that such a network can always be written in terms of the Gibbs measure, as shown above. The potential thus obtained is unique up to an additive constant; it is called the canonical potential. More precisely, there is at least one state of the variables X which has maximum probability; this state is referred to as the ground state. The canonical potential is defined so that the potential of the ground state is zero.

Inference

As in a Bayesian network, one may calculate the conditional distribution of a set of nodes V' = \{v_1, \ldots, v_i\} given values for another set of nodes W' = \{w_1, \ldots, w_j\} by summing over all possible assignments to the remaining nodes u \notin V' \cup W'; this is called exact inference. However, exact inference is a #P-complete problem, and thus computationally intractable in the general case. Approximation techniques such as Markov chain Monte Carlo and loopy belief propagation are often more feasible in practice. Some particular subclasses of MRFs, such as trees, have polynomial-time inference algorithms; discovering such subclasses is an active research topic. There are also subclasses of MRFs which permit efficient MAP (most likely assignment) inference; examples include associative networks.
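
As a sketch of one such approximation, the following implements single-site Gibbs sampling, a simple Markov chain Monte Carlo method, on a four-node Ising cycle. The graph, coupling strength, and iteration counts are illustrative choices, not prescribed values:

```python
import math
import random

# Four spins on a cycle, coupled with strength J (Ising model).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
J = 0.8
neighbors = {v: [u for e in edges for u in e if v in e and u != v]
             for v in range(4)}

def gibbs_step(x):
    # Resample one spin from its conditional distribution given its
    # Markov blanket: P(x_v = +1 | neighbors) = sigmoid(2 J field).
    v = random.randrange(4)
    field = sum(x[u] for u in neighbors[v])
    p_up = 1.0 / (1.0 + math.exp(-2.0 * J * field))
    x[v] = 1 if random.random() < p_up else -1

random.seed(0)
x = [random.choice((-1, 1)) for _ in range(4)]
agree = 0
for t in range(20000):
    gibbs_step(x)
    if t >= 5000:                    # discard burn-in samples
        agree += (x[0] == x[1])
print("P(X0 = X1) approx:", agree / 15000)  # estimated agreement marginal
```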

Conditional Random Fields

One notable variant of a Markov network is a conditional random field, in which each random variable may also be conditioned upon a set of global observations o. In this model, each function \phi_k is a mapping from all assignments to both the clique k and the observations o to the non-negative real numbers. This form of the Markov network may be more appropriate for producing discriminative classifiers, which do not model the distribution over the observations.
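
A minimal sketch of such a conditional factor follows; the binary feature, weight, and observation encoding are invented purely for illustration:

```python
import math

# A CRF-style clique potential: the feature sees both the label
# pair (y_i, y_j) and a global observation o (here a boolean,
# "these two positions look similar").
w = 1.5

def phi(y_i, y_j, o):
    # Feature fires when the labels agree and the observation
    # suggests they should.
    return 1.0 if (y_i == y_j) and o else 0.0

def factor(y_i, y_j, o):
    # exp(w * phi): a non-negative clique factor, as in the text.
    return math.exp(w * phi(y_i, y_j, o))

print(factor(1, 1, True), factor(1, 0, True), factor(1, 1, False))
```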

See also

* Maximum entropy method
* Hopfield network
* Graphical model
* Markov chain
* Markov logic network

References

* Ross Kindermann and J. Laurie Snell, [http://www.ams.org/online_bks/conm1/ Markov Random Fields and Their Applications] (1980), American Mathematical Society, ISBN 0-8218-5001-6.