Natural evolution strategy

Natural evolution strategies (NES) are a family of numerical optimization algorithms for black-box problems. Similar in spirit to evolution strategies, they iteratively update the (continuous) parameters of a search distribution by following the natural gradient towards higher expected fitness.

Method

The general procedure is as follows: the parameterized search distribution is used to produce a batch of search points, and the fitness function is evaluated at each such point. The distribution’s parameters (which include strategy parameters) allow the algorithm to adaptively capture the (local) structure of the fitness function. For example, in the case of a Gaussian distribution, this comprises the mean and the covariance matrix. From the samples, NES estimates a search gradient on the parameters towards higher expected fitness. NES then performs a gradient ascent step along the natural gradient, a second order method which, unlike the plain gradient, renormalizes the update w.r.t. uncertainty. This step is crucial, since it prevents oscillations, premature convergence, and undesired effects stemming from a given parameterization. The entire process reiterates until a stopping criterion is met.

All members of the NES family operate based on the same principles. They differ in the type of probability distribution and the gradient approximation method used. Different search spaces require different search distributions; for example, in low dimensionality it can be highly beneficial to model the full covariance matrix. In high dimensions, on the other hand, a more scalable alternative is to limit the covariance to the diagonal only. In addition, highly multi-modal search spaces may benefit from more heavy-tailed distributions (such as Cauchy, as opposed to the Gaussian). A last distinction arises between distributions where we can analytically compute the natural gradient, and more general distributions where we need to estimate it from samples.

Search gradients

Let θ denote the parameters of the search distribution \pi(x \,|\, \theta) and f(x) the fitness function evaluated at x. NES then pursues the objective of maximizing the expected fitness under the search distribution

J(\theta) = \operatorname{E}_\theta[f(x)] = \int f(x) \; \pi(x \,|\, \theta) \; dx

through gradient ascent. The gradient can be rewritten as

 \nabla_{\theta} J(\theta) = \nabla_{\theta} \int f(x) \; \pi(x \,|\, \theta) \; dx
   = \int f(x) \; \nabla_{\theta} \pi(x \,|\, \theta) \; dx
   = \int f(x) \; \nabla_{\theta} \pi(x \,|\, \theta) \; \frac{\pi(x \,|\, \theta)}{\pi(x \,|\, \theta)} \; dx
   = \int \Big[ f(x) \; \nabla_{\theta} \log\pi(x \,|\, \theta) \Big] \; \pi(x \,|\, \theta) \; dx
   = \operatorname{E}_\theta \left[ f(x) \; \nabla_{\theta} \log\pi(x \,|\, \theta)\right]

that is, the expected value of f(x) times the log-derivatives at x. In practice, this expectation can be approximated by a Monte Carlo estimate based on a finite number λ of samples

\nabla_{\theta} J(\theta) \approx \frac{1}{\lambda} \sum_{k=1}^{\lambda} f(x_k) \; \nabla_{\theta} \log\pi(x_k \,|\, \theta).

Finally, the parameters of the search distribution can be updated iteratively

\theta \leftarrow \theta + \eta \nabla_{\theta} J(\theta)
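The sampling, Monte Carlo gradient estimate, and update above can be sketched in a few lines of numpy. This is an illustrative toy, not part of the NES formulation itself: it uses an isotropic Gaussian search distribution in which only the mean is adapted (σ is held fixed), and the fitness function and all hyperparameter values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy fitness: negative sphere function, maximized at the origin.
    return -np.sum(x ** 2)

def search_gradient_step(mu, sigma, lam=50, eta=0.05):
    """One plain search-gradient ascent step for pi(x | mu, sigma^2 I),
    adapting only the mean mu (sigma stays fixed in this sketch)."""
    xs = mu + sigma * rng.standard_normal((lam, mu.size))  # draw lambda samples
    fs = np.array([f(x) for x in xs])                      # evaluate fitness
    grads = (xs - mu) / sigma ** 2          # log-derivatives w.r.t. the mean
    grad_J = (fs[:, None] * grads).mean(axis=0)            # Monte Carlo estimate
    return mu + eta * grad_J                               # gradient ascent step

mu = np.array([2.0, -1.5])
for _ in range(300):
    mu = search_gradient_step(mu, sigma=0.3)
print(mu)  # ends up near the optimum at the origin
```

For a Gaussian with fixed covariance, the log-derivative with respect to the mean is (x − μ)/σ², which is what `grads` computes.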

Natural gradient ascent

Instead of using the plain stochastic gradient for updates, NES follows the natural gradient, which has been shown to possess numerous advantages over the plain (vanilla) gradient, e.g.:

  • the gradient direction is independent of the parameterization of the search distribution;
  • the update magnitudes are automatically adjusted based on uncertainty, which in turn speeds up convergence on plateaus and ridges.

The NES update is therefore

\theta \leftarrow \theta + \eta \mathbf{F}^{-1}\nabla_{\theta} J(\theta) ,

where \mathbf{F} is the Fisher information matrix. The Fisher matrix can sometimes be computed exactly, otherwise it is estimated from samples, reusing the log-derivatives \nabla_\theta \log\pi(x|\theta).
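The sample-based Fisher estimate mentioned above reuses exactly the log-derivatives computed for the gradient, via F ≈ (1/λ) Σ ∇log π ∇log πᵀ. A minimal numpy sketch, again assuming an isotropic Gaussian with the mean as the parameter (all concrete values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Empirical Fisher matrix for the mean of an isotropic Gaussian search
# distribution, estimated from the same log-derivatives used for the gradient.
mu, sigma, lam = np.zeros(2), 0.5, 1000
xs = mu + sigma * rng.standard_normal((lam, mu.size))
grads = (xs - mu) / sigma ** 2          # log-derivatives w.r.t. the mean
F = grads.T @ grads / lam               # F ~ E[grad log pi * (grad log pi)^T]
print(F)  # roughly [[4, 0], [0, 4]]
```

For this particular parameterization the exact Fisher matrix is known in closed form, I/σ² (here 4·I), so the estimate can be checked against it; the natural-gradient step would then use `np.linalg.solve(F, grad_J)` rather than forming the inverse explicitly.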

Fitness shaping

NES utilizes rank-based fitness shaping in order to render the algorithm more robust, and invariant under monotonically increasing transformations of the fitness function. For this purpose, the fitness of the population is transformed into a set of utility values u_1 \geq \dots \geq u_\lambda. Let x_k denote the kth best individual. Replacing fitness with utility, the gradient estimate becomes

 \nabla_{\theta} J (\theta) = \sum_{k=1}^\lambda u_k \; \nabla_{\theta} \log\pi(x_k \,|\, \theta) .

The choice of utility function is a free parameter of the algorithm.
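As an illustration of one such choice, the sketch below implements a log-based shaping function commonly used in the NES literature, u_k ∝ max(0, log(λ/2 + 1) − log k), normalized so that the utilities sum to zero; other monotone rank-to-utility mappings work equally well.

```python
import numpy as np

def rank_utilities(fitnesses):
    """Map raw fitnesses to fixed, rank-based utilities (a common choice
    from the NES literature). Because only ranks matter, the result is
    invariant under monotonically increasing fitness transformations."""
    lam = len(fitnesses)
    ranks = np.empty(lam, dtype=int)
    ranks[np.argsort(fitnesses)[::-1]] = np.arange(1, lam + 1)  # rank 1 = best
    raw = np.maximum(0.0, np.log(lam / 2 + 1) - np.log(ranks))
    return raw / raw.sum() - 1.0 / lam   # utilities sum to zero

u = rank_utilities(np.array([3.0, -1.0, 10.0, 0.5]))
# The best sample (10.0) receives the largest utility, and applying a
# monotone transform such as cubing the fitnesses leaves u unchanged.
```

Zero-mean utilities act like a built-in baseline: samples worse than average push the distribution away from themselves, and the gradient estimate is unaffected by adding a constant to all fitnesses.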

Pseudocode

input: f, \; \; \theta_{init}

repeat
    for k = 1 \ldots \lambda do                                              // λ is the population size
        draw sample x_k \sim \pi(\cdot | \theta)
        evaluate fitness f(x_k)
        calculate log-derivatives \nabla_\theta \log\pi(x_k | \theta)
    end
    assign the utilities u_k                                                 // based on rank
    estimate the gradient \nabla_\theta J \leftarrow \frac{1}{\lambda}\sum_{k=1}^{\lambda} u_k \cdot \nabla_\theta\log\pi(x_k | \theta)
    estimate \mathbf{F} \leftarrow \frac{1}{\lambda}\sum_{k=1}^{\lambda} \nabla_\theta\log\pi(x_k | \theta) \; \nabla_\theta\log\pi(x_k | \theta)^{\top}    // or compute it exactly
    update parameters \theta \leftarrow \theta + \eta \cdot \mathbf{F}^{-1} \nabla_\theta J    // η is the learning rate
until stopping criterion is met
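The pseudocode above can be condensed into a short, self-contained numpy sketch. To keep it minimal, it again assumes a Gaussian search distribution in which only the mean θ is adapted (σ fixed), uses simple zero-mean linear rank utilities rather than a tuned shaping function, and estimates the Fisher matrix from the samples; hyperparameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def nes_minimal(f, theta0, sigma=0.3, lam=50, eta=0.5, iters=200):
    """Minimal sketch of the NES pseudocode for a Gaussian search
    distribution, adapting only the mean theta (sigma held fixed)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        xs = theta + sigma * rng.standard_normal((lam, theta.size))
        fs = np.array([f(x) for x in xs])                 # evaluate fitness
        grads = (xs - theta) / sigma ** 2                 # log-derivatives
        # Zero-mean linear rank utilities: worst -> -0.5, best -> +0.5.
        ranks = np.empty(lam)
        ranks[np.argsort(fs)] = np.arange(lam)
        u = ranks / (lam - 1) - 0.5
        grad_J = (u[:, None] * grads).mean(axis=0)        # utility-weighted gradient
        F = grads.T @ grads / lam                         # empirical Fisher
        theta = theta + eta * np.linalg.solve(F, grad_J)  # natural-gradient step
    return theta

# Maximize f(x) = -||x - 1||^2; the optimum is the all-ones vector.
opt = nes_minimal(lambda x: -np.sum((x - 1.0) ** 2), np.zeros(3))
```

Note the use of `np.linalg.solve` instead of explicitly inverting F, and that the rank utilities make each step invariant to monotone rescalings of f, exactly as the fitness-shaping section describes.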


Wikimedia Foundation. 2010.
