Temporal difference learning

Temporal difference (TD) learning is a prediction method, used mostly for solving the reinforcement learning problem. "TD learning is a combination of Monte Carlo ideas and dynamic programming (DP) ideas." [2] TD resembles a Monte Carlo method because it learns by sampling the environment according to some policy, and it is related to dynamic programming techniques because it updates its current estimate based on previously learned estimates (a process known as bootstrapping). The TD learning algorithm is related to the temporal difference model of animal learning.

As a prediction method, TD learning takes into account the fact that subsequent predictions are often correlated in some sense. In standard supervised predictive learning, one learns only from actually observed values: a prediction is made, and when the observation becomes available, the prediction is adjusted to better match the observation. The core idea of TD learning, as elucidated in [1], is that we adjust predictions to match other, more accurate predictions about the future. This procedure is a form of bootstrapping, as illustrated by the following example (taken from [1]):

: Suppose you wish to predict the weather for Saturday, and you have some model that predicts Saturday's weather given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should already have a pretty good idea of what the weather will be on Saturday, and thus be able to change, say, Monday's model before Saturday arrives.
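A minimal sketch of the weather example, with made-up daily predictions and learning rate (none of the numbers come from the sources cited here): each day's estimate of Saturday's weather is nudged toward the next day's, presumably better, estimate instead of waiting for Saturday's outcome.

    # All numbers are made up for illustration: each entry is that day's
    # estimated probability of rain on Saturday.
    predictions = {
        "Mon": 0.50, "Tue": 0.55, "Wed": 0.70, "Thu": 0.65, "Fri": 0.90,
    }
    alpha = 0.5   # learning rate

    # TD-style update: move each day's prediction toward the following day's
    # prediction rather than toward the eventual observation.
    days = list(predictions)
    for today, tomorrow in zip(days, days[1:]):
        td_error = predictions[tomorrow] - predictions[today]
        predictions[today] += alpha * td_error

    print(predictions)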

Mathematically speaking, both the standard and the TD approach try to optimise some cost function related to the error in our prediction of the expectation of some random variable, $E[z]$. However, while the standard approach in some sense assumes $E[z] = z$ (the actually observed value), the TD approach uses a model. In the particular case of reinforcement learning, which is the major application of TD methods, $z$ is the total return and $E[z]$ is given by the Bellman equation of the return.
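As a concrete illustration of this for reinforcement learning, the sketch below applies the tabular TD(0) update $V(s) \leftarrow V(s) + \alpha\,[r + \gamma V(s') - V(s)]$ to a small Markov reward process; the chain, rewards, and step size are made-up assumptions chosen only for illustration.

    # A tiny, made-up Markov reward process: states 0..3 step right until the
    # terminal state 4, receiving reward 1 only on the final transition.
    N_STATES = 5
    TERMINAL = 4
    gamma, alpha = 0.9, 0.1

    V = [0.0] * N_STATES          # value estimates; V[TERMINAL] stays 0

    def step(s):
        """Move one state to the right; reward 1 on entering the terminal state."""
        s_next = s + 1
        reward = 1.0 if s_next == TERMINAL else 0.0
        return s_next, reward

    for episode in range(500):
        s = 0
        while s != TERMINAL:
            s_next, r = step(s)
            # TD(0): move V(s) toward the bootstrapped target r + gamma * V(s_next)
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next

    print([round(v, 3) for v in V])   # approaches [gamma**3, gamma**2, gamma, 1.0, 0.0]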

TD algorithm in neuroscience

The TD algorithm has also received attention in the field of neuroscience. Researchers discovered that the firing rate of dopamine neurons in the ventral tegmental area (VTA) and substantia nigra (SNc) appears to mimic the error function of the algorithm [3]. The error function reports the difference between the estimated reward at any given state or time step and the actual reward received: the larger the error function, the larger the difference between the expected and actual reward. When this error is paired with a stimulus that accurately reflects a future reward, it can be used to associate the stimulus with the future reward.

Dopamine cells appear to behave in a similar manner. In one experiment, recordings of dopamine cells were made while a monkey was trained to associate a stimulus with a reward of juice [4]. Initially the dopamine cells increased their firing rate when the monkey received the juice, indicating a difference between expected and actual reward. Over time, this increase in firing propagated back to the earliest reliable stimulus for the reward. Once the monkey was fully trained, the firing no longer increased at delivery of the predicted reward. This closely mimics how the error function in TD is used for reinforcement learning.
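A rough simulation in the same spirit (not the published model itself) shows the TD error shifting from the time of reward to the time of the earliest predictive stimulus over repeated trials; the trial structure, discount, and learning rate below are illustrative assumptions.

    T = 10                  # time steps per trial
    CUE, REWARD_T = 2, 8    # stimulus appears at t = 2 and reliably predicts reward at t = 8
    gamma, alpha = 1.0, 0.1

    V = [0.0] * (T + 1)     # predicted future reward at each point in the trial

    def run_trial():
        """Run one trial and return the TD error at every time step."""
        deltas = []
        for t in range(1, T + 1):
            r = 1.0 if t == REWARD_T else 0.0
            # reward-prediction error on the transition from t-1 to t
            delta = r + gamma * V[t] - V[t - 1]
            if t - 1 >= CUE:        # states before cue onset carry no predictive
                V[t - 1] += alpha * delta   # stimulus, so no prediction is learned there
            deltas.append(delta)
        return deltas

    for _ in range(500):
        deltas = run_trial()

    # Early in training the error is largest at the reward step; after training it
    # is largest at cue onset and near zero at the (now fully predicted) reward,
    # qualitatively matching the dopamine recordings described above.
    print({t: round(d, 2) for t, d in enumerate(deltas, start=1)})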

The relationship between the model and potential neurological function has generated research attempting to use TD to explain many findings from behavioral research [5]. It has also been used to study conditions such as schizophrenia, and the consequences of pharmacological manipulations of dopamine on learning [6].

Mathematical background

Let $\lambda_t$ be the reinforcement received on time step $t$, and let $\bar V_t$ be the correct prediction: the discounted sum of all future reinforcement, where discounting by powers of a factor $\gamma$ (with $0 \le \gamma < 1$) makes reinforcement at distant time steps less important:

: $\bar V_t = \sum_{i=0}^{\infty} \gamma^i \lambda_{t+i}, \qquad 0 \le \gamma < 1.$

Separating out the first term and shifting the summation index so that it again starts from 0 gives

: $\bar V_t = \lambda_t + \sum_{i=1}^{\infty} \gamma^i \lambda_{t+i} = \lambda_t + \sum_{i=0}^{\infty} \gamma^{i+1} \lambda_{t+i+1} = \lambda_t + \gamma \sum_{i=0}^{\infty} \gamma^i \lambda_{t+1+i} = \lambda_t + \gamma \bar V_{t+1}.$

Thus the reinforcement at time $t$ is the difference between the ideal prediction at $t$ and the discounted ideal prediction at $t+1$:

: $\lambda_t = \bar V_t - \gamma \bar V_{t+1}.$
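This recursion can be checked numerically on any finite reinforcement sequence (treated as zero after it ends); the sequence and discount factor below are made up purely to illustrate the identity.

    # Numerical check of  V_bar(t) = lambda_t + gamma * V_bar(t+1)  on a
    # made-up finite reinforcement sequence.
    gamma = 0.8
    lam = [1.0, 0.0, 2.0, 0.5, 0.0, 3.0]          # reinforcements lambda_0 .. lambda_5

    def V_bar(t):
        """Discounted sum of reinforcement from time t onward."""
        return sum(gamma**i * lam[t + i] for i in range(len(lam) - t))

    for t in range(len(lam) - 1):
        lhs = V_bar(t)
        rhs = lam[t] + gamma * V_bar(t + 1)
        assert abs(lhs - rhs) < 1e-12, (t, lhs, rhs)

    print("recursion holds for every t")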

TD-Lambda is a learning algorithm invented by Richard Sutton, based on earlier work on temporal difference learning by Arthur Samuel [2]. This algorithm was famously applied by Gerald Tesauro to create TD-Gammon, a program that learned to play the game of backgammon nearly as well as expert human players [7]. The lambda ($\lambda$) parameter refers to the trace-decay parameter, with $0 \le \lambda \le 1$. Higher settings lead to longer-lasting traces; that is, a larger proportion of the credit from a reward can be given to more distant states and actions when $\lambda$ is higher, with $\lambda = 1$ producing learning that parallels Monte Carlo RL algorithms.
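The following is a minimal sketch of tabular TD($\lambda$) with accumulating eligibility traces, applied to the same kind of made-up chain task as earlier (the states, rewards, and constants are illustrative assumptions, not taken from TD-Gammon or the references above).

    # Tabular TD(lambda) with accumulating eligibility traces on a small,
    # made-up chain task: states 0..3 step right, reward 1 on reaching state 4.
    N_STATES, TERMINAL = 5, 4
    gamma, alpha, lam = 0.9, 0.1, 0.8    # lam is the trace-decay parameter

    V = [0.0] * N_STATES

    for episode in range(500):
        e = [0.0] * N_STATES             # eligibility traces, reset each episode
        s = 0
        while s != TERMINAL:
            s_next = s + 1
            r = 1.0 if s_next == TERMINAL else 0.0
            delta = r + gamma * V[s_next] - V[s]   # one-step TD error
            e[s] += 1.0                            # accumulate trace for the current state
            for i in range(N_STATES):
                V[i] += alpha * delta * e[i]       # credit all recently visited states
                e[i] *= gamma * lam                # traces decay toward zero
            s = s_next

    print([round(v, 3) for v in V])
    # lam = 0 recovers TD(0); lam = 1 keeps traces decaying only by gamma, so the
    # end-of-episode updates closely match Monte Carlo returns.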

See also

* Reinforcement learning
* Q-learning
* SARSA
* Rescorla-Wagner model

External links

* [http://scholarpedia.org/article/Temporal_Difference_Learning Scholarpedia Temporal difference Learning]
* [http://www.research.ibm.com/massive/tdl.html#h3:stochastic_environment TD-Gammon]
* [http://rlai.cs.ualberta.ca/TDNets/index.html TD-Networks Research Group]

References

[0] Sutton, R. S. and Barto, A. G. (1990). "Time-Derivative Models of Pavlovian Reinforcement". In Learning and Computational Neuroscience: Foundations of Adaptive Networks. MIT Press. (Available [http://www.cs.ualberta.ca/~sutton/papers/sutton-barto-90.pdf here].)

[1] Richard Sutton. Learning to predict by the methods of temporal differences. "Machine Learning" 3:9-44. 1988. (A revised version is available on [http://www.cs.ualberta.ca/~sutton/publications.html Richard Sutton's publication page] )

[2] Richard Sutton and Andrew Barto. "Reinforcement Learning". MIT Press, 1998. (available [http://www-anw.cs.umass.edu/~rich/book/the-book.html online] )

[3] Schultz, W, Dayan, P & Montague, PR. 1997. A neural substrate of prediction and reward. Science 275:1593-1599.

[4] Schultz W. 1998. Predictive reward signal of dopamine neurons. J Neurophysiology 80:1-27.

[5] Dayan P. 2002. Motivated reinforcement learning. In: Ghahramani T, editor. Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press.

[6] Smith, A., Li, M., Becker, S. and Kapur, S. (2006), Dopamine, prediction error, and associative learning: a model-based account. Network: Computation in Neural Systems 17(1):61-84.

[7] Gerald Tesauro. Temporal Difference Learning and TD-Gammon. "Communications of the ACM", March 1995 / Vol. 38, No. 3. (available at [http://www.research.ibm.com/massive/tdl.html Temporal Difference Learning and TD-Gammon] )

[8] Imran Ghory. Reinforcement Learning in Board Games. http://www.cs.bris.ac.uk/Publications/Papers/2000100.pdf

[9] S. P. Meyn. [http://decision.csl.uiuc.edu/~meyn/pages/CTCN/CTCN.html Control Techniques for Complex Networks]. Cambridge University Press, 2007. See the final chapter and the appendix with abridged [http://decision.csl.uiuc.edu/~meyn/pages/book.html Meyn & Tweedie].

