Feynman diagram

The Wick expansion of the integrand gives (among others) the following term

:N\,\bar\psi(x)\,\gamma^\mu\,\psi(x)\,\bar\psi(x')\,\gamma^\nu\,\psi(x')\,\underline{A_\mu(x)A_\nu(x')}\;,

where

:\underline{A_\mu(x)A_\nu(x')}=\int{d^4k\over(2\pi)^4}\,{ig_{\mu\nu}\over k^2+i0}\,e^{-ik(x-x')}

is the electromagnetic contraction (propagator) in the Feynman gauge. This term is represented by the Feynman diagram at the right. This diagram gives contributions to the following processes:
# e^-e^- scattering (initial state at the right, final state at the left of the diagram);
# e^+e^+ scattering (initial state at the left, final state at the right of the diagram);
# e^-e^+ scattering (initial state at the bottom/top, final state at the top/bottom of the diagram).

Compton scattering and annihilation/generation of e^-e^+ pairs

Another interesting term in the expansion is

:N\,\bar\psi(x)\,\gamma^\mu\,\underline{\psi(x)\bar\psi(x')}\,\gamma^\nu\,\psi(x')\,A_\mu(x)A_\nu(x')\;,

where

:\underline{\psi(x)\bar\psi(x')}=\int{d^4p\over(2\pi)^4}\,{i\over \gamma p-m+i0}\,e^{-ip(x-x')}

is the fermionic contraction (propagator).

Path integral formulation

In the path-integral formulation, the exponential of the action, integrated over all possible field histories, defines the probability amplitude to go from one field configuration to another. In order to make sense, the field theory should have a good ground state, and the integral should be performed slightly rotated into imaginary time.

Scalar Field Lagrangian

A simple example is the free relativistic scalar field in d dimensions, whose action integral is:

:: S = \int {1\over 2}\, \partial_\mu \phi\, \partial^\mu \phi \, d^dx

The probability amplitude for a process is:

:: \int_A^B e^{iS}\, D\phi

where A and B are space-like hypersurfaces which define the boundary conditions. The collection of all the values phi(A) on the starting hypersurface gives the initial value of the field, analogous to the starting position for a point particle, and the field values phi(B) at each point of the final hypersurface define the final field value, which is allowed to vary, giving a different amplitude to end up at different values. This is the field-to-field transition amplitude.

The path integral gives the expectation value of operators between the initial and final state:

:: \int_A^B e^{iS}\, \phi(x_1) \cdots \phi(x_n)\, D\phi = \langle A| \phi(x_1) \cdots \phi(x_n) |B\rangle

and in the limit that A and B recede to the infinite past and the infinite future, the only contribution that matters is from the ground state (this is only rigorously true if the path-integral is defined slightly rotated into imaginary time). The path integral should be thought of as analogous to a probability distribution, and it is convenient to define it so that multiplying by a constant doesn't change anything:

:: {\int e^{iS}\, \phi(x_1) \cdots \phi(x_n)\, D\phi \over \int e^{iS}\, D\phi } = \langle 0 | \phi(x_1) \cdots \phi(x_n) |0\rangle

The normalization factor on the bottom is called the "partition function" for the field, and it coincides with the statistical mechanical partition function at zero temperature when rotated into imaginary time.

The initial-to-final amplitudes are ill-defined if you think of things in the continuum limit right from the beginning, because the fluctuations in the field can become unbounded. So the path-integral should be thought of as on a discrete square lattice with lattice spacing a, and the limit a → 0 should be taken carefully. If the final results do not depend on the shape of the lattice or the value of a, then the continuum limit exists.

On a lattice, the field can be expanded in Fourier modes:

:: \phi(x) = \int {d^dk\over (2\pi)^d}\, \phi(k)\, e^{ik\cdot x} = \int_k \phi(k)\, e^{ikx}

where the integration domain is over k restricted to a cube of side length 2π/a, so that large values of k are not allowed. It is important to note that the k measure contains the factors of 2π from Fourier transforms; this is the standard convention for k integrals in QFT. The lattice means that fluctuations at large k are not allowed to contribute right away; they only start to contribute in the limit a → 0. Sometimes, instead of a lattice, the field modes are simply cut off at high values of k.

It is also convenient from time to time to consider the space-time volume to be finite, so that the k modes also form a lattice. This is not strictly as necessary as the space lattice, because interactions in k are not localized, but it is convenient for keeping track of the factors in front of the k-integrals and the momentum-conserving delta functions which will arise.

On a lattice, the action needs to be discretized:

:: S = \sum_{\langle x,y\rangle} {1\over 2}\, \big(\phi(x) - \phi(y)\big)^2 ,

where ⟨x,y⟩ means that x and y are nearest lattice neighbors. The discretization should be thought of as defining what the derivative ∂_μ φ means.

In terms of the lattice Fourier modes, the action can be written:

:: S = \int_k \Big( (1-\cos(k_1)) + (1-\cos(k_2)) + \cdots + (1-\cos(k_d)) \Big)\, \phi^*_k \phi_k ,

which for k near zero is:

:: S = \int_k {1\over 2}\, k^2\, |\phi(k)|^2

This is the continuum Fourier transform of the original action. In finite volume, the quantity d^dk is not infinitesimal, but becomes the volume of the box formed by neighboring Fourier modes, which is (2π)^d/V for a box of space-time volume V.
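
As a quick numerical check of this small-k limit (an illustrative sketch, not part of the original derivation), the lattice kinetic term per direction, 1 − cos k, can be compared with the continuum value k^2/2:

 import numpy as np

 # Compare the lattice kinetic term (1 - cos k) with the continuum k^2/2
 # for one momentum component; they agree when k << 1 in lattice units.
 for k in [0.01, 0.1, 0.5, 1.0, 2.0]:
     lattice = 1.0 - np.cos(k)
     continuum = 0.5 * k ** 2
     print(f"k={k:4.2f}  lattice={lattice:.6f}  continuum={continuum:.6f}")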

The field phi is real valued, so the Fourier transform obeys:

:: \phi(k)^* = \phi(-k) ,

In terms of real and imaginary parts, the real part of phi(k) is an even function of k, while the imaginary part is odd. To avoid double-counting these modes, the action can be written:

:: S = \int_k {1\over 2}\, k^2\, \phi(k)\phi(-k)

over an integration domain which integrates over each pair (k,-k) exactly once.

For a complex scalar field with action:

:: S = \int {1\over 2}\, \partial_\mu \phi^*\, \partial^\mu \phi \, d^dx

The Fourier transform is unconstrained:

:: S = \int_k {1\over 2}\, k^2\, |\phi(k)|^2

and the integral is over all k.

Integrating over all different values of phi(x) is equivalent to integrating over all Fourier modes, because taking a Fourier transform is a unitary linear transformation of field coordinates. When you change coordinates in a multidimensional integral by a linear transformation, the new integral picks up a factor of the determinant of the transformation matrix. If

:: y_i = A_{ij} x_j ,

then

:: \det(A) \int dx_1\, dx_2 \cdots dx_n = \int dy_1\, dy_2 \cdots dy_n

If A is a rotation, then

:: A^T A = I ,

so that det A = ±1, and the sign depends on whether the rotation includes a reflection or not.

The matrix which changes coordinates from phi(x) to phi(k) can be read off from the definition of a Fourier transform.

:: A_{kx} = e^{ikx} ,

and the Fourier inversion theorem tells you the inverse:

:: A^{-1}_{kx} = e^{-ikx} ,

which is the complex conjugate-transpose, up to factors of 2π. On a finite volume lattice, the determinant is nonzero and independent of the field values:

:: \det A = 1 ,

and the path integral is a separate factor at each value of k.

:: \int \exp\left( {i\over 2} \int_k k^2\, \phi^*(k)\, \phi(k) \right) D\phi = \prod_k \int_{\phi_k} e^{{i\over 2}\, k^2\, |\phi_k|^2\, d^dk} ,

and each separate factor is an oscillatory Gaussian.
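
The claim that the Fourier-transform Jacobian is a constant, independent of the field, can be checked directly in finite volume. A minimal numerical sketch (assuming the unitary normalization of the discrete Fourier transform; all names are illustrative):

 import numpy as np

 # Build the unitary discrete Fourier transform matrix on N lattice sites:
 # A_{kx} = exp(i k x) / sqrt(N), with k = 2*pi*n/N.
 N = 8
 x = np.arange(N)
 k = 2 * np.pi * np.arange(N) / N
 A = np.exp(1j * np.outer(k, x)) / np.sqrt(N)

 # Unitarity: A^dagger A = identity, so |det A| = 1, independent of the field.
 print(np.allclose(A.conj().T @ A, np.eye(N)))   # True
 print(abs(np.linalg.det(A)))                    # 1.0 up to rounding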

In imaginary time, the "Euclidean action" becomes positive definite, and can be interpreted as a probability distribution. The probability of a field having values phi_k is:

:: e^{\int_k - {1\over 2}\, k^2\, \phi^*_k \phi_k} = \prod_k e^{- k^2\, |\phi_k|^2\, d^dk}

The expectation value of the field is the statistical expectation value of the field when chosen according to the probability distribution:

:: \langle \phi(x_1) \cdots \phi(x_n) \rangle = { \int e^{-S}\, \phi(x_1) \cdots \phi(x_n)\, D\phi \over \int e^{-S}\, D\phi}

Since the probability of phi_k is a product, the value of phi(k) at each separate value of k is independently Gaussian distributed. The variance of the Gaussian is 1/(k^2 d^dk), which is formally infinite, but that just means that the fluctuations are unbounded in infinite volume. In any finite volume, the integral in the exponent is replaced by a discrete sum, and the variance of each mode is V/k^2.

Monte-Carlo

The path integral defines a probabilistic algorithm to generate a Euclidean scalar field configuration. Randomly pick the real and imaginary parts of each Fourier mode at wavenumber k to be a Gaussian random variable with variance 1/k^2. This generates a configuration phi_C(k) at random, and the Fourier transform gives phi_C(x). For real scalar fields, the algorithm must generate only one of each pair phi(k), phi(-k), and make the second the complex conjugate of the first.

To find any correlation function, generate a field again and again by this procedure, and find the statistical average:

:: \langle \phi(x_1) \cdots \phi(x_n) \rangle = \lim_{|C| \rightarrow \infty} { \sum_C \phi_C(x_1) \cdots \phi_C(x_n) \over |C| }

where |C| is the number of configurations, and the sum is of the product of the field values on each configuration. The Euclidean correlation function is just the same as the correlation function in statistics or statistical mechanics. The quantum mechanical correlation functions are an analytic continuation of the Euclidean correlation functions.
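
A minimal sketch of this sampling procedure in one dimension (assuming d = 1, unit lattice spacing, the finite-volume variance V/k^2 quoted above, and dropping the zero mode to avoid the divergence at k = 0; the names and parameters are illustrative):

 import numpy as np

 rng = np.random.default_rng(0)
 N = 64            # lattice sites in one dimension (so V = N)
 n_config = 5000   # number of sampled configurations

 def sample_field():
     """Draw one real free-field configuration phi(x) on N sites.

     Each Fourier mode phi(k) is Gaussian with <|phi(k)|^2> = N / k^2
     (the finite-volume variance quoted in the text); the zero mode is
     dropped, and reality is enforced by building a Hermitian spectrum.
     """
     k = 2 * np.pi * np.arange(N // 2 + 1) / N
     spec = np.zeros(N // 2 + 1, dtype=complex)
     for n in range(1, N // 2 + 1):
         var = N / k[n] ** 2
         if n == N // 2:     # Nyquist mode must be real
             spec[n] = rng.normal(0.0, np.sqrt(var))
         else:               # independent real and imaginary parts
             spec[n] = (rng.normal(0.0, np.sqrt(var / 2))
                        + 1j * rng.normal(0.0, np.sqrt(var / 2)))
     return np.fft.irfft(spec, n=N)   # irfft supplies the conjugate modes

 # Estimate the two-point function G(x) = <phi(x) phi(0)> by averaging.
 G = np.zeros(N)
 for _ in range(n_config):
     phi = sample_field()
     G += phi * phi[0]
 G /= n_config
 print(G[:4])   # short-distance values of the Euclidean propagator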

For free fields with a quadratic action, the probability distribution is a high-dimensional Gaussian, and the statistical average is given by an explicit formula. But the Monte Carlo method also works well for bosonic interacting field theories where there is no closed form for the correlation functions.

Scalar Propagator

Each mode is independently Gaussian distributed. The expectation of field modes is easy to calculate:

:: \langle \phi(k)\, \phi(k') \rangle = 0 ,

for k ≠ k', since then the two Gaussian random variables are independent and both have zero mean.

:: \langle \phi(k)\, \phi(k') \rangle = {V \over k^2}

in finite volume V, when the two k-values coincide, since this is the variance of the Gaussian. In the infinite volume limit,

:: \langle \phi(k)\, \phi(k') \rangle = \delta(k-k')\, {1\over k^2}

Strictly speaking, this is an approximation: the lattice propagator is:

:: \langle \phi(k)\, \phi(k') \rangle = \delta(k-k')\, {1\over 2\big(d - \cos(k_1) - \cos(k_2) - \cdots - \cos(k_d)\big) }

But near k=0, for field fluctuations long compared to the lattice spacing, the two forms coincide.

It is important to emphasize that the delta functions contain factors of 2pi, so that they cancel out the 2pi factors in the measure for k integrals.

:: \delta(k) = (2\pi)^d\, \delta_D(k_1)\, \delta_D(k_2) \cdots \delta_D(k_d) ,

where delta_D(k) is the ordinary one-dimensional Dirac delta function. This convention for delta functions is not universal; some authors keep the factors of 2π in the delta functions (and in the k-integration) explicit.

Equation of Motion

The form of the propagator can be more easily found by using the equation of motion for the field. From the Lagrangian, the equation of motion is:

:: \partial_\mu \partial^\mu \phi = 0 ,

and in an expectation value, this says:

:: \partial_\mu \partial^\mu \langle \phi(x)\, \phi(y) \rangle = 0

where the derivatives act on x. The identity holds everywhere except where x and y coincide, where the operator order matters. The form of the singularity can be understood from the canonical commutation relations to be a delta function. Defining the (Euclidean) "Feynman propagator" Delta as the Fourier transform of the time-ordered two-point function (the one that comes from the path-integral):

:: \partial^2 \Delta(x) = i\delta(x) ,

So that:

:: \Delta(k) = {i\over k^2}

If the equations of motion are linear, the propagator will always be the reciprocal of the quadratic-form matrix which defines the free Lagrangian, since this gives the equations of motion. This is also easy to see directly from the path integral. The factor of i disappears in the Euclidean theory.

Wick Theorem

Because each field mode is an independent Gaussian, the expectation values for the product of many field modes obeys "Wick's theorem":

:: \langle \phi(k_1)\, \phi(k_2) \cdots \phi(k_n) \rangle

is zero unless the field modes coincide in pairs. This means that it is zero for an odd number of phi's, and for an even number of phi's, it is equal to a sum of contributions, one from each way of dividing the modes into pairs, with a delta function for each pair.

:: \langle \phi(k_1) \cdots \phi(k_{2n}) \rangle = \sum \prod_{i,j} {\delta(k_i - k_j) \over k_i^2 }

where the sum is over each partition of the field modes into pairs, and the product is over the pairs. For example,

:: \langle \phi(k_1)\, \phi(k_2)\, \phi(k_3)\, \phi(k_4) \rangle = {\delta(k_1 -k_2) \over k_1^2}{\delta(k_3-k_4)\over k_3^2} + {\delta(k_1-k_3) \over k_3^2}{\delta(k_2-k_4)\over k_2^2} + {\delta(k_1-k_4)\over k_1^2}{\delta(k_2 -k_3)\over k_2^2}

An interpretation of Wick's theorem is that each field insertion can be thought of as a dangling line, and the expectation value is calculated by linking up the lines in pairs, including a delta function that forces the momenta of the two partners in each pair to be equal, and a factor of the propagator 1/k^2 for each pair.
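
The combinatorial content of the theorem can be made concrete with a short sketch that enumerates all pairings of field labels (illustrative code, not from the original text):

 from itertools import count  # only the standard library is needed

 def pairings(labels):
     """Yield every way of grouping the labels into unordered pairs."""
     if not labels:
         yield []
         return
     first, rest = labels[0], labels[1:]
     for i, partner in enumerate(rest):
         remaining = rest[:i] + rest[i + 1:]
         for tail in pairings(remaining):
             yield [(first, partner)] + tail

 # The four-point function: three pairings, matching the three terms above.
 for p in pairings(["k1", "k2", "k3", "k4"]):
     print(p)

 # The number of pairings of 2n labels is (2n-1)!! = 1*3*5*...*(2n-1).
 for n in range(1, 6):
     total = sum(1 for _ in pairings(list(range(2 * n))))
     print(2 * n, total)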

Higher Gaussian moments--- completing Wick's theorem

There is a subtle point left before Wick's theorem is proved: what if more than two of the phi's have the same momentum? If it is an odd number, the integral is zero; negative values cancel with the positive values. But if the number is even, the integral is positive. The previous demonstration assumed that the phi's would only match up in pairs.

But the theorem is correct even when arbitrarily many of the phis are equal, and this is a notable property of Gaussian integration:

:: I = \int e^{-a x^2/2}\, dx = \sqrt{2\pi\over a}

:: \left(-{\partial \over \partial a}\right)^n I = \int {x^{2n} \over 2^n}\, e^{-a x^2/2}\, dx = {1\cdot 3 \cdot 5 \cdots (2n-1) \over 2 \cdot 2 \cdot 2 \cdots 2}\, \sqrt{2\pi}\, a^{-{2n+1\over 2}}

Dividing by I,

:: \langle x^{2n} \rangle = {\int x^{2n}\, e^{-a x^2/2}\, dx \over \int e^{-a x^2/2}\, dx } = 1 \cdot 3 \cdot 5 \cdots (2n-1)\, {1\over a^n}

:: \langle x^2 \rangle = {1\over a}

If Wick's theorem were correct, the higher moments would be given by all possible pairings of a list of 2n x's:

:: \langle x_1\, x_2\, x_3 \cdots x_{2n} \rangle

where the x's are all the same variable; the index is just to keep track of the number of ways to pair them. The first x can be paired with 2n-1 others, leaving 2n-2. The next unpaired x can be paired with 2n-3 different x's, leaving 2n-4, and so on. This means that Wick's theorem, uncorrected, says that the expectation value of x^{2n} should be:

:: \langle x^{2n} \rangle = (2n-1)\cdot(2n-3) \cdots 5 \cdot 3 \cdot 1 \; (\langle x^2 \rangle)^n

and this is in fact the correct answer. So Wick's theorem holds no matter how many of the momenta of the internal variables coincide.
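
A quick Monte Carlo spot-check of the moment formula (an illustrative sketch with an arbitrary value of a):

 import numpy as np

 rng = np.random.default_rng(1)
 a = 2.0                                     # Gaussian weight exp(-a x^2 / 2)
 x = rng.normal(0.0, 1.0 / np.sqrt(a), size=1_000_000)

 def double_factorial(m):
     return 1 if m <= 1 else m * double_factorial(m - 2)

 # Compare the sampled moments <x^(2n)> with (2n-1)!! / a^n.
 for n in range(1, 5):
     sampled = np.mean(x ** (2 * n))
     exact = double_factorial(2 * n - 1) / a ** n
     print(n, round(float(sampled), 4), round(exact, 4))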

Interaction

Interactions are represented by contributions of higher than quadratic order in the field, since quadratic contributions are always Gaussian. The simplest interaction is the quartic self-interaction, with an action:

:: S = \int \partial^\mu \phi\, \partial_\mu \phi + {\lambda \over 4!}\, \phi^4 .

The reason for the combinatorial factor 4! will be clear soon. Writing the action in terms of the lattice (or continuum) Fourier modes:

:: S = \int_k k^2\, |\phi(k)|^2 + {\lambda \over 4!} \int_{k_1 k_2 k_3 k_4} \phi(k_1)\, \phi(k_2)\, \phi(k_3)\, \phi(k_4)\, \delta(k_1+k_2+k_3+k_4) = S_F + X .

where S_F is the free action, whose correlation functions are given by Wick's theorem. The exponential of S in the path integral can be expanded in powers of lambda, giving a series of corrections to the free-field result.

:: e^{-S} = e^{-S_F} \left( 1 + X + {1\over 2!}\, X X + {1\over 3!}\, X X X + \cdots \right)

The path integral for the interacting action is then a power series of corrections to the free-field result. The term represented by X should be thought of as four half-lines, one for each factor of phi(k). The half-lines meet at a vertex, which contributes a delta function that ensures that the momenta sum to zero.

To compute a correlation function in the interacting theory, there is a contribution from the X terms now. For example, the path-integral for the four-field correlator:

:: \langle \phi(k_1)\, \phi(k_2)\, \phi(k_3)\, \phi(k_4) \rangle = {\int e^{-S}\, \phi(k_1)\phi(k_2)\phi(k_3)\phi(k_4)\, D\phi \over Z}

which in the free field was only nonzero when the momenta k were equal in pairs, is now nonzero for all values of the k. The momenta of the insertions phi(k_i) can now match up with the momenta of the X's in the expansion. The insertions should also be thought of as half-lines, four in this case, each carrying a momentum k which is not integrated over.

The lowest order contribution comes from the first nontrivial term e^{-S_F} X in the Taylor expansion of the exponential. Wick's theorem requires that the momenta in the X half-lines, the phi(k) factors in X, should match up with the momenta of the external half-lines in pairs. The new contribution is equal to:

:: \lambda\, {1\over k_1^2}\, {1\over k_2^2}\, {1\over k_3^2}\, {1\over k_4^2} .

The 4! inside X is canceled because there are exactly 4! ways to match the half-lines in X to the external half-lines. Each of these different ways of matching the half-lines together in pairs contributes exactly once, regardless of the values of the k's, by Wick's theorem.

Feynman Diagrams

The expansion of the exponential in powers of X gives a series of terms with progressively higher numbers of X's. The contribution from the term with exactly n X's is called n-th order.

The n-th order term has:
# 4n internal half-lines, which are the factors of phi(k) from the X's. These all end on a vertex, and are integrated over all possible k.
# external half-lines, which come from the phi(k) insertions in the integral.

By Wick's theorem, the half-lines must be joined together in pairs to make "lines", and each line gives a factor of

:: {\delta(k_1 + k_2) \over k_1^2}

which multiplies the contribution. This means that the two half-lines that make a line are forced to have equal and opposite momentum. The line itself should be labelled by an arrow drawn parallel to it, and by the momentum k it carries. The half-line at the tail end of the arrow carries momentum k, while the half-line at the head end carries momentum -k. If one of the two half-lines is external, this kills the integral over the internal k, since it forces the internal k to be equal to the external k. If both are internal, the integral over k remains.

The diagrams which are formed by linking the half-lines in the X's with the external half-lines, representing insertions, are the Feynman diagrams of this theory. Each line carries a factor of 1/k^2, the propagator, and either goes from vertex to vertex or ends at an insertion. If it is internal, its momentum is integrated over. At each vertex, the total incoming k is equal to the total outgoing k.

The number of ways of making a diagram by joining half-lines into lines almost completely cancels the factorial factors coming from the Taylor series of the exponential and the 4! at each vertex.

Loop Order

A tree diagram is one where all the internal lines have momentum which is completely determined by the external lines and the condition that the incoming and outgoing momentum are equal at each vertex. The contribution of these diagrams is a product of propagators, without any integration.

An example of a tree diagram is the one where each of four external lines ends on an X. Another is when eight external lines end on two X's. A third is when three external lines end on an X, the remaining half-line joins up with another X, and the remaining half-lines of this X run off to external lines.

It is easy to verify that in all these cases, the momenta on all the internal lines are determined by the external momenta and the condition of momentum conservation at each vertex.

A diagram which is not a tree diagram is called a "loop" diagram, and an example is one where two lines of an X are joined to external lines, while the remaining two lines are joined to each other. The two lines joined to each other can have any momentum at all, since they both enter and leave the same vertex. A more complicated example is one where two X's are joined to each other by matching the legs one to the other. This diagram has no external lines at all.

The reason loop diagrams are called loop diagrams is that the number of k-integrals which are left undetermined by momentum conservation is equal to the number of independent closed loops in the diagram, where independent loops are counted as in homology theory. The homology is real-valued (actually R^d-valued); the value associated with each line is the momentum. The boundary operator takes each line to the sum of the end-vertices with a positive sign at the head and a negative sign at the tail. The condition that the momentum is conserved is exactly the condition that the boundary of the k-valued weighted graph is zero.

A set of k-values can be relabeled whenever there is a closed loop going from vertex to vertex, never revisiting the same vertex. Such a cycle can be thought of as the boundary of a 2-cell. The k-labelings of a graph which conserve momentum (which have zero boundary) up to redefinitions of k (up to boundaries of 2-cells) define the first homology of a graph. The number of independent momenta which are not determined is then equal to the number of independent homology loops. For many graphs, this is equal to the number of loops as counted in the most intuitive way.
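
For a concrete check of this counting (an illustrative sketch; the two diagrams below are examples mentioned in the surrounding text), the number of independent loops equals I − V + C, where I is the number of internal lines, V the number of vertices, and C the number of connected components:

 def count_loops(num_vertices, edges):
     """Independent loops (first Betti number) = edges - vertices + components."""
     parent = list(range(num_vertices))

     def find(v):
         while parent[v] != v:
             parent[v] = parent[parent[v]]
             v = parent[v]
         return v

     for a, b in edges:                 # union-find over the internal lines
         parent[find(a)] = find(b)
     components = len({find(v) for v in range(num_vertices)})
     return len(edges) - num_vertices + components

 # One X with two legs joined to each other: 1 vertex, 1 internal line (a self-loop).
 print(count_loops(1, [(0, 0)]))        # 1 loop
 # Two X's joined to each other by all four legs: 2 vertices, 4 internal lines.
 print(count_loops(2, [(0, 1)] * 4))    # 3 loops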

Symmetry factors

The number of ways to form a given Feynman diagram by joining together half-lines is large, and by Wick's theorem, each way of pairing up the half-lines contributes equally. Often, this completely cancels the factorials in the denominator of each term, but the cancellation is sometimes incomplete.

The uncancelled denominator is called the "symmetry factor" of the diagram. The contribution of each diagram to the correlation function must be divided by its symmetry factor.

For example, consider the Feynman diagram formed from two external lines joined to one X, and the remaining two half-lines in the X joined to each other. There are 4*3 ways to join the external half-lines to the X, and then there is only one way to join the two remaining lines to each other. The X comes divided by 4!=4*3*2, but the number of ways to link up the X half lines to make the diagram is only 4*3, so the contribution of this diagram is divided by two.

For another example, consider the diagram formed by joining all the half-lines of one X to all the half-lines of another X. This diagram is called a "vacuum bubble", because it does not link up to any external lines. There are 4! ways to form this diagram, but the denominator includes a 2! (from the expansion of the exponential, there are two X's) and two factors of 4!. The contribution is multiplied by 4!/(2*4!*4!) = 1/48.

Another example is the Feynman diagram formed from two X's where each X links up to two external lines, and the remaining two half-lines of each X are joined to each other. The number of ways to link an X to two external lines is 4*3, and either X could link up to either pair, giving an additional factor of 2. The remaining two half-lines in the two X's can be linked to each other in two ways, so that the total number of ways to form the diagram is 4*3*4*3*2*2, while the denominator is 4!4!2!. The total symmetry factor is 2, and the contribution of this diagram is divided by two.
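
These counts can be brute-forced. A hedged sketch for the first example above (two external half-lines a and b, one X with half-lines 0 through 3; the target diagram attaches a and b to the X and closes the two leftover X half-lines on each other — the labels are illustrative):

 from math import factorial

 def pairings(labels):
     """All ways of grouping the labels into unordered pairs."""
     if not labels:
         return [[]]
     first, rest = labels[0], labels[1:]
     result = []
     for i, partner in enumerate(rest):
         remaining = rest[:i] + rest[i + 1:]
         result += [[(first, partner)] + tail for tail in pairings(remaining)]
     return result

 external = ["a", "b"]
 vertex = [0, 1, 2, 3]                       # the four half-lines of one X

 def is_target(matching):
     """True unless a and b are joined to each other (the wrong diagram)."""
     return all({p, q} != {"a", "b"} for p, q in matching)

 ways = sum(1 for m in pairings(external + vertex) if is_target(m))
 print(ways)                                 # 12 = 4*3 matchings give this diagram
 print(factorial(4) // ways)                 # symmetry factor 2, as in the text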

The symmetry factor theorem gives the symmetry factor for a general diagram: the contribution of each Feynman diagram must be divided by the order of its group of automorphisms, the number of symmetries that it has.

An automorphism of a Feynman graph is a permutation M of the lines and a permutation N of the vertices with the following properties:

# If a line l goes from vertex v to vertex v', then M(l) goes from N(v) to N(v'). If the line is undirected, as it is for a real scalar field, then M(l) can go from N(v') to N(v) too.
# If a line l ends on an external line, M(l) ends on the same external line.
# If there are different types of lines, M(l) should preserve the type.

This theorem has an interpretation in terms of particle-paths: when identical particles are present, the integral over all intermediate particles must not double-count states which only differ by interchanging identical particles.

Proof: To prove this theorem, label all the internal and external lines of a diagram with a unique name. Then form the diagram by linking a half-line to each name and then to the other half-line.

Now count the number of ways to form the named diagram. Each permutation of the X's gives a different pattern of linking names to half-lines, and this is a factor of n!. Each permutation of the half-lines in a single X gives a factor of 4!. So a named diagram can be formed in exactly as many ways as the denominator of the Feynman expansion.

But the number of unnamed diagrams is smaller than the number of named diagrams by the order of the automorphism group of the graph.

Connected diagrams: Linked cluster theorem

A diagram is "connected" when it is connected as a graph, meaning that there is a sequence of attached lines and vertices which link any line or vertex to any other. The connected diagrams suffice to reconstruct the full Feynman series, and this is the "linked cluster theorem".

The full series is the sum over all diagrams, including disconnected diagrams, and each connected component can occur multiple times. The automorphism group of the full graph consists of the automorphisms of the connected components, together with an extra factor of n! for permutations of n identical copies of one connected component. The sum over all diagrams is therefore

:: \sum_{\{n_i\}} \prod_i {C_{i}^{n_i} \over n_i!}

But this can be seen to be a product of separate factors for each connected graph:

:: \prod_i \sum_{n_i} {C_i^{n_i} \over n_i!} = \prod_i \exp(C_i) = \exp\left(\sum_i C_i\right) .

This is the linked cluster theorem: the sum of all diagrams is the exponential of the connected ones.
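
As a quick numeric illustration (with made-up values standing in for connected-diagram contributions, and the multiplicity sums truncated), the product of the truncated sums reproduces the exponential:

 import math

 C = [0.3, -0.7, 1.1]          # made-up connected-diagram values

 # Sum over multiplicities n_i of prod_i C_i^(n_i) / n_i!  (truncated at 30 terms).
 total = 1.0
 for c in C:
     total *= sum(c ** n / math.factorial(n) for n in range(30))

 print(total, math.exp(sum(C)))   # the two agree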

Vacuum Bubbles

An immediate consequence of the linked-cluster theorem is that all vacuum bubbles, diagrams without external lines, cancel when calculating correlation functions. A correlation function is given by a ratio of path-integrals:

:: \langle \phi_1(x_1) \cdots \phi_n(x_n) \rangle = {\int e^{-S}\, \phi_1(x_1) \cdots \phi_n(x_n)\, D\phi \over \int e^{-S}\, D\phi} .

The top is the sum over all Feynman diagrams, including disconnected diagrams which do not link up to external lines at all. In terms of the connected diagrams, the numerator includes the same contributions of vacuum bubbles as the denominator:

:: \int e^{-S}\, \phi_1(x_1) \cdots \phi_n(x_n)\, D\phi = \left(\sum_i E_i\right) \exp\left(\sum_i C_i\right) ,

where the sum over E diagrams includes only those diagrams each of whose connected components ends on at least one external line. The vacuum bubbles are the same whatever the external lines, and give an overall multiplicative factor. The denominator is the sum over all vacuum bubbles, and dividing gets rid of the second factor.

The vacuum bubbles then are only useful for determining Z itself, which from the definition of the path integral is equal to:

:: Z = \int e^{-S}\, D\phi = e^{-HT} = e^{-\rho V}

where ρ is the energy density in the vacuum. Each vacuum bubble contains a factor of delta(k) zeroing the total k at each vertex, and when there are no external lines, this contains a factor of delta(0), because the momentum conservation is over-enforced. In finite volume, this factor can be identified as the total volume of space-time. Dividing by the volume, the remaining integral for the vacuum bubble has an interpretation: it is a contribution to the energy density of the vacuum.

Sources

Correlation functions are the sum of the connected Feynman diagrams, but the formalism treats the connected and disconnected diagrams differently. Internal lines end on vertices, while external lines go off to insertions. Introducing "sources" unifies the formalism, by making new vertices where one line can end.

Sources are external fields, fields which contribute to the action, but are not dynamical variables. A scalar field source is another scalar field h which contributes a term to the (Lorentz) Lagrangian:

:: \int h(x)\, \phi(x)\, d^dx = \int h(k)\, \phi(k)\, d^dk ,

In the Feynman expansion, this contributes H terms with one half-line ending on a vertex. Lines in a Feynman diagram can now end either on an X vertex, or on an H-vertex, and only one line enters an H vertex. The Feynman rule for an H-vertex is that a line from an H with momentum k gets a factor of h(k).

The sum of the connected diagrams in the presence of sources includes a term for each connected diagram in the absence of sources, except now the diagrams can end on the source. Traditionally, a source is represented by a little "x" with one line extending out, exactly as an insertion.

:: \log\big(Z[h]\big) = \sum_{n,C} h(k_1)\, h(k_2) \cdots h(k_n)\, C(k_1,\ldots,k_n) ,

where C(k_1,...,k_n) is the connected diagram with n external lines carrying momenta as indicated. The sum is over all connected diagrams, as before.

The field h is not dynamical, which means that there is no path integral over h: h is just a parameter in the Lagrangian which varies from point to point. The path integral for the field is:

:: Z[h] = \int e^{iS + i\int h\phi}\, D\phi ,

and it is a function of the values of h at every point. One way to interpret this expression is that it is taking the Fourier transform in field space. If there is a probability density on R^n, the Fourier transform of the probability density is:

:: \int \rho(y)\, e^{i k y}\, d^n y = \langle e^{i k y} \rangle = \left\langle \prod_{i=1}^{n} e^{i k_i y_i} \right\rangle ,

The Fourier transform is the expectation of an oscillatory exponential. The path integral in the presence of a source h(x) is:

:: Z[h] = \int e^{iS}\, e^{i\int_x h(x)\phi(x)}\, D\phi = \langle e^{i h \phi} \rangle

which, on a lattice, is the product of an oscillatory exponential for each field value:

:: \left\langle \prod_x e^{i h_x \phi_x} \right\rangle

The Fourier transform of a delta function is a constant, which gives a formal expression for a delta function:

:: \delta(x-y) = \int e^{ik(x-y)}\, dk

This tells you what a field delta function looks like in a path-integral. For two scalar fields phi and eta,

:: \delta(\phi - \eta) = \int e^{ i \int h(x)\big(\phi(x) - \eta(x)\big)\, d^dx}\, Dh ,

This integrates over the Fourier transform coordinate h. The expression is useful for formally changing field coordinates in the path integral, much as a delta function is used to change coordinates in an ordinary multi-dimensional integral.

The partition function is now a function of the field h, and the physical partition function is the value when h is the zero function.

The correlation functions are derivatives of the path integral with respect to the source:

:: \langle \phi(x) \rangle = {1\over Z}\, {\partial \over \partial h(x)}\, Z[h] = {\partial \over \partial h(x)}\, \log\big(Z[h]\big)
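
Taking one more derivative gives the connected two-point function; this standard identity (stated here with the factors of i suppressed, following the convention of the formula above) is the natural next step:

:: {\partial^2 \over \partial h(x)\, \partial h(y)}\, \log\big(Z[h]\big) \Big|_{h=0} = \langle \phi(x)\phi(y) \rangle - \langle \phi(x)\rangle\, \langle \phi(y)\rangle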

Spin 1/2: Grassmann integrals

The preceding discussion can be extended to the Fermi case, but only if the notion of integration is expanded.

Particle-Path Interpretation

A Feynman diagram is a representation of quantum field theory processes in terms of particle paths.

In a Feynman diagram, particles are represented by lines, which can be squiggly or straight, with an arrow or without, depending on the type of particle. A point where lines connect to other lines is referred to as an interaction vertex, or vertex for short. There are three different types of lines: internal lines connect two vertices, incoming lines extend from "the past" to a vertex and represent an initial state, and outgoing lines extend from a vertex to "the future" and represent the final state.

There are several conventions for where to represent the past and the future. Sometimes, the bottom of the diagram represents the past and the top of the diagram represents the future. Other times, the past is to the left and the future to the right. When calculating correlation functions instead of scattering amplitudes, there is no past and future and all the lines are internal. Then the particle lines begin and end on small x's, which represent the positions of the operators whose correlation is being calculated. The LSZ reduction formula is the standardized argument that relates the correlation functions to the scattering amplitudes.

Feynman diagrams are a pictorial representation of a contribution to the total amplitude for a process which can happen in several different ways. When a group of incoming particles are to scatter off each other, the process can be thought of as one where the particles travel over all possible paths, including paths that go backward in time. Each diagram is a term in the perturbative expansion of the scattering amplitude for the experiment defined by the incoming and outgoing lines. In some quantum field theories (notably quantum electrodynamics), one can obtain an excellent approximation of the scattering amplitude from a few terms of the perturbative expansion, corresponding to a few simple Feynman diagrams with the same incoming and outgoing lines connected by different vertices and internal lines.

The method, although originally invented for particle physics, is useful in any part of physics where there are statistical or quantum fields. In condensed matter physics, there are many-body Feynman diagrams with dashed lines which represent an instantaneous potential interaction, while "phonons" take the place of "photons". In statistical physics, there are statistical Feynman diagrams which represent the way in which correlations travel along paths.

Feynman diagrams are often confused with spacetime diagrams and bubble chamber images because they all seek to represent particle scattering. Feynman diagrams are graphs that represent the trajectories of particles in intermediate stages of a scattering process. Unlike a bubble chamber picture, only the sum of all the Feynman diagrams represents any given particle interaction; particles do not choose a particular diagram each time they interact. The law of summation is in accord with the principle of superposition: every diagram contributes to the total amplitude for the process.

Scattering

The correlation functions of a quantum field theory describe the scattering of particles. The definition of "particle" in relativistic field theory is not self-evident, because if you try to determine the position of a particle to better than its Compton wavelength, the uncertainty in energy is large enough to produce more particles and antiparticles of the same type from the vacuum. This means that the notion of a single-particle state is to some extent incompatible with the notion of an object localized in space.

In the 1930s, Wigner gave a mathematical definition for single-particle states: they are a collection of states which form an irreducible representation of the Poincaré group. Single-particle states describe an object with a finite mass, a well-defined momentum, and a spin. This definition is fine for protons and neutrons, electrons and photons, but it excludes quarks, which are permanently confined, so the modern point of view is more accommodating: a particle is anything whose interaction can be described in terms of Feynman diagrams, which have an interpretation as a sum over particle trajectories.

A field operator can act to produce a one-particle state from the vacuum, which means that the field operator phi(x) produces a superposition of Wigner particle states. In the free field theory, the field produces one-particle states only. But when there are interactions, the field operator can also produce 3-particle and 5-particle states (and, if there is no +/- symmetry, 2-, 4-, and 6-particle states) too. Computing the scattering amplitude for single-particle states only requires a careful limit, sending the fields to infinity and integrating over space to get rid of the higher-order corrections.

The relation between scattering and correlation functions is the LSZ theorem: the scattering amplitude for n particles to go to m particles in a scattering event is given by the sum of the Feynman diagrams that go into the correlation function for n+m field insertions, leaving out the propagators for the external legs.

For example, for the lambda phi^4 interaction of the previous section, the order lambda contribution to the (Lorentz) correlation function is:

:: \langle \phi(k_1)\, \phi(k_2)\, \phi(k_3)\, \phi(k_4) \rangle = {i\over k_1^2}\, {i\over k_2^2}\, {i\over k_3^2}\, {i\over k_4^2}\; i\lambda ,

Stripping off the external propagators, that is, removing the factors of i/k^2, gives the invariant scattering amplitude M:

:: M = i\lambda ,

which is a constant, independent of the incoming and outgoing momentum. The interpretation of the scattering amplitude is that the sum of |M|^2 over all possible final states is the probability for the scattering event. The normalization of the single-particle states must be chosen carefully, however, to ensure that M is a relativistic invariant.

Non-relativistic single particle states are labeled by the momentum k, and they are chosen to have the same norm at every value of k. This is because the nonrelativistic unit operator on single particle states is:

:: \int dk\, |k\rangle\langle k| ,

In relativity, the integral over k states for a particle of mass m integrates over a hyperbola in E,k space defined by the energy-momentum relation:

:: E^2 - k^2 = m^2 ,

If the integral weighs each k point equally, the measure is not Lorentz invariant. The invariant measure integrates over all values of k and E, restricting to the hyperbola with a Lorentz invariant delta function:

:: \int \delta(E^2 - k^2 - m^2)\, |E,k\rangle\langle E,k|\, dE\, dk = \int {dk \over 2E}\, |k\rangle\langle k|
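
The dE integral can be made explicit with the standard identity for a delta function of a function (a reasoning step not spelled out above):

:: \delta(E^2 - k^2 - m^2) = {1\over 2\sqrt{k^2+m^2}}\Big(\delta\big(E - \sqrt{k^2+m^2}\big) + \delta\big(E + \sqrt{k^2+m^2}\big)\Big) ,

so keeping only the positive-energy branch of the hyperbola and doing the dE integral produces the factor 1/2E with E = \sqrt{k^2+m^2}.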

So the nonrelativistically normalized k-states differ from the relativistically normalized k-states by a factor of sqrt(E) = (k^2+m^2)^{1/4}.

The invariant amplitude M is then the probability amplitude for relativistically normalized incoming states to become relativistically normalized outgoing states.

For nonrelativistic values of k, the relativistic normalization is the same as the nonrelativistic normalization (up to a constant factor sqrt{m} ). In this limit, the phi^4 invariant scattering amplitude is still constant. The particles created by the field phi scatter in all directions with equal amplitude.

The nonrelativistic potential which scatters in all directions with an equal amplitude (in the Born approximation) is one whose Fourier transform is constant: a delta-function potential. The lowest order scattering of the theory reveals the non-relativistic interpretation of this theory: it describes a collection of particles with a delta-function repulsion. Two such particles have an aversion to occupying the same point at the same time.

LSZ theorem

Nonperturbative effects

Thinking of Feynman diagrams as a perturbation series, nonperturbative effects like tunneling do not show up, because any effect which goes to zero faster than any polynomial does not affect the Taylor series. Even bound states are absent, since at any finite order particles are only exchanged a finite number of times, and to make a bound state, the binding force must last forever.

But this point of view is misleading, because the diagrams not only describe scattering, but are also a representation of the short-distance field theory correlations. They encode not only asymptotic processes like particle scattering, but also the multiplication rules for fields, the operator product expansion. Nonperturbative tunneling processes involve field configurations which on average get big when the coupling constant gets small, but each configuration is a coherent superposition of particles whose local interactions are described by Feynman diagrams. When the coupling is small, these become collective processes which involve large numbers of particles, but where the interactions between each of the particles are simple.

This means that nonperturbative effects show up asymptotically in resummations of infinite classes of diagrams, and these diagrams can be locally simple. The graphs determine the local equations of motion, while the allowed large-scale configurations describe non-perturbative physics. But because Feynman propagators are nonlocal in time, translating a field process to a coherent particle language is not completely intuitive, and has only been explicitly worked out in certain special cases. In the case of nonrelativistic bound states, the Bethe-Salpeter equation describes the class of diagrams to include to describe a relativistic atom. For quantum chromodynamics, the Shifman Vainshtein Zakharov sum rules describe non-perturbatively excited long-wavelength field modes in particle language, but only in a phenomenological way.

The number of Feynman diagrams at high orders of perturbation theory is very large, because there are as many diagrams as there are graphs with a given number of nodes. Nonperturbative effects leave a signature on the way in which the number of diagrams and resummations diverge at high order. It is only because non-perturbative effects appear in hidden form in diagrams that it was possible to analyze nonperturbative effects in string theory, where in many cases a Feynman description is the only one available.

Mathematical details

A Feynman diagram can be considered a graph. When considering a field composed of particles, the edges will represent (sections of) particle world lines; the vertices represent virtual interactions. Since only certain interactions are permitted, the graph is constrained to have only certain types of vertices. The type of field of an edge is its field label; the permitted types of interaction are interaction labels.

The value of a given diagram can be derived from the graph; the value of the interaction as a whole is obtained by summing over all diagrams.

Mathematical interpretation

Feynman diagrams are really a graphical way of keeping track of deWitt indices, much like Penrose's graphical notation for indices in multilinear algebra. There are several different types for the indices, one for each field; which types occur depends on how the fields are grouped. For instance, if the up quark field and the down quark field are treated as different fields, each gets its own index type, but if they are treated as a single multicomponent field with "flavors", there is only one type. The edges (i.e., propagators) are tensors of rank (2,0) in deWitt's notation (i.e., with two contravariant indices and no covariant indices), while the vertices of degree n are rank-n covariant tensors which are totally symmetric among all bosonic indices of the same type and totally antisymmetric among all fermionic indices of the same type. The contraction of a propagator with a rank-n covariant tensor is indicated by an edge incident to a vertex; there is no ambiguity about which "slot" to contract with, because the vertices correspond to totally symmetric tensors. The external vertices correspond to the uncontracted contravariant indices.

A derivation of the Feynman rules using Gaussian functional integrals is given in the functional integral article.

Each Feynman diagram on its own does not have a physical significance. It is only the infinite sum over all possible (bubble-free) Feynman diagrams which gives physical results, and this infinite sum is usually only an asymptotic series.

See also

*Stückelberg-Feynman interpretation
*Invariance mechanics
*Penguin diagram

References

* Gerardus 't Hooft, Martinus Veltman, "Diagrammar", CERN Yellow Report 1973, [http://preprints.cern.ch/cgi-bin/setlink?base=cernrep&categ=Yellow_Report&id=1973-009 online]
* David Kaiser, "Drawing Theories Apart: The Dispersion of Feynman Diagrams in Postwar Physics", Chicago: University of Chicago Press, 2005. ISBN 0-226-42266-6
* Martinus Veltman, "Diagrammatica: The Path to Feynman Diagrams", Cambridge Lecture Notes in Physics, ISBN 0-521-45692-4 (expanded, updated version of above)

External links

* [http://www2.slac.stanford.edu/vvc/theory/feynman.html Feynman diagram page] at SLAC
* [http://www.ams.org/featurecolumn/archive/feynman1.html AMS article: "What's New in Mathematics: Finite-dimensional Feynman Diagrams"]
* [http://wikisophia.org/wiki/Wikitex_Feyn WikiTeX] supports editing Feynman diagrams directly in Wiki articles.
* [http://feyndiagram.com/ Drawing Feynman diagrams with FeynDiagram] C++ library that produces PostScript output.
* [http://cnlart.web.cern.ch/cnlart/220/node60.html#SECTION00713000000000000000000 Feynman Diagram Examples] using Thorsten Ohl's Feynmf LaTeX package.
* [http://jaxodraw.sourceforge.net/ JaxoDraw] A Java program for drawing Feynman diagrams.

