Solving quadratic equations with continued fractions

In mathematics, a quadratic equation is a polynomial equation of the second degree. The general form is

:ax^2 + bx + c = 0,

where "a" ≠ 0.

Students and teachers all over the world are familiar with the quadratic formula that can be derived by completing the square. That formula always gives the roots of the quadratic equation, but the solutions are often expressed in a form that involves a quadratic irrational number, which can only be evaluated as a fraction or as a decimal fraction by applying an additional root extraction algorithm.

There is another way to solve the general quadratic equation. This old technique obtains an excellent rational approximation to one of the roots by manipulating the equation directly. The method works in many cases, and long ago it stimulated further development of the analytical theory of continued fractions.

A simple example

Here is a simple example to illustrate the solution of a quadratic equation using continued fractions. Let's begin with the equation

:x^2 = 2,

and manipulate it directly. Subtracting one from both sides we obtain

:x^2 - 1 = 1.

This is easily factored into

:(x+1)(x-1) = 1,

from which we obtain

:x - 1 = \frac{1}{1+x},

and finally

:x = 1 + \frac{1}{1+x}.

Now comes the crucial step. Let's substitute this expression for "x" back into itself, recursively, to obtain

:x = 1+\cfrac{1}{1+\left(1+\cfrac{1}{1+x}\right)} = 1+\cfrac{1}{2+\cfrac{1}{1+x}}.

But now we can make the same recursive substitution again, and again, and again, pushing the unknown quantity "x" as far down and to the right as we please, and obtaining in the limit the infinite continued fraction

:x = 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{\ddots}}}}} = \sqrt{2}.

By applying the fundamental recurrence formulas we may easily compute the successive convergents of this continued fraction to be 1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, ...: each new denominator is the sum of the preceding numerator and denominator, and each new numerator is that new denominator plus the preceding denominator. The sequence of denominators is a particular Lucas sequence known as the Pell numbers.
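
For readers who want to experiment, the recurrence is a few lines of Python. The sketch below (the function name and the use of the `fractions` module are my own illustration, not part of the original presentation) builds the convergents of [1; 2, 2, 2, ...] with the fundamental recurrence h_n = 2h_(n-1) + h_(n-2), k_n = 2k_(n-1) + k_(n-2):

```python
from fractions import Fraction

def sqrt2_convergents(n):
    """First n convergents of the continued fraction [1; 2, 2, 2, ...],
    via the fundamental recurrence formulas with every partial
    numerator equal to 1 and partial denominators 1, 2, 2, 2, ..."""
    h_prev, h = 1, 1   # numerators   h_{-1}, h_0
    k_prev, k = 0, 1   # denominators k_{-1}, k_0
    convergents = [Fraction(h, k)]
    for _ in range(n - 1):
        h_prev, h = h, 2 * h + h_prev
        k_prev, k = k, 2 * k + k_prev
        convergents.append(Fraction(h, k))
    return convergents

print(sqrt2_convergents(7))   # the convergents 1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169
```

The denominators 1, 2, 5, 12, 29, 70, 169, ... are exactly the Pell numbers mentioned above.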

An algebraic explanation

We can gain further insight into this simple example by considering the successive powers of

:\omega = \sqrt{2} - 1.

That sequence of successive powers is given by

:\begin{align}\omega^2 &= 3 - 2\sqrt{2}, & \omega^3 &= 5\sqrt{2} - 7, & \omega^4 &= 17 - 12\sqrt{2}, \\ \omega^5 &= 29\sqrt{2} - 41, & \omega^6 &= 99 - 70\sqrt{2}, & \omega^7 &= 169\sqrt{2} - 239,\end{align}

and so forth. Notice how the fractions derived as successive approximants to √2 also pop out of this geometric progression.
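This progression can be verified with exact integer arithmetic by representing a + b√2 as the pair (a, b); the Python sketch below (the representation and names are my own) multiplies out the successive powers of ω:

```python
def mul(u, v):
    """Product of two numbers a + b*sqrt(2), stored as integer pairs (a, b)."""
    a, b = u
    c, d = v
    return (a * c + 2 * b * d, a * d + b * c)

omega = (-1, 1)        # omega = sqrt(2) - 1, i.e. -1 + 1*sqrt(2)
powers, p = [], (1, 0) # omega^0 = 1
for _ in range(7):
    p = mul(p, omega)
    powers.append(p)

print(powers[1], powers[3], powers[5])   # (3, -2) (17, -12) (99, -70)
```

Up to sign, each pair reproduces the numerator and denominator of the corresponding convergent: ω⁴ = 17 − 12√2 matches 17/12, ω⁵ = 29√2 − 41 matches 41/29, and so on.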

Since 0 < ω < 1, the sequence {ω^n} clearly tends toward zero, by well-known properties of the positive real numbers. This fact can be used to prove, rigorously, that the convergents discussed in the simple example above do in fact converge to √2, in the limit.

We can also find these numerators and denominators popping out of the successive powers of

:\omega^{-1} = \sqrt{2} + 1.

Interestingly, the sequence of successive powers {ω^−n} does not approach zero; it grows without limit instead. But it can still be used to obtain the convergents in our simple example.

Notice also that the set obtained by forming all the combinations "a" + "b"√2, where "a" and "b" are integers, is an example of an object known in abstract algebra as a ring, and more specifically as an integral domain. The number ω is a unit in that integral domain. See also algebraic number field.

The general quadratic equation

Continued fractions are most conveniently applied to solve the general quadratic equation expressed in the form of a monic polynomial

:x^2 + bx + c = 0,

which can always be obtained by dividing the original equation by its leading coefficient. Starting from this monic equation we see that

:\begin{align}x^2 + bx &= -c\\x + b &= \frac{-c}{x}\\x &= -b - \frac{c}{x}.\end{align}

But now we can apply the last equation to itself recursively to obtain

:x = -b - \cfrac{c}{-b - \cfrac{c}{-b - \cfrac{c}{-b - \cfrac{c}{\ddots}}}}.

If this infinite continued fraction converges at all, it must converge to one of the roots of the monic polynomial "x"2 + "bx" + "c" = 0. Unfortunately, this particular continued fraction does not converge to a finite number in every case. We can easily see that this is so by considering the quadratic formula and a monic polynomial with real coefficients. If the discriminant of such a polynomial is negative, then both roots of the quadratic equation have imaginary parts. In particular, if "b" and "c" are real numbers and "b"2 - 4"c" < 0, all the convergents of this continued fraction "solution" will be real numbers, and they cannot possibly converge to a root of the form "u" + "iv" (where "v" ≠ 0), which does not lie on the real number line.
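Both behaviours are easy to observe numerically. In the Python sketch below (a deliberately naive illustration of my own, not a robust root-finder), truncating the fraction after n levels amounts to iterating x ← −b − c/x from the starting value −b:

```python
def cf_root(b, c, n):
    """n-th convergent of x = -b - c/(-b - c/(-b - ...)),
    obtained by iterating x <- -b - c/x from x = -b."""
    x = -b
    for _ in range(n):
        x = -b - c / x
    return x

# x^2 - 3x + 2 = 0 has real roots 1 and 2 (discriminant 1 > 0):
# the convergents approach the larger root.
print(cf_root(-3, 2, 40))                       # ~2.0

# x^2 + x + 2 = 0 has discriminant -7 < 0: the (real) convergents
# oscillate and never settle down.
print([cf_root(1, 2, n) for n in range(1, 6)])
```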

A general theorem

By applying a result obtained by Euler in 1748 it can be shown that the continued fraction solution to the general monic quadratic equation with real coefficients

:x^2 + bx + c = 0,

given by

:x = -b - \cfrac{c}{-b - \cfrac{c}{-b - \cfrac{c}{-b - \cfrac{c}{\ddots}}}}

converges or diverges depending on both the coefficient "b" and the value of the discriminant, "b"2 − 4"c".

If "b" = 0 the general continued fraction solution is totally divergent; the convergents alternate between 0 and ∞. If "b" ≠ 0 we distinguish three cases.

#If the discriminant is negative, the fraction diverges by oscillation, which means that its convergents wander around in a regular or even chaotic fashion, never approaching a finite limit.
#If the discriminant is zero the fraction converges to the single root of multiplicity two.
#If the discriminant is positive the equation has two real roots, and the continued fraction converges to the larger (in absolute value) of these. The rate of convergence depends on the absolute value of the ratio between the two roots: the farther that ratio is from unity, the more quickly the continued fraction converges.

When the monic quadratic equation with real coefficients is of the form "x"2 = "c", the general solution described above is useless because division by zero is not well defined. As long as "c" is positive, though, it is always possible to transform the equation by subtracting a perfect square from both sides and proceeding along the lines illustrated with √2 above. In symbols, if

:x^2 = c \qquad (c > 0),

just choose some positive real number "p" such that

:p^2 < c.

Then by direct manipulation we obtain

:\begin{align}x^2 - p^2 &= c - p^2\\(x+p)(x-p) &= c - p^2\\x - p &= \frac{c-p^2}{p+x}\\x &= p + \frac{c-p^2}{p+x}\\&= p + \cfrac{c-p^2}{p + \left(p + \cfrac{c-p^2}{p+x}\right)}\\&= p + \cfrac{c-p^2}{2p + \cfrac{c-p^2}{2p + \cfrac{c-p^2}{\ddots}}},\end{align}

and this transformed continued fraction must converge because all the partial numerators and partial denominators are positive real numbers.
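As a concrete check, the sketch below (Python, my own illustration) applies this transformation to c = 7 with p = 2, so that c − p² = 3 and every partial denominator after the first is 2p = 4:

```python
def sqrt_cf(c, p, n):
    """n-th convergent of sqrt(c) from
    x = p + (c - p^2)/(2p + (c - p^2)/(2p + ...)),
    assuming 0 < p and p*p < c so all partial numerators are positive."""
    x = p                       # approximate the innermost tail by p
    for _ in range(n):
        x = p + (c - p * p) / (p + x)
    return x

print(sqrt_cf(7, 2, 50))        # ~2.6457513, i.e. sqrt(7)
print(sqrt_cf(2, 1, 50))        # ~1.4142136, the sqrt(2) example above
```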

Complex coefficients

By the fundamental theorem of algebra, if the monic polynomial equation "x"2 + "bx" + "c" = 0 has complex coefficients, it must have two (not necessarily distinct) complex roots. Unfortunately, the discriminant "b"2 - 4"c" is not as useful in this situation, because it may be a complex number. Still, a modified version of the general theorem can be proved.

The continued fraction solution to the general monic quadratic equation with complex coefficients

:x^2 + bx + c = 0 \qquad (b \ne 0),

given by

:x = -b - \cfrac{c}{-b - \cfrac{c}{-b - \cfrac{c}{-b - \cfrac{c}{\ddots}}}}

converges or diverges depending on the value of the discriminant, "b"2 − 4"c", and on the relative magnitude of its two roots.

Denoting the two roots by "r"1 and "r"2 we distinguish three cases.
#If the discriminant is zero the fraction converges to the single root of multiplicity two.
#If the discriminant is not zero, and |"r"1| &ne; |"r"2|, the continued fraction converges to the "root of maximum modulus" (i.e., to the root with the greater absolute value).
#If the discriminant is not zero, and |"r"1| = |"r"2|, the continued fraction diverges by oscillation.

In case 2, the rate of convergence depends on the absolute value of the ratio between the two roots: the farther that ratio is from unity, the more quickly the continued fraction converges.
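Python's built-in complex arithmetic makes case 2 easy to observe. In this sketch (the particular roots are my own choice for illustration), the monic quadratic is built from roots 1 + i and 3, whose moduli differ:

```python
def cf_root_complex(b, c, n):
    """n-th convergent of x = -b - c/(-b - c/(-b - ...)) with complex b, c."""
    x = -b
    for _ in range(n):
        x = -b - c / x
    return x

# Roots r1 = 1 + 1j (modulus ~1.414) and r2 = 3 (modulus 3):
r1, r2 = 1 + 1j, 3
b, c = -(r1 + r2), r1 * r2        # x^2 + b*x + c = (x - r1)(x - r2)
print(cf_root_complex(b, c, 60))  # approaches r2, the root of maximum modulus
```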

This general solution of monic quadratic equations with complex coefficients is usually not very useful for obtaining rational approximations to the roots, because the criteria are circular (that is, the relative magnitudes of the two roots must be known before we can conclude that the fraction converges, in most cases). But this solution does find useful applications in the further analysis of the convergence problem for continued fractions with complex elements.

See also

* Continued fraction
* Generalized continued fraction
* Lucas sequence
* Pell's equation
* Quadratic equation

References

* H. S. Wall, "Analytic Theory of Continued Fractions", D. Van Nostrand Company, Inc., 1948. ISBN 0-8284-0207-8.


Wikimedia Foundation. 2010.
