Large numbers

This article is about large numbers in the sense of numbers that are significantly larger than those ordinarily used in everyday life, for instance in simple counting or in monetary transactions. The term typically refers to large positive integers, or more generally, large positive real numbers, but it may also be used in other contexts.

Very large numbers often occur in fields such as mathematics, cosmology, cryptography and statistical mechanics. Sometimes people refer to numbers as being "astronomically large". However, it is easy to mathematically define numbers that are much larger even than those used in astronomy.


Using scientific notation to handle large and small numbers

Scientific notation was created to handle the wide range of values which occur in scientific study. 1.0 × 10^9, for example, means one billion, a 1 followed by nine zeros: 1 000 000 000, and 1.0 × 10^{-9} means one billionth, or 0.000 000 001. Writing 10^9 instead of nine zeros saves readers the effort and hazard of counting a long series of zeros to see how large the number is.
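
For illustration, the following Python sketch shows the same idea in code: floating-point literals and the "e" format already use this mantissa-and-exponent form, so the size of a number can be read off from its exponent instead of counting zeros.

    import math

    billion = 1.0e9        # 1.0 x 10^9
    billionth = 1.0e-9     # 1.0 x 10^-9

    print(f"{billion:.1e}")    # 1.0e+09
    print(f"{billionth:.1e}")  # 1.0e-09

    # Reading off the exponent replaces counting a long run of zeros by hand.
    print(math.floor(math.log10(billion)))  # 9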

Large numbers in the everyday world

Examples of large numbers describing everyday real-world objects are:

  • the number of bits on a computer hard disk (as of 2010, typically about 10^13; 500-1000 GB)
  • the number of cells in the human body (more than 10^14)
  • the number of neuronal connections in the human brain (estimated at 10^14)
  • the Avogadro constant, the number of "elementary entities" (usually atoms or molecules) in one mole; the number of atoms in 12 grams of carbon-12 (approximately 6.022 × 10^23 per mole)

Astronomically large numbers

Other large numbers, as regards length and time, are found in astronomy and cosmology. For example, the current Big Bang model of the Universe suggests that it is 13.7 billion years (4.3 × 10^17 seconds) old, that the observable universe is 93 billion light years across (8.8 × 10^26 metres), and that it contains about 5 × 10^22 stars, organized into around 125 billion (1.25 × 10^11) galaxies, according to Hubble Space Telescope observations. There are about 10^80 fundamental particles in the observable universe, by rough estimation.[citation needed]

According to Don Page, physicist at the University of Alberta, Canada, the longest finite time that has so far been explicitly calculated by any physicist is

10^{10^{10^{10^{10^{1.1}}}}} \mbox{ years}

which corresponds to the scale of an estimated Poincaré recurrence time for the quantum state of a hypothetical box containing a black hole with the estimated mass of the entire universe, observable or not, assuming a certain inflationary model with an inflaton whose mass is 10^{-6} Planck masses.[1][2] This time assumes a statistical model subject to Poincaré recurrence. A much simplified way of thinking about this time is in a model where our universe's history repeats itself arbitrarily many times due to properties of statistical mechanics; this is the time scale at which it will first be somewhat similar (for a reasonable choice of "similar") to its current state again.

Combinatorial processes rapidly generate even larger numbers. The factorial function, which counts the number of permutations of a fixed set of objects, grows very rapidly with the number of objects. Stirling's formula gives a precise asymptotic expression for this rate of growth.
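
As a small illustration (a sketch, not drawn from the article's sources), the following Python snippet compares exact factorials with Stirling's approximation n! ≈ sqrt(2πn)·(n/e)^n and shows how quickly both grow:

    import math

    def stirling(n: int) -> float:
        """Stirling's approximation to n!."""
        return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

    for n in (10, 20, 50):
        exact = math.factorial(n)
        print(n, f"exact={exact:.4e}", f"stirling={stirling(n):.4e}",
              f"ratio={exact / stirling(n):.5f}")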

Combinatorial processes generate very large numbers in statistical mechanics. These numbers are so large that they are typically only referred to using their logarithms.

Gödel numbers, and similar numbers used to represent bit-strings in algorithmic information theory, are very large, even for mathematical statements of reasonable length. However, some pathological numbers are even larger than the Gödel numbers of typical mathematical propositions.
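
As a toy illustration of the general idea (not Gödel's actual numbering scheme), the following Python sketch encodes a short statement as a single integer by reading its bytes as the digits of a base-256 number; even a one-line sentence already yields a number with dozens of digits:

    statement = "There are infinitely many primes."
    code = int.from_bytes(statement.encode("utf-8"), "big")

    print(code)            # one large integer encoding the whole sentence
    print(len(str(code)))  # 79 decimal digits for this 33-byte sentence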

Computers and computational complexity

Moore's Law, generally speaking, estimates that the number of transistors per square inch of an integrated circuit doubles about every 18 months. This sometimes leads people to believe that eventually computers will be able to solve any mathematical problem, no matter how complicated (see Turing test). This is not the case: computers are fundamentally limited by the constraints of physics, and reasonable upper bounds can be formulated on what they can be expected to achieve. Also, there are certain theoretical results which show that some problems are inherently beyond the reach of complete computational solution, no matter how powerful or fast the computation; see n-body problem.

Between 1980 and 2000, hard disk sizes increased from about 10 megabytes (10^7 bytes) to over 100 gigabytes (10^11 bytes). A 100 gigabyte disk could store the given names of all of Earth's six billion inhabitants without using data compression. But what about a dictionary-on-disk storing all possible passwords containing up to 40 characters? Assuming each character equals one byte, there are about 2^320 such passwords, which is about 2 × 10^96. In his paper Computational capacity of the universe,[3] Seth Lloyd points out that if every particle in the universe could be used as part of a huge computer, it could store only about 10^90 bits, less than one millionth of the size such a dictionary would require. However, storing information and computing with it are very different functions: the storage limits just stated say nothing directly about limits on computational speed, especially if the current research into quantum computers results in a "quantum leap".
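
A rough Python sketch of the arithmetic above (counts only, ignoring any per-entry overhead an actual dictionary would add):

    passwords = 2 ** 320       # all strings of 40 eight-bit characters
    universe_bits = 10 ** 90   # Lloyd's estimated storage capacity of the universe

    print(f"{passwords:.2e}")         # about 2.14e+96
    print(passwords / universe_bits)  # about 2.1e+06: the count alone exceeds 10^90 by a factor of millions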

Still, computers can easily be programmed to start creating and displaying all possible 40-character passwords one at a time. Such a program could be left to run indefinitely. Assuming a modern PC could output 1 billion strings per second, it would take one billionth of 2 × 10^96 seconds, or about 2 × 10^87 seconds, to complete its task, which is about 6 × 10^79 years. By contrast, the universe is estimated to be 13.7 billion (1.37 × 10^10) years old. Computers will presumably continue to get faster, but the same paper mentioned before estimates that the entire universe functioning as a giant computer could have performed no more than 10^120 operations since the Big Bang. This is roughly 10^24 times the computation required merely to display all 40-character passwords, yet computing all 50-character passwords (there are about 2^400 ≈ 2.6 × 10^120 of them) would outstrip the estimated computational potential of the entire universe.
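
The timing estimate can be reproduced with a few lines of Python; the 10^9 strings per second is the assumption made above, not a measured figure:

    strings = 2 ** 320                 # all 40-character (8-bit) passwords
    rate = 10 ** 9                     # assumed strings displayed per second
    seconds = strings / rate
    years = seconds / (3600 * 24 * 365.25)

    print(f"{seconds:.1e} s is about {years:.1e} years")  # ~2.1e+87 s, ~6.8e+79 years
    print("universe age: about 1.37e+10 years")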

The number of computations that problems like this require grows exponentially with the size of the input (here, the length of the password), which is one reason why exponentially difficult problems are called "intractable" in computer science: for even small sizes like the 40 or 50 characters described earlier, the number of computations required exceeds even theoretical limits on mankind's computing power. The traditional division between "easy" and "hard" problems is thus drawn between problems that do and do not require exponentially increasing resources to solve.

Such limits are an advantage in cryptography, since any cipher-breaking technique that requires more than, say, the 10^120 operations mentioned before will never be feasible. Such ciphers must be broken by finding efficient techniques unknown to the cipher's designer. Likewise, much of the research throughout all branches of computer science focuses on finding efficient solutions to problems that work with far fewer resources than are required by a naïve solution. For example, one way of finding the greatest common divisor between two 1000-digit numbers is to compute all their factors by trial division. This will take up to about 2 × 10^500 division operations, far too many to contemplate. But the Euclidean algorithm, using a much more efficient technique, takes only a fraction of a second to compute the GCD for even huge numbers such as these.
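
The contrast can be seen directly in a short Python sketch (math.gcd implements essentially the Euclidean algorithm; the explicit loop below is included to show how simple the method is):

    import math
    import random

    def euclid_gcd(a: int, b: int) -> int:
        # Repeatedly replace the pair (a, b) by (b, a mod b) until b is 0.
        while b:
            a, b = b, a % b
        return a

    a = random.randrange(10 ** 999, 10 ** 1000)  # a random 1000-digit number
    b = random.randrange(10 ** 999, 10 ** 1000)
    print(euclid_gcd(a, b) == math.gcd(a, b))    # True, computed in well under a second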

As a general rule, then, PCs in 2005 could perform 2^40 calculations in a few minutes. A few thousand PCs working for a few years could solve a problem requiring 2^64 calculations, but no amount of traditional computing power will solve a problem requiring 2^128 operations (which is about what would be required to brute-force the encryption keys in 128-bit SSL commonly used in web browsers, assuming the underlying ciphers remain secure). Limits on computer storage are comparable. Quantum computers may allow certain problems to become feasible, but they have practical and theoretical challenges which may never be overcome.
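
The scale of these figures can be checked with a small sketch, assuming (purely for illustration) a rate of 10^9 operations per second per machine:

    rate = 10 ** 9  # assumed operations per second for a single machine
    for bits in (40, 64, 128):
        seconds = 2 ** bits / rate
        print(f"2^{bits}: about {seconds:.1e} s, or {seconds / 3.156e7:.1e} years")
    # 2^40:  about 1.1e+03 s (minutes)
    # 2^64:  about 1.8e+10 s (centuries for one machine, a few years for thousands of machines)
    # 2^128: about 3.4e+29 s (around 1e+22 years)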

Examples

  • 10^10 (10,000,000,000), called "10 billion" (or sometimes 10,000 million in the long scale)
  • googol = 10^100, a 1 followed by one hundred zeros
  • centillion = 10^303 or 10^600, depending on the number-naming system
  • googolplex = 10^{\mbox{googol}}=10^{10^{100}}
  • Skewes' numbers: the first is approximately 10^{10^{10^{34}}}, the second 10^{10^{10^{1000}}}

The total amount of printed material in the world is roughly 1.6 × 10^18 bits[citation needed]; therefore the contents can be represented by a number somewhere in the range 0 to roughly 2^{1.6 \times 10^{18}}\approx 10^{4.8 \times 10^{17}}

Compare:

  • 1.1^{1.1^{1.1^{1000}}} \approx 10^{10^{1.02\times10^{40}}}
  • 1000^{1000^{1000}}\approx 10^{10^{3000.48}}

The first number is much larger than the second because of the greater height of its power tower, despite its entries being only 1.1: when both are rewritten with base 10, the top exponent of the first (about 1.02 × 10^40) dwarfs that of the second (about 3000.48).

Systematically creating ever faster increasing sequences

Given a strictly increasing integer sequence/function f0(n) (n≥1) we can produce a faster growing sequence f_1(n) = f_0^n(n) (where the superscript n denotes the nth functional power). This can be repeated any number of times by letting f_k(n) = f_{k-1}^n(n), each sequence growing much faster than the one before it. Then we could define fω(n) = fn(n), which grows much faster than any fk for finite k (here ω is the first infinite ordinal number, representing the limit of all finite numbers k). This is the basis for the fast-growing hierarchy of functions, in which the indexing subscript is extended to ever-larger ordinals.

For example, starting with f0(n) = n + 1:

  • f1(n) = f0^n(n) = n + n = 2n
  • f2(n) = f1^n(n) = 2^n·n > 2 ↑ n for n ≥ 2 (using Knuth up-arrow notation)
  • f3(n) = f2^n(n) > (2 ↑)^n n ≥ 2 ↑^2 n for n ≥ 2
  • fk+1(n) > 2 ↑^k n for n ≥ 2, k < ω
  • fω(n) = fn(n) > 2 ↑^{n−1} n > 2 ↑^{n−2} (n + 3) − 3 = A(n, n) for n ≥ 2, where A is the Ackermann function (of which fω is a unary version)
  • fω+1(64) > fω^64(6) > Graham's number (= g64 in the sequence defined by g0 = 4, gk+1 = 3 ↑^{gk} 3)
    • This follows by noting fω(n) > 2 ↑^{n−1} n > 3 ↑^{n−2} 3 + 2, and hence fω(gk + 2) > gk+1 + 2
  • fω(n) > 2 ↑^{n−1} n = (2 → n → n−1) = (2 → n → n−1 → 1) (using Conway chained arrow notation)
  • fω+1(n) = fω^n(n) > (2 → n → n−1 → 2) (because if gk(n) = X → n → k then X → n → k+1 = gk^n(1))
  • fω+k(n) > (2 → n → n−1 → k+1) > (n → n → k)
  • fω·2(n) = fω+n(n) > (n → n → n) = (n → n → n → 1)
  • fω·2+k(n) > (n → n → n → k)
  • fω·3(n) > (n → n → n → n)
  • fω·k(n) > (n → n → ... → n → n) (chain of k+1 n's)
  • fω²(n) = fω·n(n) > (n → n → ... → n → n) (chain of n+1 n's)
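
The construction described above can be written out directly; the following Python sketch implements f_k(n) = f_{k-1}^n(n) starting from f_0(n) = n + 1, and is usable only for the very smallest arguments, which is itself a demonstration of how fast these functions grow:

    def f(k: int, n: int) -> int:
        """f_k(n) in the hierarchy built from f_0(n) = n + 1."""
        if k == 0:
            return n + 1
        x = n
        for _ in range(n):      # apply f_{k-1} to the running value, n times
            x = f(k - 1, x)
        return x

    print(f(1, 5))   # 10   (f_1(n) = 2n)
    print(f(2, 5))   # 160  (f_2(n) = 2^n * n)
    print(f(3, 2))   # 2048 (f_3(3) already has over a hundred million digits)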

Standardized system of writing very large numbers

A standardized way of writing very large numbers allows them to be easily sorted in increasing order, and one can get a good idea of how much larger a number is than another one.

To compare numbers in scientific notation, say 5 × 10^4 and 2 × 10^5, compare the exponents first, in this case 5 > 4, so 2 × 10^5 > 5 × 10^4. If the exponents are equal, the mantissa (or coefficient) should be compared, thus 5 × 10^4 > 2 × 10^4 because 5 > 2.
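
A minimal sketch of this comparison rule, for numbers given as (mantissa, exponent) pairs with the mantissa normalized to lie in [1, 10):

    def compare_sci(a, b):
        """Return -1, 0 or 1 comparing a = (mantissa, exponent) with b."""
        (ma, ea), (mb, eb) = a, b
        if ea != eb:
            return -1 if ea < eb else 1
        if ma != mb:
            return -1 if ma < mb else 1
        return 0

    print(compare_sci((5.0, 4), (2.0, 5)))  # -1: 5x10^4 is less than 2x10^5
    print(compare_sci((5.0, 4), (2.0, 4)))  #  1: 5x10^4 is greater than 2x10^4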

Tetration with base 10 gives the sequence 10 \uparrow \uparrow n=10 \to n \to 2=(10\uparrow)^n 1, the power towers of numbers 10, where (10\uparrow)^n denotes a functional power of the function f(n) = 10n (the function also expressed by the suffix "-plex" as in googolplex, see the Googol family).

These are very round numbers, each representing an order of magnitude in a generalized sense. A crude way of specifying how large a number is is to say between which two numbers in this sequence it lies.

More accurately, numbers in between can be expressed in the form (10\uparrow)^n a, i.e., with a power tower of 10s and a number at the top, possibly in scientific notation, e.g. 10^{10^{10^{10^{10^{4.829}}}}}, a number between 10\uparrow\uparrow 5 and 10\uparrow\uparrow 6 (note that 10 \uparrow\uparrow n < (10\uparrow)^n a < 10 \uparrow\uparrow (n+1) if 1 < a < 10). (See also extension of tetration to real heights.)

Thus googolplex is 10^{10^{100}} = (10\uparrow)^2 100 = (10\uparrow)^3 2

Another example:

2 \uparrow\uparrow\uparrow 4 = 2 \uparrow\uparrow 65,536 (a power tower of 65,536 copies of 2)
  \approx (10\uparrow)^{65,531}(6.0 \times 10^{19,728}) \approx (10\uparrow)^{65,533} 4.3
(between 10\uparrow\uparrow 65,533 and 10\uparrow\uparrow 65,534)

Thus the "order of magnitude" of a number (on a larger scale than usually meant), can be characterized by the number of times (n) one has to take the log10 to get a number between 1 and 10. Thus, the number is between 10\uparrow\uparrow n and 10\uparrow\uparrow (n+1). As explained, a more accurate description of a number also specifies the value of this number between 1 and 10, or the previous number (taking the logarithm one time less) between 10 and 1010, or the next, between 0 and 1.

Note that

10^{(10\uparrow)^{n}x}=(10\uparrow)^{n}10^x

I.e., if a number x is too large for a representation (10\uparrow)^{n}x we can make the power tower one higher, replacing x by log10x, or find x from the lower-tower representation of the log10 of the whole number. If the power tower would contain one or more numbers different from 10, the two approaches would lead to different results, corresponding to the fact that extending the power tower with a 10 at the bottom is then not the same as extending it with a 10 at the top (but, of course, similar remarks apply if the whole power tower consists of copies of the same number, different from 10).

If the height of the tower is large, the various representations for large numbers can be applied to the height itself. If the height is given only approximately, giving a value at the top does not make sense, so we can use the double-arrow notation, e.g. 10\uparrow\uparrow(7.21\times 10^8). If the value after the double arrow is a very large number itself, the above can recursively be applied to that value.

Examples:

10\uparrow\uparrow 10^{\,\!10^{10^{3.81\times 10^{17}}}} (between 10\uparrow\uparrow\uparrow 2 and 10\uparrow\uparrow\uparrow 3)
10\uparrow\uparrow 10\uparrow\uparrow (10\uparrow)^{497}(9.73\times 10^{32})=(10\uparrow\uparrow)^{2} (10\uparrow)^{497}(9.73\times 10^{32}) (between 10\uparrow\uparrow\uparrow 4 and 10\uparrow\uparrow\uparrow 5)

Similarly to the above, if the exponent of (10\uparrow) is not exactly given then giving a value at the right does not make sense, and we can, instead of using the power notation of (10\uparrow), add 1 to the exponent of (10\uparrow\uparrow), so we get e.g. (10\uparrow\uparrow)^{3} (2.8\times 10^{12}).

If the exponent of (10\uparrow \uparrow) is large, the various representations for large numbers can be applied to this exponent itself. If this exponent is not exactly given then, again, giving a value at the right does not make sense, and we can, instead of using the power notation of (10\uparrow \uparrow), use the triple arrow operator, e.g. 10\uparrow\uparrow\uparrow(7.3\times 10^{6}).

If the right-hand argument of the triple arrow operator is large the above applies to it, so we have e.g. 10\uparrow\uparrow\uparrow(10\uparrow\uparrow)^{2} (10\uparrow)^{497}(9.73\times 10^{32}) (between 10\uparrow\uparrow\uparrow 10\uparrow\uparrow\uparrow 4 and 10\uparrow\uparrow\uparrow 10\uparrow\uparrow\uparrow 5). This can be done recursively, so we can have a power of the triple arrow operator.

We can proceed with operators with higher numbers of arrows, written \uparrow^n.

Compare this notation with the hyper operator and the Conway chained arrow notation:

a\uparrow^n b = ( a → b → n ) = hyper(a, n + 2, b)

An advantage of the first is that when considered as function of b, there is a natural notation for powers of this function (just like when writing out the n arrows): (a\uparrow^n)^k b. For example:

(10\uparrow^2)^3 b = ( 10 → ( 10 → ( 10 → b → 2 ) → 2 ) → 2 )

and only in special cases the long nested chain notation is reduced; for b = 1 we get:

10\uparrow^3 3 = (10\uparrow^2)^3 1 = ( 10 → 3 → 3 )

Since the b can also be very large, in general we write a number with a sequence of powers (10 \uparrow^n)^{k_n} with decreasing values of n (with exactly given integer exponents kn) with at the end a number in ordinary scientific notation. Whenever a kn is too large to be given exactly, the value of k_{n+1} is increased by 1 and everything to the right of (10 \uparrow^{n+1})^{k_{n+1}} is rewritten.

For describing numbers approximately, deviations from the decreasing order of values of n are not needed. For example, 10 \uparrow (10 \uparrow \uparrow)^5 a=(10 \uparrow \uparrow)^6 a, and 10 \uparrow (10 \uparrow \uparrow \uparrow 3)=10 \uparrow \uparrow (10 \uparrow \uparrow 10 + 1)\approx 10 \uparrow \uparrow \uparrow 3. Thus we have the somewhat counterintuitive result that a number x can be so large that, in a way, x and 10x are "almost equal" (for arithmetic of large numbers see also below).

If the superscript of the upward arrow is large, the various representations for large numbers can be applied to this superscript itself. If this superscript is not exactly given then there is no point in raising the operator to a particular power or to adjust the value on which it acts. We can simply use a standard value at the right, say 10, and the expression reduces to 10 \uparrow^n 10=(10 \to 10 \to n) with an approximate n. For such numbers the advantage of using the upward arrow notation no longer applies, and we can also use the chain notation.

The above can be applied recursively for this n, so we get the notation \uparrow^n in the superscript of the first arrow, etc., or we have a nested chain notation, e.g.:

(10 → 10 → (10 → 10 → 3 \times 10^5) ) = 10 \uparrow ^{10 \uparrow ^{3 \times 10^5} 10} 10 \!

If the number of levels gets too large to be convenient, a notation is used where this number of levels is written down as a number (like using the superscript of the arrow instead of writing many arrows). Introducing a function f(n)=10 \uparrow^{n} 10 = (10 → 10 → n), these levels become functional powers of f, allowing us to write a number in the form f^m(n) where m is given exactly and n is an integer which may or may not be given exactly (for the example: f^2(3 \times 10^5)). If n is large we can use any of the above for expressing it. The "roundest" of these numbers are those of the form f^m(1) = (10→10→m→2). For example, (10 \to 10 \to 3\to 2) = 10 \uparrow ^{10 \uparrow ^{10^{10}} 10} 10 \!

Compare the definition of Graham's number: it uses numbers 3 instead of 10 and has 64 arrow levels and the number 4 at the top; thus  G < 3\rightarrow 3\rightarrow 65\rightarrow 2 <(10 \to 10 \to 65\to 2)=f^{65}(1), but also G < f^{64}(4) < f^{65}(1).

If m in f^m(n) is too large to give exactly we can use a fixed n, e.g. n = 1, and apply the above recursively to m, i.e., the number of levels of upward arrows is itself represented in the superscripted upward-arrow notation, etc. Using the functional power notation of f this gives multiple levels of f. Introducing a function g(n) = f^n(1) these levels become functional powers of g, allowing us to write a number in the form g^m(n) where m is given exactly and n is an integer which may or may not be given exactly. We have (10→10→m→3) = g^m(1). If n is large we can use any of the above for expressing it. Similarly we can introduce a function h, etc. If we need many such functions we can better number them instead of using a new letter every time, e.g. as a subscript, so we get numbers of the form f_k^m(n) where k and m are given exactly and n is an integer which may or may not be given exactly. Using k=1 for the f above, k=2 for g, etc., we have (10→10→n→k) = f_k(n)=f_{k-1}^n(1). If n is large we can use any of the above for expressing it. Thus we get a nesting of forms {f_k}^{m_k} where going inward the k decreases, and with as inner argument a sequence of powers (10 \uparrow^n)^{p_n} with decreasing values of n (where all these numbers are exactly given integers) with at the end a number in ordinary scientific notation.

When k is too large to be given exactly, the number concerned can be expressed as f_n(10)=(10→10→10→n) with an approximate n. Note that the process of going from the sequence 10^n=(10→n) to the sequence 10 \uparrow^n 10=(10→10→n) is very similar to going from the latter to the sequence f_n(10)=(10→10→10→n): it is the general process of adding an element 10 to the chain in the chain notation; this process can be repeated again (see also the previous section). Numbering the subsequent versions of this function, a number can be described using functions {f_{qk}}^{m_{qk}}, nested in lexicographical order with q the most significant number, but with decreasing order for q and for k; as inner argument we have a sequence of powers (10 \uparrow^n)^{p_n} with decreasing values of n (where all these numbers are exactly given integers) with at the end a number in ordinary scientific notation.

For a number too large to write down in the Conway chained arrow notation we can describe how large it is by the length of that chain, for example only using elements 10 in the chain; in other words, we specify its position in the sequence 10, 10→10, 10→10→10, ... If even the position in the sequence is a large number we can apply the same techniques again for that.

Examples of numbers in numerical order

Numbers expressible in decimal notation:

  • 2^2 = 4
  • 2^{2^2} = 2 ↑↑ 3 = 16
  • 3^3 = 27
  • 4^4 = 256
  • 5^5 = 3,125
  • 6^6 = 46,656
  • 2^{2^{2^2}} = 2 ↑↑ 4 = 2 ↑↑↑ 3 = 65,536
  • 7^7 = 823,543
  • 10^6 = 1,000,000 = 1 million
  • 8^8 = 16,777,216
  • 9^9 = 387,420,489
  • 10^9 = 1,000,000,000 = 1 billion
  • 10^10 = 10,000,000,000
  • 10^12 = 1,000,000,000,000 = 1 trillion
  • 3^{3^3} = 3 ↑↑ 3 = 7,625,597,484,987 ≈ 7.63 × 10^12
  • 10^15 = 1,000,000,000,000,000 = 1 million billion = 1 peta-

Numbers expressible in scientific notation:

  • googol = 10^100
  • 4^{4^4} = 4 ↑↑ 3 ≈ 1.34 × 10^154 ≈ (10 ↑)^2 2.2
  • Approximate number of Planck volumes comprising the volume of the observable universe = 8.5 × 10^184
  • 5^{5^5} = 5 ↑↑ 3 ≈ 1.91 × 10^2184 ≈ (10 ↑)^2 3.3
  • 2^{2^{2^{2^2}}} = 2 ↑↑ 5 = 2^{65,536} ≈ 2.0 × 10^{19,728} ≈ (10 ↑)^2 4.3
  • 6^{6^6} = 6 ↑↑ 3 ≈ 2.66 × 10^{36,305} ≈ (10 ↑)^2 4.6
  • 7^{7^7} = 7 ↑↑ 3 ≈ 3.76 × 10^{695,974} ≈ (10 ↑)^2 5.8
  • M_{43,112,609} ≈ 3.16 × 10^{12,978,188} ≈ 10^{10^{7.1}} = (10 ↑)^2 7.1, the 47th and, as of April 2010, the largest known Mersenne prime
  • 8^{8^8} = 8 ↑↑ 3 ≈ 6.01 × 10^{15,151,335} ≈ (10 ↑)^2 7.2
  • 9^{9^9} = 9 ↑↑ 3 ≈ 4.28 × 10^{369,693,099} ≈ (10 ↑)^2 8.6
  • 10^{10^{10}} = 10 ↑↑ 3 = 10^{10,000,000,000} = (10 ↑)^3 1
  • 3^{3^{3^3}} = 3 ↑↑ 4 ≈ 1.26 × 10^{3,638,334,640,024} ≈ (10 ↑)^3 1.10

Numbers expressible in (10 ↑)^n k notation:

  • googolplex = 10^{10^{100}} = (10 ↑)^3 2
  • 2^{2^{2^{2^{2^2}}}} = 2 ↑↑ 6 = 2^{2^{65,536}} ≈ 2^{(10 ↑)^2 4.3} ≈ 10^{(10 ↑)^2 4.3} = (10 ↑)^3 4.3
  • 10^{10^{10^{10}}} = 10 ↑↑ 4 = (10 ↑)^4 1
  • 3^{3^{3^{3^3}}} = 3 ↑↑ 5 ≈ 3^{10^{3.6 × 10^{12}}} ≈ (10 ↑)^4 1.10
  • 2^{2^{2^{2^{2^{2^2}}}}} = 2 ↑↑ 7 ≈ (10 ↑)^4 4.3
  • 10 ↑↑ 5 = (10 ↑)^5 1
  • 3 ↑↑ 6 ≈ (10 ↑)^5 1.10
  • 2 ↑↑ 8 ≈ (10 ↑)^5 4.3
  • 10 ↑↑ 6 = (10 ↑)^6 1
  • 10 ↑↑↑ 2 = 10 ↑↑ 10 = (10 ↑)^{10} 1
  • 2 ↑↑↑↑ 3 = 2 ↑↑↑ 4 = 2 ↑↑ 65,536 ≈ (10 ↑)^{65,533} 4.3, which is between 10 ↑↑ 65,533 and 10 ↑↑ 65,534

Bigger numbers:

  • 3 ↑↑↑ 3 = 3 ↑↑ (3 ↑↑ 3) ≈ 3 ↑↑ 7.6 × 10^12 ≈ 10 ↑↑ 7.6 × 10^12 is between (10 ↑↑)^2 2 and (10 ↑↑)^2 3
  • 10\uparrow\uparrow\uparrow 3=(10 \uparrow \uparrow)^3 1 = ( 10 → 3 → 3 )
  • (10\uparrow\uparrow)^2 11
  • (10\uparrow\uparrow)^2 10^{\,\!10^{10^{3.81\times 10^{17}}}}
  • 10\uparrow\uparrow\uparrow 4=(10 \uparrow \uparrow)^4 1 = ( 10 → 4 → 3 )
  • (10\uparrow\uparrow)^{2} (10\uparrow)^{497}(9.73\times 10^{32})
  • 10\uparrow\uparrow\uparrow 5=(10 \uparrow \uparrow)^5 1 = ( 10 → 5 → 3 )
  • 10\uparrow\uparrow\uparrow 6=(10 \uparrow \uparrow)^6 1 = ( 10 → 6 → 3 )
  • 10\uparrow\uparrow\uparrow 7=(10 \uparrow \uparrow)^7 1 = ( 10 → 7 → 3 )
  • 10\uparrow\uparrow\uparrow 8=(10 \uparrow \uparrow)^8 1 = ( 10 → 8 → 3 )
  • 10\uparrow\uparrow\uparrow 9=(10 \uparrow \uparrow)^9 1 = ( 10 → 9 → 3 )
  • 10 \uparrow \uparrow \uparrow \uparrow 2 = 10\uparrow\uparrow\uparrow 10=(10 \uparrow \uparrow)^{10} 1 = ( 10 → 2 → 4 ) = ( 10 → 10 → 3 )
  • The first term in the definition of Graham's number, g1 = 3 ↑↑↑↑ 3 = 3 ↑↑↑ (3 ↑↑↑ 3) ≈ 3 ↑↑↑ (10 ↑↑ 7.6 × 10^12) ≈ 10 ↑↑↑ (10 ↑↑ 7.6 × 10^12) is between (10 ↑↑↑)^2 2 and (10 ↑↑↑)^2 3 (see the discussion of the magnitude of Graham's number)
  • 10\uparrow\uparrow\uparrow\uparrow 3=(10 \uparrow \uparrow\uparrow)^3 1 = (10 → 3 → 4)
  • 4 \uparrow \uparrow \uparrow \uparrow 4 = ( 4 → 4 → 4 ) \approx (10 \uparrow \uparrow \uparrow)^2 (10 \uparrow \uparrow)^3 154
  • 10\uparrow\uparrow\uparrow\uparrow 4=(10 \uparrow \uparrow\uparrow)^4 1 = ( 10 → 4 → 4 )
  • 10\uparrow\uparrow\uparrow\uparrow 5=(10 \uparrow \uparrow\uparrow)^5 1 = ( 10 → 5 → 4 )
  • 10\uparrow\uparrow\uparrow\uparrow 6=(10 \uparrow \uparrow\uparrow)^6 1 = ( 10 → 6 → 4 )
  • 10\uparrow\uparrow\uparrow\uparrow 7=(10 \uparrow \uparrow\uparrow)^7 1 = ( 10 → 7 → 4 )
  • 10\uparrow\uparrow\uparrow\uparrow 8=(10 \uparrow \uparrow\uparrow)^8 1 = ( 10 → 8 → 4 )
  • 10\uparrow\uparrow\uparrow\uparrow 9=(10 \uparrow \uparrow\uparrow)^9 1 = ( 10 → 9 → 4 )
  • 10 \uparrow \uparrow \uparrow \uparrow \uparrow 2 = 10\uparrow\uparrow\uparrow\uparrow 10=(10 \uparrow \uparrow\uparrow)^{10} 1 = ( 10 → 2 → 5 ) = ( 10 → 10 → 4 )
  • ( 2 → 3 → 2 → 2 ) = ( 2 → 3 → 8 )
  • ( 3 → 2 → 2 → 2 ) = ( 3 → 2 → 9 ) = ( 3 → 3 → 8 )
  • ( 10 → 10 → 10 ) = ( 10 → 2 → 11 )
  • ( 10 → 2 → 2 → 2 ) = ( 10 → 2 → 100 )
  • ( 10 → 10 → 2 → 2 ) = ( 10 → 2 → 1010 ) = 10 \uparrow ^{10^{10}} 10 \!
  • The second term in the definition of Graham's number, g2 = 3 ↑^{g1} 3 > 10 ↑^{g1 − 1} 10.
  • ( 10 → 10 → 3 → 2 ) = (10 → 10 → (10 → 10 → 1010) ) = 10 \uparrow ^{10 \uparrow ^{10^{10}} 10} 10 \!
  • g3 = (3 → 3 → g2) > (10 → 10 → g2 - 1) > (10 → 10 → 3 → 2)
  • g4 = (3 → 3 → g3) > (10 → 10 → g3 - 1) > (10 → 10 → 4 → 2)
  • ...
  • g9 = (3 → 3 → g8) is between (10 → 10 → 9 → 2) and (10 → 10 → 10 → 2)
  • ( 10 → 10 → 10 → 2 )
  • g10 = (3 → 3 → g9) is between (10 → 10 → 10 → 2) and (10 → 10 → 11 → 2)
  • ...
  • g63 = (3 → 3 → g62) is between (10 → 10 → 63 → 2) and (10 → 10 → 64 → 2)
  • ( 10 → 10 → 64 → 2 )
  • Graham's number, g64[4]
  • ( 10 → 10 → 65 → 2 )
  • ( 10 → 10 → 10 → 3 )
  • ( 10 → 10 → 10 → 4 )

Comparison of base values

The following illustrates the effect of a base different from 10, in this case base 100. It also illustrates representations of numbers and the arithmetic involved.

100^{12} = 10^{24}; with base 10 the exponent is doubled.

100^{100^{12}}=10^{2\times 10^{24}}, ditto.

100^{100^{100^{12}}}=10^{10^{2\times 10^{24}+0.3}}, the highest exponent is very little more than doubled (increased by log10 2 ≈ 0.3).

  • 100\uparrow\uparrow 2=10^ {200}
  • 100\uparrow\uparrow 3=10^ {2 \times 10^ {200}}
  • 100\uparrow\uparrow 4=(10\uparrow)^2 (2 \times 10^ {200}+0.3)=(10\uparrow)^2 (2\times 10^ {200})=(10\uparrow)^3 200.3=(10\uparrow)^4 2.3
  • 100\uparrow\uparrow n=(10\uparrow)^{n-2} (2 \times 10^ {200})=(10\uparrow)^{n-1} 200.3=(10\uparrow)^{n}2.3<10\uparrow\uparrow (n+1) (thus if n is large it seems fair to say that 100\uparrow\uparrow n is "approximately equal to" 10\uparrow\uparrow n)
  • 100\uparrow\uparrow\uparrow 2=(10\uparrow)^{98} (2 \times 10^ {200})=(10\uparrow)^{100} 2.3
  • 100\uparrow\uparrow\uparrow 3=10\uparrow\uparrow(10\uparrow)^{98} (2 \times 10^ {200})=10\uparrow\uparrow(10\uparrow)^{100} 2.3
  • 100\uparrow\uparrow\uparrow n=(10\uparrow\uparrow)^{n-2}(10\uparrow)^{98} (2 \times 10^ {200})=(10\uparrow\uparrow)^{n-2}(10\uparrow)^{100} 2.3<10\uparrow\uparrow\uparrow (n+1) (compare 10\uparrow\uparrow\uparrow n=(10\uparrow\uparrow)^{n-2}(10\uparrow)^{10}1<10\uparrow\uparrow\uparrow (n+1); thus if n is large it seems fair to say that 100\uparrow\uparrow\uparrow n is "approximately equal to" 10\uparrow\uparrow\uparrow n)
  • 100\uparrow\uparrow\uparrow\uparrow 2=(10\uparrow\uparrow)^{98}(10\uparrow)^{100} 2.3 (compare 10\uparrow\uparrow\uparrow\uparrow 2=(10\uparrow\uparrow)^{8}(10\uparrow)^{10}1)
  • 100\uparrow\uparrow\uparrow\uparrow 3=10\uparrow\uparrow\uparrow(10\uparrow\uparrow)^{98}(10\uparrow)^{100} 2.3 (compare 10\uparrow\uparrow\uparrow\uparrow 3=10\uparrow\uparrow\uparrow(10\uparrow\uparrow)^{8}(10\uparrow)^{10}1)
  • 100\uparrow\uparrow\uparrow\uparrow n=(10\uparrow\uparrow\uparrow)^{n-2}(10\uparrow\uparrow)^{98}(10\uparrow)^{100} 2.3 (compare 10\uparrow\uparrow\uparrow\uparrow n=(10\uparrow\uparrow\uparrow)^{n-2}(10\uparrow\uparrow)^{8}(10\uparrow)^{10}1; if n is large this is "approximately" equal)
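
These relations can be checked numerically by working with logarithms (the numbers themselves are far beyond floating point); the sketch below applies log10 repeatedly to log10(100↑↑2) = 200 and log10(100↑↑3) = 2 × 10^200 and lands near 2.30 in both cases, matching the (10↑)^n 2.3 form above:

    import math

    def iterate_log10(x: float) -> float:
        # Apply log10 until the value drops below 10.
        while x >= 10:
            x = math.log10(x)
        return x

    log_tower_2 = 200.0   # log10(100^^2) = log10(10^200)
    log_tower_3 = 2e200   # log10(100^^3) = 2 x 10^200

    print(iterate_log10(log_tower_2))  # about 2.301
    print(iterate_log10(log_tower_3))  # about 2.302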

Accuracy

Note that for a number 10^n, one unit change in n changes the result by a factor 10. In a number like 10^{6.2 \times 10^3}, with the 6.2 the result of proper rounding using significant figures, the true value of the exponent may be 50 less or 50 more. Hence the result may be a factor 10^50 too large or too small. This seems like extremely poor accuracy, but for such a large number it may be considered fair (a large error in a large number may be "relatively small" and therefore acceptable).
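
In code, the point is simply that an uncertainty of ±0.05 × 10^3 = ±50 in the exponent translates into a factor of 10^50 in the value:

    exponent_uncertainty = 0.05e3      # half a unit in the last rounded digit of 6.2
    print(10 ** exponent_uncertainty)  # 1e+50: the value may be off by this factor either way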

Accuracy for very large numbers

With extremely large numbers, the relative error may be large, yet there may still be a sense in which we want to consider the numbers as "close in magnitude". For example, consider

10^{10} and 10^9

The relative error is

1 - \frac{10^9}{10^{10}} = 1 - \frac{1}{10} = 90\%

a large relative error. However, we can also consider the relative error in the logarithms; in this case, the logarithms (to base 10) are 10 and 9, so the relative error in the logarithms is only 10%.

The point is that exponential functions magnify relative errors greatly – if a and b have a small relative error,

10^a and 10^b

the relative error is larger, and

10^{10^a} and 10^{10^b}

will have even larger relative error. The question then becomes: on which level of iterated logarithms do we wish to compare two numbers? There is a sense in which we may want to consider

10^{10^{10}} and 10^{10^9}

to be "close in magnitude". The relative error between these two numbers is large, and the relative error between their logarithms is still large; however, the relative error in their second-iterated logarithms is small:

\log_{10}(\log_{10}(10^{10^{10}})) = 10 and \log_{10}(\log_{10}(10^{10^9})) = 9

Such comparisons of iterated logarithms are common, e.g., in analytic number theory.
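
The example above can be reproduced by working with the logarithms directly (the numbers themselves are far too large for floating point); this sketch compares N1 = 10^(10^10) and N2 = 10^(10^9) at the first and second logarithmic levels:

    import math

    log1, log2 = 10.0 ** 10, 10.0 ** 9  # log10(N1) and log10(N2)
    loglog1, loglog2 = math.log10(log1), math.log10(log2)  # 10 and 9

    print(1 - log2 / log1)        # 0.9: 90% relative error in the logarithms
    print(1 - loglog2 / loglog1)  # about 0.1: 10% relative error in the second logarithms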

Approximate arithmetic for very large numbers

There are some general rules relating to the usual arithmetic operations performed on very large numbers:

  • The sum and the product of two very large numbers are both "approximately" equal to the larger one.
  • (10^a)^{\,\!10^b}=10^{a 10^b}=10^{10^{b+\log _{10} a}}

Hence:

  • A very large number raised to a very large power is "approximately" equal to the larger of the following two values: the first value and 10 to the power the second. For example, for very large n we have n^n\approx 10^n (see e.g. the computation of mega) and also 2^n\approx 10^n. Thus 2\uparrow\uparrow 65536 > 10\uparrow\uparrow 65533; see the table of examples above.
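
A sketch of what "approximately equal" means here, again working in log space: for n = 10^50 the numbers n^n and 10^n differ enormously, but their base-10 logarithms of logarithms are already close:

    import math

    n = 10.0 ** 50
    log_n_pow_n = n * math.log10(n)  # log10(n^n) = n*log10(n), about 5e+51
    log_10_pow_n = n                 # log10(10^n) = n,         about 1e+50

    print(math.log10(log_n_pow_n))   # about 51.7
    print(math.log10(log_10_pow_n))  # 50.0 (close on the log-log scale)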

Large numbers in some noncomputable sequences

The busy beaver function Σ is an example of a function which grows faster than any computable function. Its value for even relatively small input is huge. The values of Σ(n) for n = 1, 2, 3, 4 are 1, 4, 6, 13 (sequence A028444 in OEIS). Σ(5) is not known but is definitely ≥ 4098. Σ(6) is at least 3.5 × 10^{18267}.

Some of the work by Harvey Friedman also involves sequences that grow faster than any computable function.[5]

Infinite numbers

Although all these numbers above are very large, they are all still finite. Certain fields of mathematics define infinite and transfinite numbers. For example, aleph-null is the cardinality of the infinite set of natural numbers, and aleph-one is the next greatest cardinal number. \mathfrak{c} is the cardinality of the reals. The proposition that \mathfrak{c} = \aleph_1 is known as the continuum hypothesis.

Notations

Notations for extremely large numbers include Knuth's up-arrow notation, the hyper operators, and Conway chained arrow notation, all used above.

These notations are essentially functions of integer variables, which increase very rapidly with those integers. Ever faster increasing functions can easily be constructed recursively by applying these functions with large integers as argument.

Note that a function with a vertical asymptote is not helpful in defining a very large number, although the function increases very rapidly: one has to define an argument very close to the asymptote, i.e. use a very small number, and constructing that is equivalent to constructing a very large number, e.g. the reciprocal.

Notes and references

  1. ^ Information Loss in Black Holes and/or Conscious Beings?, Don N. Page, Heat Kernel Techniques and Quantum Gravity (1995), S. A. Fulling (ed), p. 461. Discourses in Mathematics and its Applications, No. 4, Texas A&M University Department of Mathematics. arXiv:hep-th/9411193. ISBN 0963072838.
  2. ^ How to Get A Googolplex
  3. ^ Lloyd, Seth (2002). "Computational capacity of the universe". Phys. Rev. Lett. 88 (23): 237901. arXiv:quant-ph/0110141. Bibcode 2002PhRvL..88w7901L. doi:10.1103/PhysRevLett.88.237901. PMID 12059399. 
  4. ^ Regarding the comparison with the previous value: 10\uparrow ^n 10 < 3 \uparrow ^{n+1} 3, so starting the 64 steps with 1 instead of 4 more than compensates for replacing the numbers 3 by 10
  5. ^ http://www.math.ohio-state.edu/%7Efriedman/pdf/EnormousInt.12pt.6_1_00.pdf
