Decision tree model

In computational complexity theory and communication complexity theory, the decision tree model is a model of computation or communication in which an algorithm or communication process is treated as a decision tree, i.e., a sequence of branching operations based on comparisons of some quantities, with each comparison assigned unit computational cost.

The branching operations are called "tests" or "queries". In this setting the algorithm in question may be viewed as computing a Boolean function f: \{0,1\}^n \rightarrow \{0,1\}, where the input is accessed only through queries to its individual bits and the output is the final decision. Because each query may depend on the answers to the previous ones, the computation is naturally described by a binary tree.
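
As an illustration (a sketch added here, not part of the original text), the following Python snippet represents a small decision tree for the 3-bit majority function: each bit that is read counts as one query, and the answers to earlier queries determine which bit, if any, is read next. The function name majority_tree is purely illustrative.

    # Illustrative sketch: a decision tree for the 3-bit majority function
    # MAJ(x1, x2, x3). Each input bit that is read counts as one query, and
    # which bit is read next depends on the answers already obtained.
    def majority_tree(x):
        if x[0] == x[1]:      # queries x1 and x2 (two queries)
            return x[0]       # two equal bits already form a majority
        return x[2]           # x1 != x2, so the third bit decides (third query)

    # Exhaustive check against the definition of majority.
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                assert majority_tree((a, b, c)) == int(a + b + c >= 2)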

Several variants of decision tree models may be considered, depending on the complexity of the operations allowed in the computation of a single comparison and the way of branching.

Decision tree models are instrumental in establishing lower bounds on the computational complexity of certain classes of problems and algorithms: the worst-case complexity of a problem in this model equals the smallest depth among all decision trees that solve it, so a lower bound on the depth of every such tree is a lower bound on the worst-case number of queries. The complexity of a problem or an algorithm expressed in terms of the decision tree model is called its decision tree complexity or query complexity.

Classification by query computational complexity

Simple decision tree

The model in which every decision is based on a comparison of two numbers, performed in constant time, is called simply the decision tree model. It was introduced to establish the computational complexity of sorting and searching.[1]

The simplest illustration of this lower bound technique is the problem of finding the smallest of n numbers using only comparisons. In this case the decision tree model is a binary tree. Algorithms for this searching problem may result in n different outcomes (since any of the n given numbers may turn out to be the smallest one). Because a binary tree with n leaves has depth at least log_2 n, this gives a lower bound of Ω(log n) for the problem. However, this lower bound is known to be loose, since the following simple argument shows that at least n − 1 comparisons are needed: before the smallest number can be determined, every number except the smallest must "lose" (compare greater) in at least one comparison, and each comparison has at most one loser.
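
A minimal sketch (added here for illustration, with the hypothetical helper name find_min) of an algorithm matching this n − 1 bound: it scans the numbers once, charging one comparison per element after the first.

    # Illustrative sketch: finding the minimum of n numbers with exactly n - 1
    # comparisons; each comparison is one "query" in the decision tree model.
    def find_min(values):
        comparisons = 0
        current = values[0]
        for v in values[1:]:
            comparisons += 1       # one comparison per remaining element
            if v < current:        # every non-minimum "loses" at least once
                current = v
        return current, comparisons

    print(find_min([7, 3, 9, 1, 4]))   # (1, 4): n - 1 = 4 comparisons for n = 5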

Along the same lines, the lower bound of Ω(n log n) for sorting may be proved: a comparison sort must be able to produce any of the n! orderings of its input, so its decision tree has at least n! leaves. In this case, the existence of numerous comparison-sorting algorithms having this time complexity, such as mergesort and heapsort, demonstrates that the bound is tight.
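
Spelling out the counting step (a standard calculation, added here for completeness): a binary tree with at least n! leaves has depth at least

\log_2(n!) = \sum_{k=1}^{n} \log_2 k \;\geq\; \sum_{k=\lceil n/2\rceil}^{n} \log_2 k \;\geq\; \frac{n}{2}\log_2\frac{n}{2} = \Omega(n \log n).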

Linear decision tree

Algebraic decision tree

Classification by query computational model

Deterministic decision tree

If the output of a decision tree is f(x) for all x\in \{0,1\}^n , the decision tree is said to "compute" f. The depth of a tree is the maximum number of queries that can be made before a leaf is reached and a result obtained. D(f), the deterministic decision tree complexity of f, is the smallest depth among all deterministic decision trees that compute f.
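
For very small n, D(f) can be computed directly from this definition by recursion over the possible queries. The following sketch (illustrative only; the name decision_tree_depth is not standard, and the enumeration of all 2^n inputs makes it feasible only for tiny n) takes the minimum over query choices of one plus the worse of the two restricted subproblems.

    from itertools import product

    def decision_tree_depth(f, n):
        memo = {}

        def depth(inputs):             # inputs: frozenset of still-consistent bit tuples
            if inputs in memo:
                return memo[inputs]
            if len({f(x) for x in inputs}) <= 1:
                memo[inputs] = 0       # f is constant here: a leaf, no query needed
                return 0
            best = n
            for i in range(n):
                zero = frozenset(x for x in inputs if x[i] == 0)
                one = inputs - zero
                if zero and one:       # querying bit i actually splits the inputs
                    best = min(best, 1 + max(depth(zero), depth(one)))
            memo[inputs] = best
            return best

        return depth(frozenset(product((0, 1), repeat=n)))

    # Example: the 3-bit majority function needs depth 3, i.e. D(MAJ_3) = 3.
    print(decision_tree_depth(lambda x: int(x[0] + x[1] + x[2] >= 2), 3))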

Randomized decision tree

One way to define a randomized decision tree is to add additional nodes to the tree, each controlled by a probability p_i. An equivalent definition is to select a whole decision tree at the beginning of the computation from a set of decision trees according to a probability distribution. Based on this second definition, the complexity of a randomized tree is defined as the greatest depth among all the trees chosen with probability greater than 0. R_2(f) is defined as the complexity of the lowest-depth randomized decision tree whose result is f(x) with probability at least 2/3 for all x\in \{0,1\}^n (i.e., with bounded two-sided error).

R_2(f) is known as the Monte Carlo randomized decision-tree complexity, because the result is allowed to be incorrect with bounded two-sided error. The Las Vegas decision-tree complexity R_0(f) measures the expected depth of a decision tree that must be correct (i.e., has zero error). There is also a one-sided bounded-error version, known as R_1(f).
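
A small sketch (illustrative, using the second definition above; the function name randomized_or is hypothetical): the OR of n bits evaluated by a deterministic tree chosen at random, here by picking a uniformly random order in which to read the bits and stopping at the first 1. The answer is always correct, as required for the zero-error measure R_0(f), and the number of queries made is a random variable whose expectation can be much smaller than the worst-case depth n.

    import random

    # Illustrative sketch: a randomized decision tree for OR_n obtained by
    # selecting a deterministic tree at random -- here, a uniformly random
    # order in which to query the bits, stopping at the first 1.
    def randomized_or(x):
        order = list(range(len(x)))
        random.shuffle(order)          # selects one deterministic tree at random
        queries = 0
        for i in order:
            queries += 1               # each bit read is one query
            if x[i] == 1:
                return 1, queries      # a single 1 already determines OR(x) = 1
        return 0, queries              # all bits read: OR(x) = 0

    print(randomized_or([0, 1, 0, 1, 0, 1, 0, 1]))   # answer 1, few queries on average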

Nondeterministic decision tree

The nondeterministic decision tree complexity of a function is known more commonly as the certificate complexity of that function. It measures the number of input bits that a nondeterministic algorithm would need to look at in order to evaluate the function with certainty.
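
For intuition, a certificate for an input x is a set of positions whose values already force the value f(x). The sketch below (illustrative only; the brute-force search over all subsets, under the hypothetical name certificate_complexity, is usable only for tiny n) computes the certificate complexity directly from this definition. For OR on 3 bits it returns 3, because certifying the value 0 requires looking at every position, while a single 1 certifies the value 1.

    from itertools import combinations, product

    # Illustrative sketch: certificate complexity by brute force. C(f, x) is the
    # size of the smallest set S of positions such that every input agreeing
    # with x on S has the value f(x); C(f) is the maximum of C(f, x) over x.
    def certificate_complexity(f, n):
        inputs = list(product((0, 1), repeat=n))
        worst = 0
        for x in inputs:
            for k in range(n + 1):
                if any(all(f(y) == f(x)
                           for y in inputs
                           if all(y[i] == x[i] for i in S))
                       for S in combinations(range(n), k)):
                    worst = max(worst, k)   # smallest certificate for x has size k
                    break
        return worst

    print(certificate_complexity(lambda x: int(any(x)), 3))   # C(OR_3) = 3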

Quantum decision tree

The quantum decision tree complexity Q_2(f) is the depth of the lowest-depth quantum decision tree that gives the result f(x) with probability at least 2/3 for all x\in \{0,1\}^n . Another quantity, Q_E(f), is defined as the depth of the lowest-depth quantum decision tree that gives the result f(x) with probability 1 in all cases (i.e., computes f exactly). Q_2(f) and Q_E(f) are more commonly known as quantum query complexities, because the direct definition of a quantum decision tree is more complicated than in the classical case. As in the randomized case, Q_0(f) and Q_1(f) are defined analogously.

Relationship between different models

It follows immediately from the definitions that for all n-bit Boolean functions f, Q_2(f) \leq R_2(f) \leq R_1(f) \leq R_0(f) \leq D(f) \leq n, and Q_2(f) \leq Q_E(f) \leq D(f) \leq n.

Blum and Impagliazzo,[2] Hartmanis and Hemachandra,[3] and Tardos[4] independently discovered that D(f) \leq R_0(f)^2. Nisan found that the Monte Carlo randomized decision tree complexity is also polynomially related to the deterministic decision tree complexity: D(f) = O(R_2(f)^3).[5] (Nisan also showed that D(f) = O(R_1(f)^2).) A corollary of this result is that R_0(f) = O(R_2(f)^3). This inequality may be loose, however; no example is known of even a super-linear separation between R_0(f) and R_2(f).[6]

The quantum decision tree complexity Q_2(f) is also polynomially related to D(f). Midrijanis showed that D(f) = O(Q_E(f)^3),[7][8] improving a quartic bound due to Beals et al.[9] Beals et al. also showed that D(f) = O(Q_2(f)^6), and this is still the best known bound. However, the largest known gap between deterministic and quantum query complexities is only quadratic. A quadratic gap is achieved for the OR function: D(OR_n) = n, while Q_2(OR_n) = \Theta(\sqrt{n}) (the upper bound is given by Grover's search algorithm).

It is important to note that these polynomial relationships are valid only for total Boolean functions. For partial Boolean functions, whose domain is a proper subset of \{0,1\}^n, an exponential separation between Q_E(f) and D(f) is possible; the first example of such a problem was discovered by Deutsch and Jozsa. The same example also gives an exponential separation between R_2(f) and D(f).
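
To make the Deutsch–Jozsa example concrete, the following sketch (illustrative only; the helper names hadamard and deutsch_jozsa are not standard, and the quantum state is simulated classically with numpy, so the simulation itself evaluates f everywhere even though the simulated circuit makes a single oracle query) decides whether a function promised to be constant or balanced is constant with one phase-oracle query, whereas a deterministic algorithm must examine more than half of the 2^n truth-table entries in the worst case.

    import numpy as np
    from itertools import product

    # Illustrative sketch: classical simulation of the Deutsch-Jozsa algorithm
    # (phase-oracle form). The simulated circuit makes one oracle query:
    # prepare the uniform superposition, apply the phase (-1)^f(x), apply the
    # Hadamard transform, and measure. The all-zeros outcome has probability 1
    # if f is constant and probability 0 if f is balanced.
    def hadamard(n):
        H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
        H = np.array([[1.0]])
        for _ in range(n):
            H = np.kron(H, H1)
        return H

    def deutsch_jozsa(f, n):
        dim = 2 ** n
        state = np.ones(dim) / np.sqrt(dim)                    # uniform superposition
        phases = np.array([(-1.0) ** f(x)
                           for x in product((0, 1), repeat=n)])
        state = hadamard(n) @ (phases * state)                 # one phase-oracle query
        return "constant" if abs(state[0]) ** 2 > 0.5 else "balanced"

    print(deutsch_jozsa(lambda x: 1, 3))      # constant function -> "constant"
    print(deutsch_jozsa(lambda x: x[0], 3))   # balanced function -> "balanced"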

These relationships can be summarized by the following inequalities, which hold up to constant factors:[8]

D(f) = O(R_0(f)^2), \quad D(f) = O(R_2(f)^3), \quad D(f) = O(Q_E(f)^3), \quad D(f) = O(Q_2(f)^6).

References

  1. ^ "Data structures and algorithms, by Alfred V. Aho, John E. Hopcroft, Jeffrey D. Ullman
  2. ^ a b Blum, M.; Impagliazzo, R. (1987). "Generic oracles and oracle classes". Proceedings of 18th IEEE FOCS. pp. 118–126. 
  3. ^ a b Hartmanis, J.; Hemachandra, L. (1987), "One-way functions, robustness, and non-isomorphism of NP-complete sets", Technical Report DCS TR86-796, Cornell University 
  4. ^ a b Tardos, G. (1989). "Query complexity, or why is it difficult to separate NPA ∩ coNPA from PA by random oracles A?". Combinatorica 9: 385–392. doi:10.1007/BF02125350. 
  5. ^ a b c Nisan, N. (1989). "CREW PRAMs and decision trees". Proceedings of 21st ACM STOC. pp. 327–335. 
  6. ^ Santha, Miklos (1995), On the Monte Carlo Boolean Decision Tree Complexity of Read-Once Formulae, http://www.lri.fr/~santha/Papers/s95.ps.gz  (ps format)
  7. ^ Midrijanis, Gatis (2004), "Exact quantum query complexity for total Boolean functions", arXiv:quant-ph/0403168, arXiv:quant-ph/0403168 
  8. ^ a b c d Midrijanis, Gatis (2005), "On randomized and quantum query complexities", arXiv:quant-ph/0501142, arXiv:quant-ph/0501142 
  9. ^ a b c Beals, R.; Buhrman, H.; Cleve, R.; Mosca, M.; de Wolf, R. (2001). "Exact quantum query complexity for total Boolean functions". Journal of ACM 47: 778–797. 
  10. ^ H. Buhrman, R. Cleve, R. de Wolf, and Ch. Zalka. Bounds for Small-Error and Zero-Error Quantum Algorithms. In 40th IEEE Symposium on Foundations of Computer Science (FOCS'99), pp.358-368. cs.CC/9904019, 1999.
  11. ^ S. Aaronson. Quantum Certificate Complexity. IEEE Conference on Computational Complexity, pp. 171-178, 2003.
