Type polymorphism

In computer science, polymorphism is a programming language feature that allows values of different data types to be handled through a uniform interface. The concept of parametric polymorphism applies to both data types and functions. A function that can be evaluated on or applied to values of different types is known as a polymorphic function. A data type that can appear to be of a generalized type (e.g., a list with elements of arbitrary type) is designated a polymorphic data type, like the generalized type from which such specializations are made.

There are two fundamentally different kinds of polymorphism, originally described informally by Christopher Strachey in 1967. If the range of actual types that can be used is finite and the combinations must be specified individually prior to use, it is called ad-hoc polymorphism. If all code is written without mention of any specific type and thus can be used transparently with any number of new types, it is called parametric polymorphism. John C. Reynolds (and later Jean-Yves Girard) formally developed this notion of polymorphism as an extension to the lambda calculus (called the polymorphic lambda calculus, or System F).

In object-oriented programming, ad-hoc polymorphism is a concept in type theory wherein a name may denote instances of many different classes as long as they are related by some common superclass (Booch et al. 2007, Object-Oriented Analysis and Design with Applications, Addison-Wesley). Ad-hoc polymorphism is generally supported through object inheritance, i.e., objects of different types may be treated uniformly as members of a common superclass. Ad-hoc polymorphism is also supported in many languages through function and method overloading.

Parametric polymorphism is widely supported in statically typed functional programming languages. In the object-oriented programming community, programming using parametric polymorphism is often called "generic programming".

Polymorphism in strongly typed languages

Parametric polymorphism

Parametric polymorphism is a way to make a language more expressive while still maintaining full static type-safety. Using parametric polymorphism, a function or a data type can be written generically so that it can handle values "identically" without depending on their type (Pierce, B. C. 2002, Types and Programming Languages, MIT Press).

For example, a function append that joins two lists can be constructed so that it does not care about the type of elements: it can append lists of integers, lists of real numbers, lists of strings, and so on. Let the type variable a denote the type of elements in the lists. Then append can be given the type [a] × [a] → [a], where [a] denotes a list of elements of type a. We say that the type of append is parameterized by a for all values of a. (Note that since there is only one type variable, the function cannot be applied to just any pair of lists: the pair, as well as the result list, must consist of elements of the same type.) At each place where append is applied, a value is chosen for a.
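As an illustration, here is a minimal sketch of such a function in Java, using a generic method; the class and method names are made up for this example:

import java.util.ArrayList;
import java.util.List;

class AppendExample {
    // One type variable A parameterizes both arguments and the result,
    // mirroring the shape [a] × [a] → [a] described above.
    static <A> List<A> append(List<A> xs, List<A> ys) {
        List<A> result = new ArrayList<>(xs); // copy the first list
        result.addAll(ys);                    // then add every element of the second
        return result;
    }
}

At each call site the compiler chooses a concrete type for A, for example Integer when appending two lists of integers, or String when appending two lists of strings.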

Parametric polymorphism was first introduced to programming languages in ML in 1976. Today it exists in Standard ML, OCaml, Haskell, Visual Prolog and others. Java and C# have both recently introduced "generics" for parametric polymorphism.

The most general form of polymorphism is "higher-rank impredicative polymorphism". Two popular restrictions of this form are restricted rank polymorphism (for example, rank-1 or "prenex" polymorphism) and predicative polymorphism. Together, these restrictions give "predicative prenex polymorphism", which is essentially the form of polymorphism found in ML and early versions of Haskell.

Rank restrictions

Rank-1 (prenex) polymorphism

In a "prenex polymorphic" system, type variables may not be instantiated with polymorphic types. This is very similar to what is called "ML-style" or "Let-polymorphism" (technically ML's Let-polymorphism has a few other syntactic restrictions).

This restriction makes the distinction between polymorphic and non-polymorphic types very important; thus in predicative systems polymorphic types are sometimes referred to as "type schemas" to distinguish them from ordinary (monomorphic) types, which are sometimes called "monotypes". A consequence is that all types can be written in a form which places all quantifiers at the outermost (prenex) position.

For example, consider the append function described above, which has type [a] × [a] → [a]; in order to apply this function to a pair of lists, a type must be substituted for the variable a in the type of the function, such that the types of the arguments match the resulting function type. In an "impredicative" system, the type being substituted may be any type whatsoever, including a type that is itself polymorphic; thus append can be applied to pairs of lists with elements of any type, even to lists of polymorphic functions such as append itself.

Polymorphism in the language ML and its close relatives is predicative. This is because predicativity, together with other restrictions, makes the type system simple enough that type inference is possible. In languages where explicit type annotations are necessary when applying a polymorphic function, the predicativity restriction is less important; thus these languages are generally impredicative. Haskell manages to achieve type inference without predicativity but with a few complications.

Rank-"k" polymorphism

For some fixed value "k", rank-"k" polymorphism is a system in which a quantifier may not appear to the left of more than "k" arrows (when the type is drawn as a tree).

Type reconstruction for rank-2 polymorphism is decidable, but reconstruction for rank-3 and above is not.

Rank-"n" ("higher-rank") polymorphism

Rank-"n" polymorphism is polymorphism in which quantifiers may appear to the left of arbitrarily many arrows.

Predicativity restrictions

Predicative polymorphism

In a predicative parametric polymorphic system, a type τ containing a type variable α may not be used in such a way that α is instantiated to a polymorphic type.

Impredicative polymorphism ("first class" polymorphism)

Impredicative polymorphism, also called first-class polymorphism, allows the instantiation of a variable in a type τ with any type, including polymorphic types, such as τ itself.

In type theory, the most frequently studied impredicative typed λ-calculi are based on those of the lambda cube, especially System F. Predicative type theories include Martin-Löf Type Theory and NuPRL.

Bounded parametric polymorphism

Cardelli and Wegner recognized in 1985 the advantages of allowing "bounds" on the type parameters. Many operations require some knowledge of the data types but can otherwise work parametrically. For example, to check whether an item is included in a list, we need to compare the items for equality. In Standard ML, type parameters of the form ''a are restricted so that the equality operation is available; thus the function would have the type ''a × ''a list → bool, and ''a can only be a type with defined equality. In Haskell, bounding is achieved by requiring types to belong to a type class; thus the same function has the type Eq α ⇒ α → [α] → Bool in Haskell. In most object-oriented programming languages that support parametric polymorphism, parameters can be constrained to be subtypes of a given type (see Subtyping polymorphism below and the article on Generic programming).
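A rough Java analogue of such a bounded signature is sketched below; the names are made up for this example, and Comparable stands in for an equality constraint, since Java expresses bounds as subtype constraints on the type parameter:

import java.util.List;

class BoundedExample {
    // T is bounded so that comparison is available, playing a role similar to
    // ''a in Standard ML or the Eq α constraint in Haskell.
    static <T extends Comparable<T>> boolean contains(List<T> xs, T item) {
        for (T x : xs) {
            if (x.compareTo(item) == 0) {
                return true; // found an element equal to item
            }
        }
        return false;
    }
}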

Subtyping polymorphism (or inclusion polymorphism)

Some languages employ the idea of "subtypes" to restrict the range of types that can be used in a particular case of parametric polymorphism. In these languages, subtyping polymorphism (sometimes referred to as dynamic polymorphism) allows a function to be written to take an object of a certain type "T", but also work correctly if passed an object that belongs to a type "S" that is a subtype of "T" (according to the Liskov substitution principle). This type relation is sometimes written "S" <: "T". Conversely, "T" is said to be a "supertype" of "S"—written "T" :> "S".

For example, if Number, Rational, and Integer are types such that Number :> Rational and Number :> Integer, a function written to take a Number will work equally well when passed an Integer or Rational as when passed a Number. The actual type of the object can be hidden from clients in a black box, and accessed via object identity. (In fact, if the Number type is abstract, it may not even be possible to get hold of an object whose most-derived type is Number; see abstract data type, abstract class.) This particular kind of type hierarchy is known, especially in the context of the Scheme programming language, as a "numerical tower", and usually contains many more types.
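In Java, for instance, java.lang.Number really is a supertype of Integer and Double, so a minimal sketch of the idea (with a made-up method name) looks like this:

class SubtypingExample {
    // Accepts any Number; callers may pass an Integer, a Double, or any other subtype.
    static double twice(Number n) {
        return n.doubleValue() * 2.0;
    }

    public static void main(String[] args) {
        System.out.println(twice(Integer.valueOf(3)));   // works because Integer <: Number
        System.out.println(twice(Double.valueOf(3.14))); // works because Double <: Number
    }
}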

Object-oriented programming languages offer subtyping polymorphism using "subclassing" (also known as "inheritance"). In typical implementations, each class contains what is called a "virtual table"—a table of functions that implement the polymorphic part of the class interface—and each object contains a pointer to the "vtable" of its class, which is then consulted whenever a polymorphic method is called. This mechanism is an example of:
* "late binding", because virtual function calls are not bound until the time of invocation, and
* "single dispatch" (i.e., single-argument polymorphism), because virtual function calls are bound simply by looking through the vtable provided by the first argument (the this object), so the runtime types of the other arguments are completely irrelevant.The same goes for most other popular object systems. Some, however, such as CLOS, provide "multiple dispatch", under which method calls are polymorphic in "all" arguments.

Ad-hoc polymorphism

Strachey [C. Strachey, Fundamental concepts in programming languages. Lecture notes for International Summer School in Computer Programming, Copenhagen, August 1967] chose the term ad-hoc polymorphism to refer to polymorphic functions which can be applied to arguments of different types, but which behave differently depending on the type of the argument to which they are applied (also known as function overloading). The term "ad hoc" in this context is not intended to be pejorative; it refers simply to the fact that this type of polymorphism is not a fundamental feature of the type system.

Ad-hoc polymorphism is a dispatch mechanism: control moving through one named function is dispatched to various other functions without having to specify the exact function being called. Overloading allows multiple functions taking different types to be defined with the same name; the compiler or interpreter automatically calls the right one. This way, functions appending lists of integers, lists of strings, lists of real numbers, and so on could be written, and all be called append; the right append function would be called based on the type of the lists being appended. This differs from parametric polymorphism, in which the function would need to be written generically, to work with any kind of list. Using overloading, it is possible to have a function perform two completely different things based on the type of input passed to it; this is not possible with parametric polymorphism. Another way to look at overloading is that a routine is uniquely identified not by its name, but by the combination of its name and the number, order and types of its parameters.

This type of polymorphism is common in object-oriented programming languages, many of which allow operators to be overloaded in a manner similar to functions (see operator overloading). Some languages that are not dynamically typed and lack ad-hoc polymorphism (including type classes) have longer function names such as print_int, print_string, etc. This can be seen as an advantage (more descriptive) or a disadvantage (more long-winded), depending on one's point of view.

An advantage that is sometimes gained from overloading is the appearance of specialization, e.g., a function with the same name can be implemented in multiple different ways, each optimized for the particular data types that it operates on. This can provide a convenient interface for code that needs to be specialized to multiple situations for performance reasons.

Since overloading is done at compile time, it is not a substitute for late binding as found in subtyping polymorphism.

Example

This example aims to illustrate three different kinds of polymorphism described in this article. Though overloading an originally arithmetic operator to do a wide variety of things in this way may not be the most clear-cut example, it allows some subtle points to be made. In practice, the different types of polymorphism are not generally mixed up as much as they are here.

Imagine an operator + that may be used in the following ways:
1. 1 + 2 = 3
2. 3.14 + 0.0015 = 3.1415
3. 1 + 3.7 = 4.7
4. [1, 2, 3] + [4, 5, 6] = [1, 2, 3, 4, 5, 6]
5. [true, false] + [false, true] = [true, false, false, true]
6. "foo" + "bar" = "foobar"

Overloading

To handle these six function calls, four different pieces of code are needed—or "three", if strings are considered to be lists of characters:
* In the first case, integer addition must be invoked.
* In the second and third cases, floating-point addition must be invoked (with type promotion, or type coercion).
* In the fourth and fifth cases, list concatenation must be invoked.
* In the last case, string concatenation must be invoked, unless this too is handled as list concatenation (e.g., in Haskell, where strings are lists of characters).

Thus, the name + actually refers to three or four completely different functions. This is an example of "overloading".
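Java does not allow user-defined operator overloading, so as a hedged sketch the cases above can be modelled with an overloaded method named plus (a name made up for this example); the compiler selects an implementation from the static types of the arguments.

import java.util.ArrayList;
import java.util.List;

class PlusExample {
    static int plus(int a, int b) { return a + b; }            // case 1: integer addition

    static double plus(double a, double b) { return a + b; }   // cases 2 and 3: floating-point addition
                                                                // (in case 3 the int operand is promoted to double)

    static <A> List<A> plus(List<A> a, List<A> b) {             // cases 4 and 5: list concatenation
        List<A> result = new ArrayList<>(a);
        result.addAll(b);
        return result;
    }

    static String plus(String a, String b) { return a + b; }   // case 6: string concatenation
}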

Override polymorphism

Override polymorphism is an override of existing code: subclasses of existing classes are given a "replacement method" for one or more methods of the superclass. Code written against the superclass type will also invoke the replacement methods when dealing with objects of the subtype. The replacement method that a subclass provides has exactly the same signature as the original method in the superclass (return type, number and types of parameters, etc.).

Java API example: In Java, every class is a descendant of Object. Java's Object class has a method called toString() (Sun Microsystems, "Object class reference", http://java.sun.com/j2se/1.4.2/docs/api/java/lang/Object.html#toString()), which returns "a string representation of the object" (by default a reference value, which is mainly useful for debugging; see Ajit Sagar, "Debugging With the Java Object's toString() Method", http://www.devx.com/tips/Tip/5305).

"Object" is a superclass of "BigDecimal". Thus when implementing "BigDecimal", the author can override the method toString(), so toString() returns more meaningful information: a string representation of the value that is stored in this particular BigDecimal object. One therefore says "BigDecimal.toString() overrides Object.toString()".

Example

Object obj = new Object();
System.out.println(obj.toString());

java.math.BigDecimal decimal = new java.math.BigDecimal("0.0");
System.out.println(decimal.toString());

/*
java.math.BigDecimal objAsDec = new Object(); // illegal: an Object is not a BigDecimal
System.out.println(objAsDec.toString());
*/

Object decAsObj = new java.math.BigDecimal("1.0");
System.out.println(decAsObj.toString());

The output is:

java.lang.Object@86c347
0.0
1.0

In the first two cases, the toString() method that matches the class of the object is called: for obj this is Object's toString() method, and for decimal this is BigDecimal's toString() method.

The commented-out objAsDec declaration would give a compile-time error. If the compiler were to allow the creation of objAsDec, and it were later passed to a method that calls a BigDecimal method on it, there would be a problem: objAsDec is actually an Object, not a BigDecimal.

decAsObj is not illegal, and is a more interesting case. The static (declared) type of the decAsObj reference is Object, even though the object it refers to is a BigDecimal. Nevertheless, when toString() is called on it, BigDecimal's toString() method is the one that runs.

Imagine that decAsObj had been put in an Object[] array, which was then passed to a method that loops over the array calling toString() on each reference. When it came to decAsObj, the method actually invoked would not be the same one invoked for a plain Object such as obj. This shows that in Java it cannot, in general, be decided at compile time which particular method will be called; this is known as "dynamic binding".
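A minimal sketch of that scenario:

Object[] items = { new Object(), new java.math.BigDecimal("1.0") };
for (Object o : items) {
    // Which toString() runs is decided at run time from each element's actual class,
    // not from the static type Object of the reference o.
    System.out.println(o.toString());
}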

Parametric polymorphism

Finally, the reason we can concatenate lists of integers, lists of booleans, and lists of characters alike is that the function for list concatenation was written without any regard to the type of elements stored in the lists. This is an example of "parametric polymorphism". One could make up a thousand different new types of lists, and the generic list concatenation function would accept instances of them all without requiring any modification.

It can be argued, however, that this polymorphism is not really a property of the function per se: if the function is polymorphic, it is because the list data type itself is polymorphic. This is true, to an extent at least, but it is important to note that the function could just as well have been defined to take as its second argument an element to append to the list, instead of another list to concatenate to the first. If this were the case, the function would indisputably be parametrically polymorphic, because it could then not know anything about its second argument, except that its type must match the type of the elements of the list.
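A sketch of that variant in Java (again with made-up names):

import java.util.ArrayList;
import java.util.List;

class AppendElementExample {
    // The method knows nothing about A except that the type of the new element
    // must match the element type of the list.
    static <A> List<A> appendElement(List<A> xs, A x) {
        List<A> result = new ArrayList<>(xs);
        result.add(x);
        return result;
    }
}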

See also

* Polymorphism in object-oriented programming
* Duck typing for polymorphism without (static) types
* Polymorphic code (Computer virus terminology)
* System F for a lambda calculus with parametric polymorphism.
* Virtual inheritance

References

* Luca Cardelli, Peter Wegner. "On Understanding Types, Data Abstraction, and Polymorphism," from Computing Surveys, (December, 1985) [http://research.microsoft.com/Users/luca/Papers/OnUnderstanding.pdf]
* Philip Wadler, Stephen Blott. "How to make ad-hoc polymorphism less ad hoc," from Proc. 16th ACM Symposium on Principles of Programming Languages, (January, 1989) [http://citeseer.ist.psu.edu/wadler88how.html]
* Christopher Strachey. "Fundamental Concepts in Programming Languages," from Higher-Order and Symbolic Computation, (April, 2000) [http://scholar.google.com/scholar?q=Strachey+%22Fundamental+Concepts+in+Programming+Languages%22]
* Paul Hudak, John Peterson, Joseph Fasel. "A Gentle Introduction to Haskell Version 98". [http://www.haskell.org/tutorial/]
* Grady Booch, et al. "Object-Oriented Analysis and Design with Applications". Addison-Wesley, 2007.

External links

* [http://www.cplusplus.com/doc/tutorial/polymorphism.html C++ examples of polymorphism]
* [http://wiki.visual-prolog.com/index.php?title=Objects_and_Polymorphism Objects and Polymorphism (Visual Prolog)]

