Computer performance

Computer performance is characterized by the amount of useful work accomplished by a computer system compared to the time and resources used.

Depending on the context, good computer performance may involve one or more of the following:

- Short response time for a given piece of work
- High throughput (rate of processing work)
- Low utilization of computing resources
- High availability of the computing system or application
- Fast (or highly compact) data compression and decompression
- High bandwidth and short data transmission time

Performance metrics

Computer performance metrics include availability, response time, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, performance per watt, compression ratio, instruction path length, and speedup. CPU benchmarks are available.[1]
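Several of these metrics can be measured directly. The following Python sketch (not part of the original article; the workload function is a hypothetical stand-in) illustrates how response time and throughput might be estimated for a repeatable task:

    import time

    def workload():
        # Hypothetical stand-in for the operation being measured.
        return sum(i * i for i in range(10_000))

    runs = 200
    latencies = []
    start = time.perf_counter()
    for _ in range(runs):
        t0 = time.perf_counter()
        workload()
        latencies.append(time.perf_counter() - t0)
    total = time.perf_counter() - start

    # Response time: how long one piece of work takes; throughput: work completed per second.
    print(f"mean response time: {sum(latencies) / runs * 1000:.3f} ms")
    print(f"throughput: {runs / total:.1f} operations per second")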

Aspect of software quality

Computer software performance, particularly software application response time, is an aspect of software quality that is important in human–computer interactions.

Technical and non-technical definitions

The performance of any computer system can be evaluated in measurable, technical terms, using one or more of the metrics listed above. In this way, the performance can be

- compared relative to other systems or the same system before/after changes
- defined in absolute terms, e.g. for fulfilling a contractual obligation

Whilst the above definition relates to a scientific, technical approach, the following definition given by Arnold Allen would be useful for a non-technical audience:

The word performance in computer performance means the same thing that performance means in other contexts, that is, it means "How well is the computer doing the work it is supposed to do?"[2]

Technical performance metrics

There are a wide variety of technical performance metrics that indirectly affect overall computer performance.

Because there are too many programs to test a CPU's speed on all of them, benchmarks were developed. The most famous benchmarks are the SPECint and SPECfp benchmarks developed by the Standard Performance Evaluation Corporation (SPEC) and the ConsumerMark benchmark developed by the Embedded Microprocessor Benchmark Consortium (EEMBC).

Some important measurements include:

  • Instructions per second – Most consumers pick a computer architecture (normally the Intel IA-32 architecture) to be able to run a large base of pre-existing, pre-compiled software. Being relatively uninformed on computer benchmarks, some of them pick a particular CPU based on operating frequency (see megahertz myth).
  • FLOPS – The number of floating-point operations per second is often important in selecting computers for scientific computations; a rough measurement sketch follows this list.
  • Performance per watt – System designers building parallel computers, such as Google, pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself.
  • Some system designers building parallel computers pick CPUs based on the speed per dollar.
  • System designers building real-time computing systems want to guarantee worst-case response times. That is easier to do when the CPU has low interrupt latency and deterministic response, as is typical of digital signal processors (DSPs).
  • Computer programmers who program directly in assembly language want a CPU to support a full-featured instruction set.
  • Low power – For systems with limited power sources (e.g. solar, batteries, human power).
  • Small size or low weight – for portable embedded systems and systems for spacecraft.
  • Environmental impact – Minimizing the environmental impact of computers during manufacturing, use, and recycling, for example by reducing waste and hazardous materials (see green computing).
  • Giga-updates per second (GUPS) – a measure of how frequently the RAM can be updated.
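As a rough illustration of the FLOPS metric mentioned above, the following Python sketch (an illustrative example, not from the original article) times a loop of floating-point multiply-adds; in an interpreted language the result mostly reflects interpreter overhead rather than the hardware's peak FLOPS:

    import time

    def estimate_flops(n=5_000_000):
        # Each loop iteration performs one multiply and one add: 2 floating-point operations.
        x = 1.0000001
        acc = 0.0
        start = time.perf_counter()
        for _ in range(n):
            acc = acc * x + 1.0
        elapsed = time.perf_counter() - start
        return (2 * n) / elapsed

    print(f"rough floating-point operation rate: {estimate_flops():,.0f} FLOPS")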

Occasionally a CPU designer can find a way to make a CPU with better overall performance by improving one of these technical performance metrics without sacrificing any other (relevant) technical performance metric—for example, building the CPU out of better, faster transistors. However, sometimes pushing one technical performance metric to an extreme leads to a CPU with worse overall performance, because other important technical performance metrics were sacrificed to get one impressive-looking number—for example, the megahertz myth.

The total amount of time (t) required to execute a particular benchmark program is

t = N * C / f

where

  • N is the number of instructions actually executed (the instruction path length). The code density of the instruction set strongly affects N. The value of N can either be determined exactly by using an instruction set simulator (if available) or by estimation, itself based partly on the estimated or actual frequency distribution of input variables and on examining the machine code generated by an HLL compiler. It cannot be determined from the number of lines of HLL source code. N is not affected by other processes running on the same processor. The significant point here is that hardware normally does not keep track of (or at least does not make easily available) a value of N for executed programs. The value can therefore only be accurately determined by instruction set simulation, which is rarely practiced.
  • C is the average number of clock cycles per instruction (CPI) for this benchmark.
  • f is the clock frequency in cycles per second.
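For instance, with purely illustrative (made-up) values, the formula can be evaluated directly; the sketch below assumes a 3 GHz clock, two billion executed instructions, and an average CPI of 1.5:

    # Worked example of t = N * C / f with illustrative numbers.
    N = 2_000_000_000   # instructions executed (instruction path length)
    C = 1.5             # average clock cycles per instruction (CPI)
    f = 3.0e9           # clock frequency in hertz (3 GHz)

    t = N * C / f
    print(f"execution time: {t:.2f} s")   # prints 1.00 s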

Even on one machine, a different compiler, or the same compiler with different optimization switches, can change N and CPI: the benchmark executes faster if the new compiler can improve N or C without making the other worse, but often there is a trade-off between them. Is it better, for example, to use a few complicated instructions that take a long time to execute, or many instructions that execute very quickly, even though more of them are needed to run the benchmark?
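A small numeric sketch (hypothetical numbers, not from the original article) makes the trade-off concrete: at a fixed clock frequency, a program compiled into more but simpler instructions can still finish sooner if the average CPI drops enough.

    f = 2.0e9  # same 2 GHz clock in both cases

    # Case A: fewer, more complex instructions, but higher average CPI.
    N_a, C_a = 1.0e9, 2.0
    # Case B: 60% more instructions, each simpler and faster on average.
    N_b, C_b = 1.6e9, 1.1

    t_a = N_a * C_a / f   # 1.00 s
    t_b = N_b * C_b / f   # 0.88 s, so case B wins despite executing more instructions
    print(f"case A: {t_a:.2f} s, case B: {t_b:.2f} s")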

A CPU designer is often required to implement a particular instruction set, and so cannot change N. Sometimes a designer focuses on improving performance by making significant improvements in f (with techniques such as deeper pipelines and faster caches), while (hopefully) not sacrificing too much C—leading to a speed-demon CPU design. Sometimes a designer focuses on improving performance by making significant improvements in CPI (with techniques such as out-of-order execution, superscalar CPUs, larger caches, caches with improved hit rates, improved branch prediction, speculative execution, etc.), while (hopefully) not sacrificing too much clock frequency—leading to a brainiac CPU design.[3]
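The same formula illustrates the two design styles; the numbers below are hypothetical, chosen only to show that either a higher f or a lower CPI can deliver the shorter execution time for a fixed instruction count N:

    N = 1.0e9  # same benchmark, same instruction count for both designs

    # "Speed-demon": very high clock frequency, modest CPI.
    t_speed_demon = N * 1.8 / 4.0e9   # 0.45 s
    # "Brainiac": lower clock frequency, but much lower CPI.
    t_brainiac = N * 0.9 / 2.5e9      # 0.36 s

    print(f"speed-demon: {t_speed_demon:.2f} s, brainiac: {t_brainiac:.2f} s")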

References

  1. "Measuring Program Similarity: Experiments with SPEC CPU Benchmark Suites", http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.501&rep=rep1&type=pdf
  2. Arnold O. Allen, Computer Performance Analysis with Mathematica, Academic Press, 1994, §1.1 Introduction, p. 1.
  3. Linley Gwennap, "Brainiacs, Speed Demons, and Farewell", 1999.
