Memory-level parallelism

Memory-level parallelism (MLP) is a term in computer architecture referring to the ability to have multiple memory operations pending at the same time, in particular cache misses or translation lookaside buffer (TLB) misses.
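
A minimal C sketch of the idea, assuming a generic core that can track several outstanding cache misses (the function names, the array-of-indices gather, and the linked-list layout are illustrative assumptions, not from the article): independent loads let several misses overlap, while dependent loads serialize them.

    #include <stddef.h>

    /* Independent loads: a[idx[i]] does not depend on any other iteration,
     * so hardware that tracks several outstanding cache misses can overlap
     * them (high MLP). */
    long sum_gather(const long *a, const size_t *idx, size_t n) {
        long sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += a[idx[i]];              /* misses may be in flight together */
        return sum;
    }

    /* Dependent loads: each load needs the previous one's result, so the
     * misses are serialized and the effective MLP is one. */
    struct node { long value; struct node *next; };

    long sum_chase(const struct node *p) {
        long sum = 0;
        while (p) {
            sum += p->value;               /* next address unknown until this miss returns */
            p = p->next;
        }
        return sum;
    }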

In a single processor, MLP may be considered a form of instruction-level parallelism (ILP). However, ILP is often conflated with superscalar execution, the ability to execute more than one instruction at the same time. For example, a processor such as the Intel Pentium Pro is five-way superscalar, able to begin executing five different microinstructions in a given cycle, yet it can handle four different cache misses for up to 20 different load microinstructions at any time.
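
To make the distinction concrete, here is a hedged C sketch of code written to exploit miss-handling capacity rather than issue width; the four-way interleaving simply mirrors the four outstanding misses mentioned above and is an assumption for illustration, not a description of any particular processor:

    struct node { long value; struct node *next; };

    /* Walking four independent linked lists in an interleaved fashion lets
     * hardware that can track several outstanding cache misses overlap them,
     * even though each individual chain is strictly serial. */
    long sum_four_chains(const struct node *a, const struct node *b,
                         const struct node *c, const struct node *d) {
        long sum = 0;
        while (a || b || c || d) {
            if (a) { sum += a->value; a = a->next; }   /* up to four misses   */
            if (b) { sum += b->value; b = b->next; }   /* can be outstanding  */
            if (c) { sum += c->value; c = c->next; }   /* at the same time    */
            if (d) { sum += d->value; d = d->next; }
        }
        return sum;
    }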

It is possible to have a machine that is not superscalar but which nevertheless has high MLP.

Arguably, a machine that has no ILP, that is not superscalar, and that executes one instruction at a time in a non-pipelined manner, but that performs hardware prefetching (as opposed to software, instruction-level prefetching), exhibits MLP (due to multiple outstanding prefetches) but not ILP. This is because multiple memory operations are outstanding, but not multiple instructions; instructions are often conflated with operations.
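
As a sketch of the kind of access pattern such a machine could overlap (whether a given hardware prefetcher actually recognizes it is an assumption about the implementation, not a claim from the article), consider a simple fixed-stride scan in C:

    #include <stddef.h>

    /* A sequential, fixed-stride scan. Even on a core that executes one
     * instruction at a time, a hardware stride prefetcher that detects this
     * pattern can keep prefetches to several upcoming cache lines outstanding,
     * overlapping memory latency (MLP) without any ILP. */
    long sum_stream(const long *a, size_t n) {
        long sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += a[i];                   /* addresses advance by a constant stride */
        return sum;
    }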

Furthermore, multiprocessor and multithreaded computer systems may be said to exhibit MLP and ILP due to parallelism across threads and processes, but not intra-thread, single-process ILP and MLP. Often, however, the terms MLP and ILP are restricted to refer to extracting such parallelism from what appears to be non-parallel single-threaded code.
