Data Center Ethernet

Data Center Ethernet (also known as Converged Enhanced Ethernet) refers to an enhanced Ethernet intended to enable the convergence of the various data center application networks (LAN, SAN, and HPC) onto a single interconnect technology.

Today, data centers deploy separate networks, each based on a distinct interconnect technology, to carry the traffic of different applications: storage traffic is transported over a Fibre Channel-based SAN or InfiniBand, client-server application traffic is handled by an Ethernet-based LAN, and server-to-server inter-process communication (IPC) may be supported over one of several interconnects such as InfiniBand or Myrinet. A typical server in a high-performance data center therefore has multiple interfaces (Ethernet, Fibre Channel, InfiniBand) so that it can be connected to these disparate networks.

As data centers become bigger and more complex, managing a different interconnect technology for each application's traffic is becoming cost- and resource-intensive. With recent advances in Ethernet speeds (10 Gbit/s is already standard, and 40 Gbit/s and 100 Gbit/s are in development), Ethernet has become an attractive choice as the convergence technology for the data center.

Another motivating factor for convergence is the consolidation of servers brought about by the advent of blade servers. Today, blade-server backplanes must be designed to support multiple interconnect technologies. Using a single interconnect technology such as Ethernet simplifies backplane design, thereby reducing overall cost and power consumption.

However, current standards-based Ethernet networks cannot provide the service required by storage and high-performance computing traffic. To understand the current limitations and the enhancements required, one needs to consider the state of Ethernet today, the high-level requirements of a converged data center network, the enhancements needed to current Ethernet, and the relevant standards.

State of Ethernet Circa 2007

* Ethernet networks do not provide a lossless transport. Technically, 802.3x PAUSE can be used to prevent frame loss, but because it stops all traffic on the link (including control traffic and traffic that can tolerate loss), it is usually turned off.
* Ethernet switches based on the IEEE 802.1Q standard use static (strict) priority for scheduling traffic, which works well for current LAN environments (control > voice > data) but can potentially cause starvation of lower priorities (see the sketch after this list). Moreover, the scheduler provides neither minimum bandwidth guarantees nor maximum bandwidth limits, so it does not allow control over how bandwidth is shared across traffic classes.
* Ethernet bridged LANs typically employ one of the variants of the spanning tree protocol (STP, RSTP, or MSTP). As a result, the path from a source to a destination is not always the shortest one, and equal-cost multipath (ECMP) forwarding is not supported.
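
The starvation risk of strict-priority scheduling can be illustrated with a small simulation. The sketch below is only illustrative; the priority values, queue contents, and frame names are hypothetical and not taken from any standard. As long as a higher-priority queue holds frames, lower-priority queues are never served.

```python
from collections import deque

def strict_priority_dequeue(queues):
    """Serve the highest-priority non-empty queue (802.1Q-style strict priority)."""
    for prio in sorted(queues, reverse=True):  # highest priority first
        if queues[prio]:
            return prio, queues[prio].popleft()
    return None, None

# Hypothetical traffic mix: the high-priority queue is backlogged, so the
# low-priority queue is never served until the high-priority queue drains.
queues = {7: deque(f"ctrl-{i}" for i in range(5)),
          0: deque(f"data-{i}" for i in range(5))}

order = []
while any(queues.values()):
    order.append(strict_priority_dequeue(queues))

print(order)  # every ctrl-* frame is sent before any data-* frame
```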

Converged Data Center Network Requirements

* Applications developed for existing storage networks demand a low-latency, lossless network.
* High-performance computing nodes need a very high throughput, low-latency, lossless network for server-to-server communication.
* Client-server applications need a scalable TCP/IP-friendly network.
* Storage and HPC networks make extensive use of ECMP to maximize use of network resources.

Enhancements to Ethernet Required

* Data Center Ethernet needs a more flexible scheduling algorithm that allows bandwidth to be shared between lossy and lossless traffic classes while still achieving traffic differentiation (a sketch of such a scheduler follows this list).
* A combination of link-level flow control and end-to-end congestion management is required to achieve lossless behavior. In the absence of end-to-end congestion management, link-level flow control can lead to congestion spreading and deadlock. Link-level flow control also needs to be enhanced to operate per priority.
* The Ethernet switch control plane needs to adopt protocols and algorithms that achieve shortest-path forwarding and ECMP.
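
As a concrete illustration of such a scheduler, the following sketch implements a simple deficit-weighted-round-robin dequeue that divides link bandwidth between traffic classes in proportion to configured weights. The class names, weights, and frame sizes are assumptions made for the example; the actual mechanisms are defined by the standards efforts listed in the next section.

```python
from collections import deque

def dwrr_dequeue(queues, weights, quantum=1500):
    """One deficit-weighted-round-robin run: each class may send frames up to
    its share (weight * quantum bytes) per round, so no class is starved and
    bandwidth is divided roughly in proportion to the configured weights."""
    sent = []
    deficits = {cls: 0 for cls in queues}
    while any(queues.values()):
        for cls, q in queues.items():
            deficits[cls] += weights[cls] * quantum
            while q and q[0][1] <= deficits[cls]:
                frame, size = q.popleft()
                deficits[cls] -= size
                sent.append((cls, frame))
            if not q:
                deficits[cls] = 0  # an idle class does not bank credit
    return sent

# Hypothetical converged link: LAN gets 40% and lossless storage 60% of bandwidth.
queues = {"lan":     deque((f"lan-{i}", 1500) for i in range(6)),
          "storage": deque((f"fcoe-{i}", 1500) for i in range(6))}
weights = {"lan": 0.4, "storage": 0.6}

for cls, frame in dwrr_dequeue(queues, weights):
    print(cls, frame)
```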

Relevant Ethernet Standards for Supporting Data Center Ethernet

* IEEE 802.1p/Q – Eight traffic classes for priority-based forwarding.
* [http://www.ieee802.org/1/pages/802.1au.html IEEE 802.1Qau] – End-to-end congestion management.
* IEEE 802.3x – A PAUSE mechanism providing on/off link level flow control.
* [http://www.ieee802.org/1/pages/802.1aq.html IEEE 802.1aq] – Shortest path bridging.
* IETF [http://www.ietf.org/html.charters/trill-charter.html TRILL] – Transparent Interconnection of Lots of Links.
* T11 FCoE – Fibre Channel over Ethernet. This effort maps the existing Fibre Channel protocols onto Ethernet so that servers can access Fibre Channel storage over an Ethernet network.
* IEEE New – IEEE 802.1 is currently investigating enhanced transmission selection, which would provide more sophisticated controls for sharing bandwidth between traffic classes, as well as per-priority link-level flow control (a sketch of per-priority pausing follows this list).
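
To make the contrast with 802.3x PAUSE concrete, the sketch below models per-priority pausing at a transmitter: a pause applies only to the indicated traffic class, so a lossless class (for example, storage) can be held back while ordinary LAN traffic keeps flowing. The priority values, class roles, and the simple boolean pause state are illustrative assumptions and do not reflect the frame format of any IEEE standard.

```python
from collections import deque

class PerPriorityTx:
    """Transmit-side model of per-priority link-level flow control:
    a pause received for one priority stops only that priority's queue."""
    def __init__(self, priorities):
        self.queues = {p: deque() for p in priorities}
        self.paused = {p: False for p in priorities}

    def enqueue(self, prio, frame):
        self.queues[prio].append(frame)

    def on_pause(self, prio, pause):
        # With classic 802.3x PAUSE the whole link would stop;
        # here only the indicated priority is held back.
        self.paused[prio] = pause

    def transmit_one(self):
        for prio in sorted(self.queues, reverse=True):
            if not self.paused[prio] and self.queues[prio]:
                return prio, self.queues[prio].popleft()
        return None  # nothing eligible to send

tx = PerPriorityTx(priorities=[3, 0])   # e.g. 3 = lossless storage, 0 = LAN
tx.enqueue(3, "fcoe-frame")
tx.enqueue(0, "lan-frame")
tx.on_pause(3, True)                    # receiver's class-3 buffer is filling up
print(tx.transmit_one())                # (0, 'lan-frame'): LAN traffic still flows
```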

Companies Actively Contributing to the Development of Data Center Ethernet

The following companies develop products and actively participate in standards efforts related to Data Center Ethernet: Brocade, Cisco, EMC, Emulex, Force10 Networks, Fujitsu, IBM, Intel, Mellanox, [http://www.myricom.com Myricom], Nuova Systems, [http://www.sun.com Sun Microsystems], Teak Technologies, and Woven Systems.

