Traditional three-tier data center design

The architecture consists of core routers, aggregation routers (sometimes called distribution routers), and access switches. Between the aggregation routers and access switches, Spanning Tree Protocol is used to build a loop-free topology for the Layer 2 part of the network. Spanning Tree Protocol provides several benefits: it is simple, and it is a plug-and-play technology requiring little configuration. VLANs are extended within each pod so that servers can move freely within the pod without the need to change IP address and default gateway configurations. However, Spanning Tree Protocol cannot use parallel forwarding paths, and it always blocks redundant paths in a VLAN.

In 2010, Cisco introduced virtual-port-channel (vPC) technology to overcome the limitations of Spanning Tree Protocol. vPC eliminates the spanning-tree blocked ports, provides active-active uplinks from the access switches to the aggregation routers, and makes full use of the available bandwidth, as shown in Figure 2. With vPC technology, Spanning Tree Protocol is still used as a fail-safe mechanism.

Data center design with extended Layer 3 domain

vPC technology works well in a relatively small data center environment in which most traffic consists of northbound and southbound communication between clients and servers. With Layer 2 segments extended across all the pods, the data center administrator can create a central, more flexible resource pool that can be reallocated based on needs. Servers are virtualized into sets of virtual machines that can move freely from server to server without the need to change their operating parameters.

With virtualized servers, applications are increasingly deployed in a distributed fashion, which leads to increased east-west traffic. This traffic needs to be handled efficiently, with low and predictable latency. However, vPC can provide only two active parallel uplinks, so bandwidth becomes a bottleneck in a three-tier data center architecture. Another challenge in a three-tier architecture is that server-to-server latency varies depending on the traffic path used.

To overcome these limitations, a new data center design called the Clos network-based spine-and-leaf architecture was developed. This architecture has been proven to deliver high-bandwidth, low-latency, nonblocking server-to-server connectivity. Figure 4 shows a typical two-tiered spine-and-leaf topology. In this two-tier Clos architecture, every lower-tier switch (leaf layer) is connected to each of the top-tier switches (spine layer) in a full-mesh topology. The leaf layer consists of access switches that connect to devices such as servers. The spine layer is the backbone of the network and is responsible for interconnecting all leaf switches. Every leaf switch connects to every spine switch in the fabric, and the path is randomly chosen so that the traffic load is evenly distributed among the top-tier switches. If one of the top-tier switches were to fail, performance throughout the data center would degrade only slightly.

If oversubscription of a link occurs (that is, if more traffic is generated than can be aggregated on the active link at one time), the process for expanding capacity is straightforward: an additional spine switch can be added, and uplinks can be extended to every leaf switch, adding interlayer bandwidth and reducing the oversubscription.
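To make the oversubscription arithmetic discussed above concrete, here is a minimal sketch. The leaf profile (48 x 10-Gbps server-facing ports, one 40-Gbps uplink per spine) is an illustrative assumption, not a figure from the source:

```python
def oversubscription_ratio(server_ports, server_gbps, spines, uplink_gbps):
    """Downstream (server-facing) bandwidth divided by upstream
    (leaf-to-spine) bandwidth for one leaf switch."""
    downstream = server_ports * server_gbps   # total Gbps toward servers
    upstream = spines * uplink_gbps           # total Gbps toward spines
    return downstream / upstream

# Hypothetical leaf: 48 x 10G down, 4 spines x 40G up -> 480/160 = 3.0 (3:1)
print(oversubscription_ratio(48, 10, 4, 40))  # 3.0

# Adding two spine switches (6 x 40G up) lowers it to 480/240 = 2.0 (2:1)
print(oversubscription_ratio(48, 10, 6, 40))  # 2.0
```

This is why capacity expansion in a spine-and-leaf fabric is straightforward: each added spine contributes one more uplink's worth of interlayer bandwidth to every leaf, directly shrinking the ratio.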
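The claim that randomly chosen paths spread traffic evenly across the top-tier switches can be illustrated with a small ECMP-style simulation. The flow tuples and the hash function below are assumptions for illustration; real switches use their own hardware hashing over packet header fields:

```python
import hashlib
from collections import Counter

def pick_spine(flow, n_spines):
    """Hash a flow's 5-tuple to choose an uplink: every packet of one flow
    takes the same spine, while distinct flows spread across all spines."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_spines

# 10,000 synthetic flows (src IP, dst IP, protocol, src port, dst port)
flows = [("10.0.0.1", "10.0.1.2", 6, 40000 + i, 443) for i in range(10_000)]
load = Counter(pick_spine(f, 4) for f in flows)
print(sorted(load.items()))  # each of the 4 spines carries roughly 2,500 flows
```

Because the choice is deterministic per flow, packets within a flow stay in order, and because it is effectively random across flows, the load lands almost evenly on every spine, which is also why losing one spine only slightly degrades overall performance.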