Rethink the Campus: No Chassis Required for Future-Proof Networking

Today's campus networks are seeing continued explosive data growth driven by applications such as voice, video, collaboration, and virtual desktop. At the same time, employees expect access to corporate resources and the internet through smartphones, tablets, and other personal devices. This poses unique challenges for network administrators, who must support these applications 24x7x365. Every dimension of the network must scale across both wired and wireless, and users are no longer just employees, but customers, partners, vendors, and devices. Finally, all of this must be done at a dramatically lower cost than seems possible with current networking products and technologies. The result is a paradigm shift in how tomorrow's campus networks must be designed.

Traditional campus networks use a three-tier access-aggregation-core architecture, in which the aggregation (or distribution) layer, typically located in an intermediate distribution frame (IDF), funnels traffic from multiple access-layer switches to the network core. The campus core is typically located in a main distribution frame (MDF) and serves as the gateway between the campus and external networks. Because the majority of campus traffic passes through the core, it is vital that the core be highly available, predictable, and manageable. Most core switches were designed for an earlier era when Gigabit Ethernet was state of the art, so they deliver only a limited number of the 10 GbE ports that are becoming the new norm. While those port densities may once have been sufficient, constant network expansion means these devices have long outgrown their efficacy.

The requirements that drive the new aggregation/core layer are:

  • High availability
  • High performance and port density
  • Sustained switching capacity and forwarding even when half of the layer fails
  • No need to push L3 traffic to the edge
  • Ability to converge the core and aggregation layers into a single (core) layer, creating a two-tier model
  • Improved TCO

The Dell PowerConnect 8100 series switches address all of these requirements in fixed-port form factors, enabling Dell Networking to support build-outs for both present and future campus networks. Combining hardware high availability (redundant, hot-swappable fans and power supplies) with features such as fast failover and non-stop forwarding, PowerConnect 8100 switches deliver true hardware stacking and resiliency. Compared with virtual switches, this approach delivers failover times in milliseconds rather than seconds. PowerConnect 8100 switches provide high port density, with up to 64 10 GbE ports in a 1 RU form factor, and offer higher backplane capacity than traditional chassis-based aggregation/core switches. They also support local preference for link aggregation groups (LAGs), in which one or more physical links of the same speed are aggregated into a single logical link. This is especially useful when a LAG is split across stack units: it eliminates the need to push L3 traffic to the edge while achieving active-active, non-stop networking at the aggregation layer.
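The local-preference behavior described above can be sketched in general terms. The following Python snippet is an illustrative model only (not Dell firmware logic, and the function and port names are hypothetical) of how a stack unit might prefer its own LAG member ports when hashing egress traffic:

```python
import hashlib

def select_lag_member(flow_key, members, local_unit):
    """Pick an egress port for a flow from a LAG's member ports.

    members: list of (stack_unit, port_name) tuples in the LAG.
    With local preference, ports on the local stack unit are used
    whenever any are up, so traffic avoids crossing the stack links.
    """
    local = [m for m in members if m[0] == local_unit]
    candidates = local if local else members  # fall back to any member
    # Hash the flow key so each flow sticks to one candidate port.
    digest = hashlib.sha256(flow_key.encode()).digest()
    return candidates[digest[0] % len(candidates)]

# Hypothetical example: a LAG split across stack units 1 and 2.
lag = [(1, "Te1/0/1"), (2, "Te2/0/1")]
port = select_lag_member("10.0.0.1->10.0.0.2", lag, local_unit=1)
```

When no local member is available, the hash falls back to the full member set, so connectivity is preserved at the cost of traffic crossing the stacking links.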

To see the features of the PowerConnect 8100 series switches in more detail, check out this overview video.

Finally, campus networks designed with the PowerConnect 8100 deliver much better TCO than chassis-based architectures, with roughly half the CapEx investment and a significant OpEx reduction thanks to one-sixth the power requirements. In addition, modularity that supports 40 GbE ports allows PowerConnect customers to future-proof their campus networks as needed, without having to dig deeper into their pockets upfront.
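To make the arithmetic concrete, here is a minimal TCO sketch. The dollar and wattage figures are hypothetical placeholders; only the half-CapEx and one-sixth-power ratios come from the comparison above:

```python
def tco(capex, power_watts, years=5, kwh_cost=0.10):
    """Simple TCO model: purchase cost plus energy cost over the period."""
    hours = years * 365 * 24
    energy_opex = power_watts / 1000 * hours * kwh_cost
    return capex + energy_opex

# Hypothetical figures: a fixed-port build at half the CapEx and
# one-sixth the power draw of a chassis-based build.
chassis = tco(capex=100_000, power_watts=3_000)
fixed_port = tco(capex=50_000, power_watts=500)
```

A fuller model would also include support contracts, cooling, and rack space, but even this simple version shows how the purchase-price and power gaps compound over a multi-year horizon.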

Although there may be specific applications and use cases where chassis-based aggregation/core products are still needed in campus networks, the majority of new campus designs can be built with fixed-port products that support the same feature set at a much lower price point.

For more information, please visit the PowerConnect 8100 page on Dell TechCenter and continue the conversation on Twitter by following @DellNetworking.

About the Author: Ashish Malpani