Building a Data Center Fabric: Junos Fusion vs Cisco FEX

Last month at Networking Field Day 10 (NFD10), Juniper presented its Junos Fusion solution, which simplifies the data center by giving you a single pane of glass for managing all the switches in the fabric and letting you upgrade them from a central interface.
 
With Junos Fusion, all the access switches (called satellite devices) are managed from one aggregation device or a pair of them. The access devices can be either EX 4300 or QFX 5100 series switches; they run a Wind River Linux distribution, known as the Linux Forwarding Operating System (LFOS), which is decoupled from the Junos operating system running on the aggregation devices. The aggregation devices are the new QFX 10000 series and run classic Junos. Juniper uses LLDP between the aggregation and access devices for autodiscovery and provisioning, and IEEE 802.1BR for configuring and monitoring the ports. NETCONF is used between the aggregation devices to keep their configurations in sync.
 
The feature itself is not new; Juniper MX edge routers have supported it for a while. Juniper just extended the support to its data center switches this year.
 
From an operational perspective, Junos Fusion is very similar to Cisco Fabric Extender (FEX). Both make the fabric look from the outside like one big switch with a single IP address.

So if you are building a data center fabric, should you go with Junos Fusion or Cisco FEX? There are a few things to consider when comparing the two architectures:
 
Port Density: How many server ports do you need? Both the Junos Fusion and Cisco FEX architectures today support up to 64 access switches per fabric. The Nexus 2200 (FEX) has only 48 extended (server) ports, which gives you a total of 3,072 (64 x 48) ports per fabric, while the QFX 5100-96S has 96 server ports, which gives you a maximum of 6,144 (64 x 96) ports per fabric. The Junos Fusion architecture clearly scales better when it comes to port density.
 
Support For Local Switching & Other Features In the Access Layer: The Cisco Nexus 2200 has no brain and therefore no support for local switching, VLAN tagging, or any other features you typically see in an access switch. It's an "extender" and does not have ASICs to switch traffic locally. The QFX 5100/EX 4300, on the other hand, are full-blown switches with ASICs and intelligent software, and they support all the features mentioned above and more. L3 routing is not supported today on the QFX 5100/EX 4300 in Fusion mode; however, Juniper stated that this feature is on the roadmap.
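To make the "no brain" point concrete: with FEX, every bit of configuration lives on the parent switch. Here is a minimal NX-OS sketch of associating a FEX with a Nexus 5500 parent (the FEX number, VLAN, and interface numbers are made up for illustration):

    feature fex
    ! fabric uplinks from the parent switch to the FEX
    interface port-channel100
      switchport mode fex-fabric
      fex associate 100
    interface ethernet1/1-2
      channel-group 100
    ! once the FEX is online, its host ports appear on the parent
    ! as regular interfaces in fex/slot/port format
    interface ethernet100/1/1
      switchport access vlan 10

The FEX itself holds no configuration of its own; replace the unit and the new one inherits everything from the parent.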
 
The need for local switching is a good debate to have. Some people argue that the Nexus 2200 is not a good fit for the data center because it cannot do local switching; however, this is not a fair assessment in my opinion. Traffic patterns in the data center depend heavily on the type of workload. Some workloads, like Hadoop, generate heavy east-west traffic within the same VLAN, and in that case it's recommended to keep all the server nodes on the same ToR to switch traffic locally and avoid congesting the uplinks. However, many other workloads (namely web applications) don't generate heavy east-west traffic within the same VLAN.
 
The other thing to keep in mind is that with server virtualization the edge of the network is moving to the hypervisor, and much of that intra-VLAN traffic is switched in the kernel by the hypervisor without ever leaving the physical host, which makes local switching in the ToR less important. Even inter-VLAN traffic can now be routed without leaving the physical host if you have a distributed virtual firewall/router.
 
Ivan Pepelnjak has a nice blog post on the need for distributed switching in the Nexus 2000.
 
Cost: This is where the Nexus 2200 really shines. Because it's an extender and does not have full software/hardware capabilities, it's very affordable and can reduce your CapEx substantially.
 

My Take: 

Both the Junos Fusion and Cisco FEX architectures simplify managing data center networks. When comparing the two solutions, examine your workload requirements, determine how intelligent your ToRs need to be, and from there you can decide which solution works best for you.

Here is the Junos Fusion presentation from NFD10:

 


Your Turn Now
 
What are your thoughts on this? Have you deployed either solution? I want to hear from you.

Disclaimer: I attended Networking Field Day 10 as a delegate. Vendors sponsoring the event indirectly covered my travel expenses; however, I'm not required to write about their products or about the event. If I do write something, it's because I want to express my opinions.



Cisco Nexus 7000 FAQ

Nexus 7000

Q. Is the FAB1 module supported with SUP2 or SUP2E?

A. Yes, supported with both supervisors.

Q. What minimum software release do I need to support SUP2 or SUP2E?

A. NX-OS 6.1

Q. Can I still run NX-OS version 6.0 on SUP1?

A. Yes.

Q. Can I upgrade SUP2 to SUP2E?

A. Yes. You would need to upgrade both the CPU and memory on board.

[8/14/2014] update: after further investigation I found that the answer is no (upgrade is not possible). 

Q. I need to enable high availability (HA). Can I use one SUP1 with one SUP2 in the same chassis?

A. No, for high availability the two supervisors must be of the same type, so you would need to use either SUP1/SUP1 or SUP2/SUP2.

Q. How many I/O modules can I have in a 7004?

A. Maximum of 2. The other 2 slots are reserved for the supervisors and you cannot use them for I/O modules.

Q. FAB1 or FAB2 on 7004?

A. The Nexus 7004 chassis does not actually use any fabric modules; the I/O modules are connected back to back.

Q. How many FEXs can the Nexus 7000 support?

A. 32 FEXs with SUP1 or SUP2; 48 FEXs with SUP2E.
[8/14/2014] update: 64 FEXs with SUP2 or SUP2E.

Q. How many VDCs can the Nexus 7000 support?

A. 4 VDCs (including 1 VDC for management) with SUP1 or SUP2, and 8 + 1 (management) VDCs with SUP2E.

Q. Which modules support FabricPath, FCoE, and FEX connectivity?

A. FabricPath is supported on all F1 and F2 modules. FCoE is supported on all F1 and F2 modules except the 48 x 10GE F2 copper module. FEX is supported on all F2 modules. Use this link from Cisco as a reference.

[8/14/2014] update: The F2e module supports FCoE, FEX, and FabricPath. The F3 module (12-port 40GE) supports FEX, FabricPath, FCoE, OTV, MPLS, and LISP.

Q. Which modules support LISP, MPLS, and OTV?

A. All M1 and M2 modules support MPLS and OTV. LISP is supported only on the 32 x 10GE M1 module.

Q. Does the Nexus 7004 support SUP1?

A. No, the Nexus 7004 supports only SUP2 and SUP2E.

Q. Can I place an F2 module in the same VDC as an F1 or M module?

A. No, the F2 module must be placed in a separate VDC, so if you plan to mix F2 with F1 and M modules in the same chassis, you would need a VDC license.

[8/14/2014] update: The F2e and F3 (12-port 40GE) modules can interoperate with the M-series in the same VDC.
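For the original F2-only case, a hedged NX-OS sketch of carving out a dedicated VDC (the VDC name, ID, and interface range are made up for illustration, and the available resource options vary by release):

    ! from the default/admin VDC, in config mode
    vdc f2vdc id 2
      limit-resource module-type f2
      allocate interface ethernet3/1-48
    ! then, from exec mode, hop into the new VDC
    switchto vdc f2vdc

The limit-resource module-type command restricts which module families the VDC will accept, which is how you keep the F2 ports isolated from F1/M ports.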

Q. Can I upgrade from FAB1 to FAB2 modules during operation without any service disruption?

A. Yes, if you replace each module within a couple of minutes. Just make sure to replace all FAB1 modules with FAB2 modules within a few hours. If you mix FAB1 and FAB2 modules in the same chassis for a long time, the FAB2 modules will operate in backward-compatible mode and downgrade their speed to match the FAB1 modules' speed. You can follow this link for a step-by-step procedure for upgrading the FAB modules on the Nexus 7000.
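While doing the swap, a quick sanity check (hedged; the exact output format varies by NX-OS release) is to watch the fabric modules with:

    show module

which lists the Xbar (fabric) modules and their status alongside the supervisors and I/O modules, so you can confirm each FAB2 is online before pulling the next FAB1.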

Q. Can I use FAB1 modules in a 7009 chassis?

A. No, the Nexus 7009 uses only FAB2 modules.

Q. Does the Nexus 7000 support native Fibre Channel (FC) ports?

A. No, FC ports are not supported on the Nexus 7000. You would need either the Nexus 5500 or the MDS 9000 to get FC support.



How To Connect HP BladeSystem c7000/c3000 To Cisco Unified Fabric

When deploying blade servers, it's always recommended to use blade switches in the chassis for cabling reduction, improved performance, and lower latency. However, blade switches increase the complexity of the server access layer and introduce an extra layer between the servers and the network. In this post I will go through a few options for connecting the popular HP c-Class BladeSystem to a Cisco unified fabric.

HP Virtual Connect Flex-10 Module

HP BladeSystem c7000 with Flex10 VirtualConnect

The Virtual Connect Flex-10 module is a blade switch for the HP BladeSystem c7000 and c3000 enclosures. It reduces cabling uplinks to the fabric and offers an oversubscription ratio of 2:1.

It has (16) 10GE internal connectors for downlinks and (8) 10Gb SFP+ ports for uplinks. This module, however, does not support FCoE, so if you are planning on supporting Fibre Channel (FC) down the road, you would need to add a separate module to the chassis for storage. It also does not support QoS, which means you will need to manually carve up the bandwidth on the Flex-NICs exposed to the vSphere ESX kernel for vMotion, VM data, console, etc. This can be an inefficient way of assigning bandwidth, because each Flex-NIC gets only what's assigned to it even if the rest of the 10G link is idle.

This module adds an additional management point to the network, as it has to be managed separately from the fabric (usually by the server team). The HP Virtual Connect Flex-10 module is around $12,000 list price.

 

HP 10GE Pass-Thru Module

HP BladeSystem c7000 10G Pass thru with Cisco Nexus

The HP 10GE Pass-Thru module for the BladeSystem c7000 and c3000 enclosures acts like a hub and offers a 1:1 oversubscription ratio. It has 16 internal connectors for downlinks and (16) 1/10GE uplink ports. It supports FCoE, and the uplink ports accept SFP or SFP+ optics.

As shown in the picture above, this module can be connected to a Nexus Fabric Extender (FEX) such as the Nexus 2232PP, which offers (32) 10GE ports for server connectivity, or you can connect the module to another FEX that supports only 1GE downlinks if your servers do not need the extra bandwidth. This solution is more attractive than the first option of using the Virtual Connect Flex-10 module because it's a pass-thru and supports FCoE, so you would not need another module for storage. And because it's a pass-through, it doesn't act like a "man in the middle" between the fabric and the blade servers.

Finally, with this solution you have the option of using VM-FEX technology on the Nexus 5500, since both the HP pass-thru module and the Nexus 2200 FEX are transparent to the Nexus 5500. This module is around $5,000 list price.
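For the curious, VM-FEX on the Nexus 5500 follows the port-profile model. A rough, hedged sketch of what that could look like (the profile name and VLAN are made up, and the exact feature-set and vCenter connection commands vary by NX-OS release):

    install feature-set virtualization
    feature-set virtualization
    ! port profile pushed to vCenter as a distributed port group
    port-profile type vethernet VM-Data
      switchport mode access
      switchport access vlan 100
      state enabled

Each VM attached to the resulting port group then gets its own virtual Ethernet interface on the Nexus 5500, so per-VM policy lives in the fabric rather than in the vSwitch.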

 

Cisco Fabric Extender (FEX) For HP BladeSystem

HP BladeSystem c7000 with Cisco Nexus B22 FEX

The Cisco B22 FEX was designed specifically to support the HP BladeSystem c7000 and c3000 enclosures. Like the Cisco Nexus 2200, it works as a remote line card and is managed from the parent Nexus 5500, eliminating multiple provisioning and testing points. This FEX has 16 internal connectors for downlinks and (8) 10GE uplink ports. It does support FCoE, and its feature set is on par with the Nexus 2200.

By far this is the most attractive solution for connecting the HP BladeSystem to a Cisco fabric. With this solution you need to manage only the Nexus 5500 switches, and you get support for FCoE, VM-FEX, and other NX-OS features. The B22 FEX is sold by HP (not Cisco) and is priced at around $10,000 list. The Nexus 5500 supports up to 24 fabric extenders in Layer 2 mode and up to 16 fabric extenders in Layer 3 mode.

 



Cisco Nexus 2000/7000 vPC Design Options

When building data center networks using Cisco Nexus switches, you can attach the Nexus 2000 Fabric Extender (FEX) to a Nexus 5000 or 7000, depending on your design requirements and budget. In a previous post I briefly described the benefits of Virtual PortChannel (vPC) and discussed design options for the Nexus 2000/5500/5000. In this post I will go over the vPC design options for the Nexus 2000/7000 and important things to consider while creating the design.

Without vPC

Cisco Nexus 2000/7000 Without vPC

The picture above shows how you can connect a Nexus 2000 to its parent Nexus 7000 switch without using vPC. Topology A on the left shows a single-attached Nexus 2000 connected to a 7000, with a server connected to a server port on the Nexus 2000. There is no redundancy in this topology, and failure of the Nexus 7000 or 2000 would cause the server to lose connectivity to the fabric. In this design you can have up to 32 FEXs per Nexus 7000 with Sup1/2, or 48 FEXs with Sup2E.

Topology B on the right also has no vPC; NIC teaming in this case is used for failover. The solid blue link is the primary connection and the dotted link is the backup. It's up to the OS on the server to detect any failure upstream and fail over to the backup link. As in A, in this design you can have up to 32 FEXs per Nexus 7000 with Sup1/2, or 48 FEXs with Sup2E.

 

With vPC

 

Cisco Nexus 2000/7000 vPC Design

The picture above shows the supported vPC topology for the Nexus 7000. Topology C is called straight-through vPC, in which each Nexus 2000 (FEX) is connected to one parent Nexus 7000 while the server is dual-attached to a pair of Nexus 2000s. In this case the NIC on the server must support LACP so that the two FEXs appear to it as a single switch. Most modern Intel and HP NICs support LACP today. This topology supports up to 64 FEXs (32 per Nexus 7000) with Sup1/2, or 96 FEXs (48 per Nexus 7000) with Sup2E.
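To make topology C concrete, here is a minimal, hedged NX-OS sketch of the server-facing side (the FEX, port-channel, VLAN, and vPC numbers are made up for illustration, and the vPC domain and peer link are assumed to already be in place):

    ! on Nexus 7000-1, whose FEX is number 101
    interface ethernet101/1/1
      channel-group 10 mode active
    interface port-channel10
      switchport access vlan 10
      vpc 10
    ! mirror the same config on Nexus 7000-2 using its own
    ! FEX host port (e.g. ethernet102/1/1) and the same vpc number

The matching vpc number on both parents is what makes the two FEX ports appear to the server as one LACP bundle.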

Maximum Supported Nexus FEX As of Today:

Nexus 7000:
  Without vPC: 32 with Sup1/2; 48 with Sup2E
  Straight-through vPC: 64 with Sup1/2 (32 per Nexus 7000); 96 with Sup2E (48 per Nexus 7000)

Notes:

  • The Nexus 7000 modules that support FEX are: N7K-M132XP-12L (32 x 10GbE SFP+), N7K-F248XP-25 (48 x 10GbE SFP/SFP+), and all M2 modules. The F1, F2 copper, and 1GbE M1 modules don't support FEX
  • All FEX uplinks must be placed in the same VDC on the Nexus 7000
  • Dual-attaching the FEX to a pair of Nexus 7000s is not supported as of today, which means that in the event of an I/O module failure, all FEXs hanging off that module will be knocked out. For this reason it's recommended to have at least two I/O modules that support FEX in the chassis and to distribute the uplinks across them for redundancy
  • If the FEX is going to be within 100 meters of the Nexus 7000, you can use the Cisco Fabric Extender Transceiver (FET) on the uplinks, which offers a cost-effective way to connect the FEX to its parent switch. The FET is much cheaper than a 10G SFP+ optic

 



Ultra-low-latency ToR Switches For Building Scalable Leaf-Spine Fabrics

When building scalable leaf-spine fabrics, network architects look for low-latency, high-density switches to use at the leaf layer. There are many fixed switches that can be used as the top-of-rack (ToR) switch at the leaf layer to provide connectivity upstream to the spine layer. Below I compare three ultra-low-latency ToR switches based on merchant silicon that are available on the market today for that purpose.

Cisco Nexus 3064 

The 3064 is 1 RU high and offers low latency and low power consumption per port. It has (48) 1/10GbE ports and (4) 40GbE uplinks, each of which can be used as a native 40GbE port or split into four 10GbE ports. It runs the same NX-OS as the Nexus 7000 and 5000 series.

The Nexus 3064 is Cisco's first switch in the Nexus family to use merchant silicon (the Broadcom Trident+ chip). I'm curious to see whether Cisco will continue to use merchant silicon in future products or stick to their proprietary Nuova ASICs of the 7000 and 5000 series.

 

Arista 7050S-64

The Arista 7050S-64 is very similar to the Cisco Nexus 3064 in terms of latency, interface types, and switching capacity. Its power consumption, however, is lower than the Nexus 3064's. Arista's fixed switches are known for their low power consumption, and the 7050S-64 is no exception: it draws under 2W per port. You really cannot beat that!

 

Dell Force10 S4810

The Dell Force10 S4810 is another great ToR switch that can be used to build leaf-spine fabrics. It offers the same interface types as the Nexus 3064 and Arista 7050S-64, in a similar form factor. It does, however, have slightly higher power consumption per port.

 

Ultra-low-latency 10/40 GbE Top-of-Rack Switches

Cisco Nexus 3064 / Arista 7050S-64 / Dell Force10 S4810

  Ports: 48 x 1/10GbE SFP+ and 4 x 40GbE QSFP+ (all three)
  Packet latency (64 bytes): 824 ns / 800 ns / 700 ns
  OS: NX-OS / Arista EOS / FTOS
  Form factor: 1 RU (all three)
  Switching capacity: 1.28 Tbps (all three)
  Power supplies: 2 redundant, hot-swappable (all three)
  Typical operating power: 177 W / 103 W / 220 W
  Full data sheet: Data Sheet / Data Sheet / Data Sheet



Cisco Nexus 2000/5000 vPC Design Options

Virtual PortChannel (vPC) allows two links that are connected to two different physical Cisco Nexus 5000 or 7000 switches to appear to the downstream device as a single PortChannel link. That downstream device could be a server, a Nexus 2000, or any classical Ethernet switch.

vPC is useful to prevent spanning tree from blocking redundant links in the topology. After all, you have spent a fortune on those expensive 10G ports, and the last thing you want is for spanning tree to block them.
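As a refresher, here is a minimal, hedged NX-OS sketch of the vPC building blocks on one of the two peer switches (the domain ID, addresses, VLAN, and port-channel numbers are made up for illustration; the second peer mirrors this config):

    feature vpc
    feature lacp
    vpc domain 10
      peer-keepalive destination 192.168.1.2 source 192.168.1.1
    ! peer link between the two Nexus switches
    interface port-channel1
      switchport mode trunk
      vpc peer-link
    ! a member port-channel toward a downstream device
    interface port-channel20
      switchport access vlan 10
      vpc 20

The peer keepalive and peer link are the glue; any port-channel tagged with a vpc number is then presented to the downstream device as one logical link.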

Having said that, there are several ways to connect the Cisco Nexus Fabric Extender (FEX) to its parent Nexus 5000 or 7000 switch. In this post I'm going to discuss supported vPC topologies for the Nexus series. I'm going to start with the Nexus 2000/5000 now and will add a separate post for the Nexus 2000/7000 options later.

 

Without vPC

Cisco Nexus 2000/5000 Without VPC

The picture above shows the supported non-vPC topologies. Topology A on the left shows straightforward connectivity between a Nexus 2000 and a 5000, with a server connected to a server port on the Nexus 2000. There is no redundancy in this topology, and failure of the Nexus 5000 or 2000 would cause the server to lose connectivity to the fabric. In this design you can have up to 24 FEXs per Nexus 5500 in L2 mode and 16 FEXs in L3 mode.

Topology B on the right also has no vPC; NIC teaming in this case is used for failover. The solid blue link is the primary connection and the dotted link is the backup. It's up to the OS on the server to detect any failure upstream and fail over to the backup link. As in A, in this design you can have up to 24 FEXs per Nexus 5500 in L2 mode and 16 FEXs in L3 mode.

 

With vPC

Cisco Nexus 2000/5000 VPC

The picture above shows the supported vPC topologies for the Nexus 5000. Topology C is called straight-through vPC, in which each Nexus 2000 (FEX) is connected to one parent Nexus 5000 while the server is dual-homed. In this case the NIC on the server must support LACP so that the two FEXs appear to it as a single switch. Most modern Intel and HP NICs support LACP today. This topology supports up to 48 FEXs (24 per Nexus 5500) in L2 mode and 32 FEXs (16 per Nexus 5500) in L3 mode.

In topology D, on the other hand, each FEX is dual-homed, and so is the server, so the NIC on the server must support LACP as in C. In this topology you can have up to 24 FEXs in L2 mode and 16 FEXs in L3 mode.

Topology E is similar to D in that each FEX is dual-homed, but the server is single-homed. In this topology you can have up to 24 FEXs in L2 mode and 16 FEXs in L3 mode.
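For topologies D and E, here is a minimal, hedged NX-OS sketch of the dual-homed (active-active) FEX attachment; the FEX, port-channel, and vPC numbers are made up, and the vPC domain and peer link are assumed to already exist:

    ! identical on both Nexus 5500 peers
    interface port-channel100
      switchport mode fex-fabric
      fex associate 100
      vpc 100
    interface ethernet1/1-2
      channel-group 100

Because both parents claim the same FEX number over a vPC'd fabric port-channel, the FEX is actively forwarded by both Nexus 5500s instead of hanging off just one.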

 

Maximum Supported Cisco FEX As of Today:

  Without vPC (L2 mode): Nexus 5000: 12 / Nexus 5500: 24
  Without vPC (L3 mode): Nexus 5000: not supported / Nexus 5500: 16
  Straight-through (L2 mode): Nexus 5000: 24 (12 per switch) / Nexus 5500: 48 (24 per switch)
  Straight-through (L3 mode): Nexus 5000: not supported / Nexus 5500: 32 (16 per switch)
  Dual-homed FEX (L2 mode): Nexus 5000: 12 / Nexus 5500: 24
  Dual-homed FEX (L3 mode): Nexus 5000: not supported / Nexus 5500: 16
