Before I start: it's been a while since my last post, mainly because I have been really busy with work and family. Hopefully I will now make it a habit to post something useful every few weeks.
Disclaimer: This is not the **official** recommendation from Nutanix on Cisco ACI. It is just something I worked on for a client of ours, and I thought it would be useful for anyone who might end up deploying Nutanix + Cisco ACI + a new vCenter on Nutanix NDFS :)
Problem Statement: Cisco ACI requires out-of-band (OOB) access to vCenter so that it can deploy the ACI networks as port groups in vCenter. We want to build vCenter on NDFS, and NDFS (ideally) needs the 10 GbE fabric from each node. But all the uplinks on the leaf switches are controlled by Cisco ACI, and ACI needs vCenter before it can push out the Management and VM Network VLANs: a classic chicken-and-egg problem.
Some of you might see this and go "uh oh", but let me assure you, this becomes a problem in non-Nutanix environments as well, especially for anyone using IP-based storage with only two 10 GbE adapters per host.
There are two ways in which we can take care of this.
Option 1: Deploy vCenter in a management-only cluster that doesn't depend on Cisco ACI for networking. (Needs separate physical infrastructure for networking and for the management cluster.)
Option 2: Add another dual-port 10 GbE NIC to each of the nodes. (This becomes a lot more expensive when you think of tens of nodes, each with four 10 GbE ports.)
Both of the above options are quite costly, whether from a networking physical infrastructure point of view or a management-only cluster point of view.
So how do we go about solving this?
Time and again we come across a solution that's been engineered perfectly. One such example is the VMware dvSwitch framework. It has reduced what could otherwise be a very tedious exercise when deploying huge VMware environments. The other advantage is that it provides extensibility options for the likes of Cisco and IBM to develop their own "virtual switch". I don't have hands-on experience with the IBM version of the dvSwitch, so I am not going to comment on that.
I am writing this post to summarise my experience with the various deployment scenarios for the Cisco Nexus 1000v and its parent, the VMware dvSwitch.
Cisco Nexus 1000v
Cisco has been a front runner when it comes to networking and everything related to it: security, data centre components, servers, etc. So it's no surprise that Cisco was the first company to use the dvSwitch framework, coming up with the "Nexus 1000v". From here on I'll call it the Nexus 1kv. It runs the same OS (NX-OS) as the Nexus 3000, 5000 and 7000 series switches.
Where to use:
- Network Support team needs to have absolute control over everything networking related.
- Strict CoS/QoS settings need to be enforced per traffic type, or QoS needs to be provided to VoIP, certain VM networks, etc. (Although this can be approximated using custom network resource pools, it's not the same thing when it comes to real-time traffic.)
- Multi-tenant environments where use of the Cisco Virtual Security Gateway (VSG) is mandatory. (VSG uses the Nexus 1000v to implement ACL rules, etc.)
- Any other environment where Cisco Virtual appliances are necessary.
A lot of the points above are also the advantages of using the Cisco Nexus 1kv. Now let's look at some disadvantages (from my experience; they might not be disadvantages at all for you).
- Single VSM per VDC construct. (Yes, you can span up to 12 per vCenter, but why can't we use multiple VSMs per vCenter within a single VDC construct?)
- Limited number of hosts per VSM (128).
- Can't be managed through a GUI (not necessarily a bad thing, but most VMware admins aren't Cisco people).
- Layer 2 / Layer 3 deployment modes can be confusing.
- Can't have multiple port groups with the same VLAN tag. (Don't ask me why I said that.)
- Licensing can be a nightmare to manage depending on the deployment model.
- Packet / Control VLANs required for Layer 2 Deployment Mode.
- Code upgrades can be quite scary (if you are not a Cisco person).
- You've got to have Enterprise Plus to use the Nexus 1kv. (This is not really a disadvantage, but I will discuss it more in the dvSwitch section.)
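To give a flavour of the CLI-driven workflow mentioned above, this is roughly what a Nexus 1000v port-profile looks like on the VSM. It is a hedged sketch: the profile name and VLAN ID are made up, and your exact NX-OS syntax may vary by release. The `vmware port-group` line is what makes the profile appear as a port group in vCenter.

```
! Hypothetical Nexus 1000v port-profile; names and VLAN ID are examples only
port-profile type vethernet VM-Network-100
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```

This is the part most VMware admins find foreign: every port group change goes through the NX-OS CLI on the VSM rather than the vSphere client.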
I recently worked at a place where the deployment model became so confusing that I recommended the use of the dvSwitch instead of the Nexus 1000v, especially since the customer wasn't using the Nexus 1000v to do what it does best.
VMware dvSwitch (The big daddy of Nexus 1000v)
Does anyone remember how painful it was to manage networking on ESX 3.5 and earlier in environments that spanned hundreds of ESX servers? I do. When I first started working with VMware products, I was baptised on ESX 3.0.2 running vCenter Server 2.0 (or 2.5, I can't remember). We had a scary number of scripts (not PowerCLI; scripts that ran in the Console OS of ESX) with which we managed networking across close to 150 servers.
With the introduction of the dvSwitch, the nightmare of managing multiple port groups across several hundred hosts became insanely easier. There were a few hiccups along the way before the health check feature was introduced in vSphere 5.1. OK, let's get back to 2014.
In my opinion, the VMware dvSwitch can be used in any given environment, whether or not there are security requirements to deploy firewalls or IPS devices within the virtual network. With NSX, all of these things are going to become a lot easier and a lot more manageable. The only caveat is the requirement for Enterprise Plus licensing; well, doesn't everyone use it already?
Where to use:
- Everywhere… I mean really EVERYWHERE. 🙂
- With Enterprise Plus licensing, you can use Network I/O Control (NIOC), VMware's take on QoS.
- Multiple dvSwitches per datacenter/vCenter. I think the max is 512 switches; you can look up the configuration maximums on the VMware support page.
- No further licensing requirements (apart from Enterprise plus).
- No deployment modes to be considered.
- Multiple port groups distributed across multiple datacenters with the same VLAN ID.
- Use of PowerCLI to manage and migrate VMs/Hosts/etc.
- Can shape inbound (RX) traffic.
- Has a central, unified management interface through vCenter Server.
- Supports Private VLANs (PVLANs).
- Allows customization of the data and control planes.
- Increased visibility of inter-VM traffic through NetFlow.
- Improved monitoring through port mirroring (dvMirror).
- Support for LLDP (Link Layer Discovery Protocol), a vendor-neutral discovery protocol.
- Enhanced link aggregation, with a choice of hashing algorithms and an increased limit on the number of link aggregation groups.
- Additional port security through traffic filtering support.
- Improved single-root I/O virtualization (SR-IOV) support and 40 GbE NIC support.
Yep, some of these are mentioned on the VMware support page as well.
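As a quick illustration of the PowerCLI point above, here is a minimal sketch of day-to-day dvSwitch management. It assumes the VMware PowerCLI module is installed and a vCenter is reachable; the server name, switch name, port group name, VM name, and VLAN ID are all hypothetical.

```powershell
# Minimal PowerCLI sketch; all names and the VLAN ID are hypothetical examples
Connect-VIServer -Server vcsa.lab.local

# Fetch an existing distributed switch and add a port group tagged with VLAN 100
$vds = Get-VDSwitch -Name "dvSwitch-Prod"
New-VDPortgroup -VDSwitch $vds -Name "VM-Network-100" -VlanId 100 -NumPorts 128

# Move a VM's network adapter onto the new port group
Get-VM "app01" | Get-NetworkAdapter |
  Set-NetworkAdapter -Portgroup (Get-VDPortgroup -Name "VM-Network-100") -Confirm:$false
```

Compare this with the Nexus 1kv workflow: the same change is a handful of cmdlets any VMware admin can script, with no VSM CLI involved.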
Let me explain a little more about my comment on Enterprise Plus licensing. My thoughts on the Nexus 1kv are quite clear: if you want segregation, advanced CoS settings, Cisco VSG, and so on, you should use the Nexus 1kv. But if you are not using any of these, why would you pay more money for a Nexus 1kv when the dvSwitch gives you more or less the same base features? After all, the 1kv was developed on the dvSwitch framework.
I am happy to be corrected on anything I might have got wrong, but in my opinion the Nexus 1kv is slowly losing its "charm" as the dvSwitch ramps up its integration with NSX and vCNS components. But when it comes time to choose which one to use, the answer is, "it depends on what the customer requirements are".