Category Archives: vSphere
Before I start: it's been a while since my last post, mainly because I have been really busy with work and family. Hopefully I will now make it a habit to post something useful every few weeks.
Disclaimer: This is not the **official** recommendation from Nutanix on Cisco ACI. This is just something I worked on for a client of ours and thought it would be useful for anyone who might end up deploying Nutanix + Cisco ACI + a new vCenter on Nutanix NDFS :)
Problem Statement: Cisco ACI requires out-of-band (OOB) access to vCenter in order to deploy the Cisco ACI networks as port groups in vCenter. We need to build vCenter on NDFS, and NDFS ideally needs the 10Gb fabric from each node. All the uplinks on the leaf switches are controlled by Cisco ACI, but ACI needs vCenter before it can push out the Management and VM Network VLANs: a classic chicken-and-egg dependency.
Some of you might see this and go "uh oh", but let me assure you, this becomes a problem in non-Nutanix environments as well; especially for anyone using IP-based storage with only 2 x 10Gb adapters per host.
There are two ways we can take care of this.
Option 1: Deploy vCenter in a management-only cluster that doesn't depend on Cisco ACI for networking. (This needs separate physical infrastructure for networking and for the management cluster.)
Option 2: Add another dual-port 10Gb NIC to each of the nodes. (This becomes a lot more expensive when you think of tens of nodes, each with 4 x 10Gb adapters.)
Both of the above options are quite costly, be it from a networking physical infrastructure point of view or a management-only cluster point of view.
So how do we go about solving this?
VSPEX Blue is EMC's EVO:RAIL offering. EMC have made enough modifications to warrant a special mention, but they haven't tried to include a storage 'appliance', unlike NetApp, so the base EVO architecture remains unchanged. Sorry, this isn't a dig at NetApp, just stating the obvious. Duncan Epping has a lot of useful information about EVO:RAIL, and Cormac Hogan has an excellent blog series about VMware VSAN, so I am not going to explain what the EVO:RAIL architecture is or how VMware VSAN works. (more…)
TPS: Transparent Page Sharing. What does it do? Think about all the memory pages the VMs on an ESXi host keep in the host's RAM. When pages are identical across VMs, TPS maps them to a single shared copy in host RAM instead of keeping duplicates.
What happens when you have a number of VMs running the same OS, all using the same memory pages over and over again? That's right: wasted RAM. Well, RAM used to be precious a few years ago, but not today, when my phone has more RAM than my first desktop.
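To make the duplication concrete, here is a toy sketch of the deduplication idea behind TPS. This is only an illustration (real ESXi hashes candidate pages and then compares them bit-by-bit before collapsing them to one copy-on-write page); the function and sample data are hypothetical:

```python
import hashlib

PAGE_SIZE = 4096  # small pages; 2 MB large pages rarely match, which defeats sharing


def shared_savings(pages):
    """Bytes saved if every set of identical pages is backed by one copy."""
    unique = {hashlib.sha1(p).digest() for p in pages}
    return (len(pages) - len(unique)) * PAGE_SIZE


# Three "VMs" booted from the same OS image hold many identical pages:
zero_page = bytes(PAGE_SIZE)                     # zeroed guest memory
os_page = b"kernel".ljust(PAGE_SIZE, b"\x00")    # a stand-in for shared OS code
vm_pages = [zero_page, os_page, zero_page] * 3   # 9 pages, only 2 distinct

print(shared_savings(vm_pages) // 1024, "KiB saved")  # 7 of 9 pages deduplicated
```

The same reasoning explains why savings shrink with large pages: the bigger the page, the lower the chance two of them are byte-for-byte identical.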
So VMware has decided, after much deliberation I am sure, that TPS won't be enabled by default in the next version of vSphere (vSphere 6.0). VMware has also made it very clear that a number of patches will be released to do the same thing on the pre-existing systems in production today (ESXi 5.0+).
After applying the patch, will it affect the amount of RAM being utilised on the hosts? Yes, it will. If you look at your hosts today, you can see how much (or how little) RAM TPS has been saving, either with esxtop on any ESXi host or with the new 'fling', Visual Esxtop. Also note that TPS was not particularly effective if you had Large Pages enabled (which increases the page size from 4 KB to 2 MB).
Personally I have never seen more than 4-6 GB of TPS savings on any given host, irrespective of what the host was used for, and that 6 GB maximum was on a host running a Citrix XenDesktop 5.6 VDI environment. We can purchase servers with almost 2 TB of RAM installed today, and I am sure that will keep increasing as the technology used to make memory chips becomes smarter and smaller. I read somewhere that an 'organic' chip is being developed which would hold 2000 GB in something the size of a teardrop. So all in all, the death of TPS was inevitable as memory chips started to shrink and the security aspect of sharing pages became a hot topic. You can read more about the 'default disabling' of TPS in the official VMware KB article at