Category Archives: Nutanix
Before I start: it's been a while since my last post, mainly because I have been really busy with work and family. Hopefully I will now make it a habit to post something useful every few weeks.
Disclaimer: This is not the **official** Nutanix recommendation on Cisco ACI. It is just something I worked on for one of our clients, and I thought it would be useful for anyone who might end up deploying Nutanix + Cisco ACI + a new vCenter on Nutanix NDFS :)
Problem Statement: Cisco ACI requires out-of-band (OOB) access to vCenter in order to deploy the Cisco ACI networks as port groups in vCenter. We want to build vCenter on NDFS, and NDFS ideally needs a 10 Gb fabric from each node. All the uplinks in the leaf switches are controlled by Cisco ACI, but ACI needs vCenter before it can push out the Management and VM Network VLANs.
Some of you might see this and go uh oh, but let me assure you, this becomes a problem in non-Nutanix environments as well, especially for anyone using IP-based storage with only 2 x 10 Gb adapters.
There are two ways we can take care of this.
Option 1: Deploy vCenter in a management-only cluster that does not depend on Cisco ACI for networking. (Needs separate physical infrastructure for networking and for the management cluster.)
Option 2: Add another dual-port 10 Gb NIC to each of the nodes. (This becomes a lot more expensive when you think of tens of nodes, each with four 10 Gb adapters.)
Both of the above options are quite costly, whether from a networking-infrastructure point of view or a management-only-cluster point of view.
So how do we go about solving this?
First post fighting the FUD around Nutanix. I recently saw an image on Twitter spreading FUD about disk failure and data loss when using Nutanix. At Nutanix we have many different hardware models, and, for people who don't keep up with the technology, we also OEM with Dell and Lenovo, who have specific models of their own.
Even with the different hardware models, our software handles various levels of failure with ease. The data consistency and resiliency of the software does not depend on the hardware it runs on, be it a disk failure, a node failure, or a block failure; there are varying levels of redundancy that handle all of the above.
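As a toy model of why keeping multiple replicas lets a cluster survive a node failure, here is a short Python sketch. This is a deliberately simplified illustration, not Nutanix's actual extent placement logic; the node names and replication factor (RF2) are assumptions for the example.

```python
import itertools

def place_replicas(extents, nodes, rf=2):
    """Place rf copies of each extent on distinct nodes (simplified round-robin)."""
    placement = {}
    ring = itertools.cycle(range(len(nodes)))
    for ext in extents:
        start = next(ring)
        placement[ext] = [nodes[(start + i) % len(nodes)] for i in range(rf)]
    return placement

def available_after_failure(placement, failed):
    """Data survives if every extent still has at least one replica on a live node."""
    return all(any(n not in failed for n in replicas)
               for replicas in placement.values())

nodes = ["node-A", "node-B", "node-C", "node-D"]
placement = place_replicas([f"extent-{i}" for i in range(8)], nodes, rf=2)

print(available_after_failure(placement, {"node-B"}))            # True: RF2 survives one node
print(available_after_failure(placement, {"node-A", "node-B"}))  # False: some extent lost both copies
```

With RF2 any single disk or node can fail without data loss; surviving a simultaneous two-node failure would require a higher replication factor.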
My first post since joining Nutanix. It's been pretty hectic in my new role and I am loving it. There is so much to learn and so many awesome things coming up that it's hard to keep abreast of everything.
One of the best reasons to choose Nutanix is CHOICE: the choice of running the hypervisor you want, not the one you are forced to live with. For many people today the hypervisor of choice is still VMware ESXi, and rightly so, but Microsoft hasn't been quiet with Hyper-V, which is now almost at feature parity with ESXi yet still considered second best. At Nutanix, we have our own hypervisor, the Acropolis Hypervisor (what a great name); it's based on KVM but has been through the hands of our very capable Product Development team, who have ironed out the issues. We also strive to enable our customers to change hypervisors when they want to, be it from AHV to ESXi or from Hyper-V to AHV. This blog post focuses on how to convert a Hyper-V UEFI VM, such as Windows 2008 R2 or higher, to a traditional BIOS VM on AHV.
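The full conversion walkthrough is in the post itself, but one piece of background helps frame it: a UEFI VM typically boots from a GPT-partitioned disk, while a legacy BIOS VM expects MBR, so a first sanity check on the migrated disk image is which scheme it uses. Here is a minimal Python sketch of that check (a hypothetical helper, not a Nutanix or Hyper-V tool); GPT disks carry the "EFI PART" signature at the start of LBA 1, while plain MBR disks only have the 0x55AA boot signature at offset 510.

```python
SECTOR = 512

def detect_partition_scheme(image_path):
    """Return 'GPT', 'MBR', or 'unknown' for a raw disk image."""
    with open(image_path, "rb") as f:
        lba0 = f.read(SECTOR)   # MBR (or protective MBR on a GPT disk)
        lba1 = f.read(SECTOR)   # GPT header lives here on GPT disks
    if lba1[:8] == b"EFI PART":
        return "GPT"
    if lba0[510:512] == b"\x55\xaa":
        return "MBR"
    return "unknown"

# Two tiny fake images to demonstrate the check
# (real disks are much larger; only the first two sectors matter here).
with open("mbr.img", "wb") as f:
    data = bytearray(2 * SECTOR)
    data[510:512] = b"\x55\xaa"      # MBR boot signature
    f.write(data)

with open("gpt.img", "wb") as f:
    data = bytearray(2 * SECTOR)
    data[510:512] = b"\x55\xaa"      # protective MBR
    data[512:520] = b"EFI PART"      # GPT header signature
    f.write(data)

print(detect_partition_scheme("mbr.img"))  # MBR
print(detect_partition_scheme("gpt.img"))  # GPT
```

If the source disk turns out to be GPT, the conversion also involves repartitioning or rewriting the boot configuration for BIOS boot, which is what the post's step-by-step procedure covers.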