
Things that need to be changed on vSphere for XtremIO

XtremIO is the newest and fastest (well, EMC say so) all-flash array on the market. I have been running one in my “lab” for a POC which is quickly turning into a major VDI refresh for one of my clients. Having run through the basics of creating storage, monitoring, alerting etc. in my previous posts, I am going to concentrate on the parameters we need to change in the vSphere world to ensure we get the best performance from XtremIO.

The parameters also depend on which version of ESXi you’re using, as XtremIO supports ESXi 4.1 and later.

Without further delay, let’s start.

Adjusting the HBA Queue Depth

We are going to be sending a lot more IO to the XtremIO array than we would to a traditional hybrid array, so we need to ensure that the HBA queue depth allows a lot more IO requests through.
You can find out which driver module is in use with the command

Step 1: esxcli system module list | grep ql (or lpfc for Emulex)

Once you know which module is in use, the commands below change the HBA queue depth on the host (a reboot is required for the new value to take effect).

QLogic – esxcli system module parameters set -p ql2xmaxqdepth=256 -m qla2xxx (or whatever module the command in Step 1 returned)

Emulex – esxcli system module parameters set -p lpfc0_lun_queue_depth=256 -m lpfc820 (or whatever module the command in Step 1 returned)
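The two vendor commands differ only in the parameter name, so a small sketch (my own, not an EMC script) can pick the right one based on the module found in Step 1. The module names below are just the common defaults; verify yours first. The script only prints the command so it can be reviewed before running:

```shell
#!/bin/sh
# Print the queue-depth command matching the HBA driver module.
# Parameter names are the usual QLogic/Emulex ones; confirm the module
# name on your host with: esxcli system module list
queue_depth_cmd() {
    # $1 = HBA module name, $2 = desired queue depth
    case "$1" in
        ql*)   echo "esxcli system module parameters set -p ql2xmaxqdepth=$2 -m $1" ;;
        lpfc*) echo "esxcli system module parameters set -p lpfc0_lun_queue_depth=$2 -m $1" ;;
        *)     echo "unknown HBA module: $1" >&2; return 1 ;;
    esac
}

queue_depth_cmd qla2xxx 256
queue_depth_cmd lpfc820 256
```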

Multipathing

If you are not going to use PowerPath, we will be using Round Robin with NMP, since XtremIO is an active/active array with X number of controllers (yeah, I know it’s got 2 controllers per disk shelf, and as of today you can scale up to 6 disk shelves per cluster, so 12 controllers).

The engineers who work with XtremIO recommend changing the default number of IOs per path from 1000 to 1, yes, “ONE”, so that IO requests are spread across every controller in the cluster. I haven’t really seen any performance improvement from doing so, but it is only a recommendation at the end of the day. If you find it makes no significant difference in your environment, the onus is on you to make that call.

First, let’s get all the volumes that have been configured on XtremIO.

esxcli storage nmp path list | grep XtremIO

This will give you the naa IDs of all the volumes presented from XtremIO.

Now let’s set the policy to Round Robin (RR) for those volumes.

esxcli storage nmp device set --device <naa.id> --psp VMW_PSP_RR (5.x)
esxcli nmp device setpolicy --device <naa.id> --psp VMW_PSP_RR (4.1)

You can also set the default path selection policy for any storage in 5.x by identifying the SATP and modifying it with the command

esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=<your_SATP_name>
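Another option in 5.x is a SATP claim rule, so XtremIO volumes pick up Round Robin (and the iops setting below) automatically as they are claimed. The vendor string "XtremIO" and the rule description here are assumptions on my part; check the vendor/model reported by `esxcli storage core device list` before using them. The sketch just prints the command for review:

```shell
#!/bin/sh
# Print a SATP claim rule that defaults XtremIO volumes to Round Robin
# with iops=1 at claim time. Vendor string is an assumption; verify it
# against your own devices first.
RULE='esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA -t vendor -V XtremIO -P VMW_PSP_RR -O iops=1 -e "XtremIO RR iops=1 rule"'
echo "$RULE"
```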

To set the number of IOs per path to 1 in Round Robin,

esxcli storage nmp psp roundrobin deviceconfig set -d <naa.id> --iops 1 --type iops (5.x)

esxcli nmp roundrobin setconfig --device=<naa.id> --iops=1 (4.1)
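To avoid typing the two 5.x commands for every volume, here is a small sketch of mine (assuming the naa IDs were collected with the grep above) that prints the pair of commands per volume so they can be reviewed before running:

```shell
#!/bin/sh
# Print the two esxcli commands (5.x syntax) needed for each XtremIO
# volume: set the PSP to Round Robin, then set the RR IO limit to 1.
rr_commands() {
    # $@ = naa IDs, e.g. from: esxcli storage nmp path list | grep XtremIO
    for naa in "$@"; do
        echo "esxcli storage nmp device set --device $naa --psp VMW_PSP_RR"
        echo "esxcli storage nmp psp roundrobin deviceconfig set -d $naa --iops 1 --type iops"
    done
}

# Placeholder ID for illustration only; substitute your real naa IDs:
rr_commands naa.514f0c5e6b200001
```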

Of course, if you don’t want to change all of this, you can still use PowerPath.

Host Parameters to Change

For best performance we also need to set a couple of disk parameters. You can do this via the GUI, or the easier way via the CLI (preferred).

Using the GUI, set the following parameters: Disk.SchedNumReqOutstanding to 256 and Disk.SchedQuantum to 64.

Note: if these hosts also have non-XtremIO volumes, the higher limits may over-stress those arrays’ controllers and cause performance degradation while communicating with them.

Using the command line in 4.1, set the parameters with

esxcfg-advcfg -s 64 /Disk/SchedQuantum
esxcfg-advcfg -s 256 /Disk/SchedNumReqOutstanding

To check that they have been set correctly, use

esxcfg-advcfg -g /Disk/SchedQuantum
esxcfg-advcfg -g /Disk/SchedNumReqOutstanding

You should also change Disk.DiskMaxIOSize from the default of 32767 (the value is in KB) to 4096, capping single IOs at 4 MB. XtremIO reads and writes in 4K blocks by default, and that’s how it gets the awesome deduplication ratio.

In ESXi 5.0/5.1 you can set SchedNumReqOutstanding per device by using

esxcli storage core device set -d <naa.id> -O 256
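A similar sketch of mine (same review-before-run pattern as above) for applying the 256 outstanding-request limit to each XtremIO device:

```shell
#!/bin/sh
# Print the per-device command (ESXi 5.0/5.1) that raises the number of
# outstanding IO requests to 256 for each given XtremIO device.
sched_cmds() {
    # $@ = naa IDs of the XtremIO devices
    for naa in "$@"; do
        echo "esxcli storage core device set -d $naa -O 256"
    done
}

# Placeholder ID for illustration only:
sched_cmds naa.514f0c5e6b200001
```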

In vSphere 5.5 you can set this parameter on each volume individually instead of configuring it per host.

vCenter Server Parameters

Depending on the number of xBricks configured per cluster, the vCenter Server parameter config.vpxd.ResourceManager.maxCostPerHost needs to be changed. This adjusts the maximum number of concurrent full clone operations.

One xBrick Cluster – 8 (default)
Two xBrick Cluster – 16
Four xBrick Cluster – 32
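The values above scale linearly at 8 per xBrick, so a tiny helper (my own sketch, not an EMC tool) can compute the value for a given cluster size:

```shell
#!/bin/sh
# Compute config.vpxd.ResourceManager.maxCostPerHost for a cluster,
# assuming the linear 8-per-xBrick scaling shown in the table above.
max_cost_per_host() {
    # $1 = number of xBricks in the cluster
    echo $(( $1 * 8 ))
}

max_cost_per_host 2    # prints 16
```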

That’s the end of this post. Please feel free to correct me if I’ve got any commands wrong.


Recommendation (as per AusFestivus’ comment): EMC recommend that PowerPath be used for best performance, but it always comes down to cost constraints and how much the client wants to spend. In my opinion, PP is more of a “nice to have for best performance without tinkering”. If you are prepared to keep tinkering and tuning things yourself, you can do without PP.

XtremIO – Part 2 – Monitoring, Alerting and Security

This is part 2 of the 2-part XtremIO blog post. You can find the first one here.

We will cover the basics of monitoring and security in XtremIO in this post. Please remember this is not a deep dive into the newest AFA; you should still consult the official EMC product documentation for up-to-date information.


The XMS can be locked down to use either local accounts or LDAP-authenticated accounts. There are default accounts pre-configured on the XIO; however, it is possible to change the default passwords of the root, IPMI, tech and admin accounts.

There are 4 user roles available on the XMS:

  • Read-Only
  • Technician – only an EMC technician should use this
  • Administrator – full access
  • Configuration – can configure the array but cannot add, edit or delete users


LDAP Authentication

The XtremIO storage array supports LDAP user authentication. Once configured for LDAP authentication, the XMS redirects user authentication to the configured LDAP or Active Directory (AD) servers and allows access to authenticated users only. Users’ XMS permissions are defined by a mapping between their LDAP/AD groups and XMS roles.

The XMS server LDAP configuration feature allows using a single server or multiple servers to authenticate external users logging in to the XMS server.

The LDAP operation is performed once, when logging in to an XMS server with external user credentials. The XMS server operates as an LDAP client and connects to an LDAP service running on an external server. The LDAP search is performed using the pre-configured LDAP configuration profile and the external user’s login credentials.

If authentication is successful, the external user logs in to the XMS server and gets full or limited XMS server functionality, according to the XMS role assigned to the AD user’s group. The external user’s credentials are saved in the XMS cache and a new user profile is created in the XMS user administration configuration. From that point on, external user authentication is performed internally by the XMS server, without connecting to the external server. The XMS server re-performs the LDAP search only after the LDAP configuration cache expires (the default cache expiry is 24 hours), or at the next successful external user login if the external user’s credentials were manually removed from the XMS server user administration.

LDAP user authentication can be configured and managed via either GUI or CLI.


Monitoring can be done at both a physical level and a logical level using the new XtremIO (XIO) Management Server (called XMS hereafter). In my current environment I only have one xBrick for testing, so my XMS is managing a cluster of one xBrick. At this point in time a single XMS can only manage one cluster (although this might change in the next few code revisions) with a maximum of 8 xBricks; the unofficial word from my colleague at EMC is that this will be updated to support up to 16 xBricks. I have deployed the XMS as a VM. After all, why would anyone want a physical server these days except to run ESXi, right?

Physical Monitoring

Monitoring the physical devices in the XIO cluster is very easy. Click the “Hardware” link in the application and all the physical components of the cluster will be shown (including the InfiniBand switches), though since I only have one xBrick, that is all that’s shown.
Hover the mouse over a component and its health status will be shown. This goes down to the level of each disk in the 25-SSD DAE, and also the disks in the controllers, so all aspects can be seen either holistically or individually.

We can also check the back of the unit, including the cabling between the various components. If there is an InfiniBand switch, we can also check the cabling between the controllers and the InfiniBand switches.

That takes care of the physical monitoring of the components.

Alerts & Events

To look at the alerts and events on the XIO, click on the Alerts & Events link. This shows all the alerts that are currently unacknowledged on the XIO, as well as the various events that have occurred. We can also clear the logs while diagnosing a problem if the view gets filled up.

Log Management

It is possible to use SMTP, SNMP or syslog for alerting and log management. We can configure these in the Administration tab, under Notification.

To configure SMTP, select SMTP, enter the mail server details and click Apply.

To configure SNMP, enter the Community name and server details and click Apply.

To configure Syslog, enter the syslog server details and click Apply.

This concludes my 2-part introduction to XtremIO. Thank you for reading.

