
New Generation Storage Arrays

I recently read a blog by Josh Odgers about hardware support contracts, specifically whether 24×7 4-hour onsite support for storage controllers is still required (link: http://www.joshodgers.com/2014/10/31/hardware-support-contracts-why-24×7-4-hour-onsite-should-no-longer-be-required/). So I wanted to share my experience with storage controller availability and how modern storage systems provide availability as well as performance. I have used examples of a system I have worked on extensively (XtremIO) and of other vendors' technology that I have read up on (EMC VMAX and SolidFire).

Nutanix has a very good "shared nothing" architecture. But not everyone uses Nutanix (no judgement here), and I have also been told that companies mix and match vendors (whoever that is :P). Josh raises a couple of very good points about the legacy storage architecture of having two storage controllers processing various workloads in today's high-performance, low-latency world.
There are a few exceptions to the rule above: EMC VMAX, EMC XtremIO and SolidFire. All of these storage systems have more than two storage controllers, and they all provide scale-up and scale-out architectures. (Note: if I have missed any other vendors with more than two storage controllers, let me know and I will include them in the post.) I think the new-age storage systems should not be called SANs, because they are nothing like the age-old architecture of SANs that provided just shared storage. These days storage systems do so much more than provide shared storage.
VMware acknowledges this, and hence it is introducing vVols, which provide a software definition of the capabilities offered by the storage systems. Hyperconverged is easily the latest technology, and in some cases it is definitely superior to legacy SANs, but it is not going to replace everything just yet.
Let's delve deeper into the process. Let's take the legacy storage architecture and see how it behaves in scale-out/up and failure scenarios.
Let's say a new SAN has been provisioned for a project that has a definite performance requirement and so has been commissioned with limited disk. This is an active-active array, so both controllers are equally used.

[Figure: Storage 1]

As you can see, the IOPS requirement is being met 100%, and CPU and memory have an average utilisation of 20-25%. This carries on for a few months, until another project starts with more IOPS requirements and so more disk is added (traditional arrays: more IOPS = more disk).

[Figure: Storage 2]

As you can see, the average utilisation of the storage controllers in the SAN has spiked to about 45-50% on both controllers. After a few more months, another project kicks off, or the current project's scope is expanded to include more workloads; you can see where this is going. Let's say the controllers are not yet under stress and are happily pedalling along at an average 70% utilisation.

BANG! One of the storage controllers goes down because someone, or something, went wrong.

[Figure: Storage 3]

What has happened here is that, until the faulty part is replaced, the IOPS requirement can't be met by the single surviving controller, and its CPU and memory utilisation spike so high that processing any more I/O becomes impossible.
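To make the failure scenario concrete, here is a minimal back-of-the-envelope sketch (the utilisation figures are made up for illustration, not taken from any vendor) of what happens to the surviving controller in an active-active pair when its peer dies:

```python
# Back-of-the-envelope model of an active-active, dual-controller SAN.
# The utilisation figures are illustrative only, not vendor numbers.

def survivor_utilisation(per_controller_util: float) -> float:
    """Load on the surviving controller once its peer has failed.

    With the workload shared equally, the survivor has to absorb the
    whole load, i.e. roughly double its previous utilisation.
    """
    return per_controller_util * 2

for util in (0.25, 0.50, 0.70):
    survivor = survivor_utilisation(util)
    state = "saturated - IOPS demand can no longer be met" if survivor > 1.0 else "still coping"
    print(f"per-controller load {util:.0%} -> survivor at {survivor:.0%} ({state})")
```

At 25% per controller a failure is a non-event; at 70% the survivor would need 140% of a controller, which is exactly the situation in the picture above.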

This is where the new-age storage systems (see, I am not calling them SANs anymore) have the upper hand. Let me explain how.

Let's take EMC XtremIO as an example; each XtremIO node consists of the following components (you can also read about XtremIO in my previous blog posts).

XtremIO is made up of four components: the X-Brick, battery backup units, storage controllers and an InfiniBand switch. The InfiniBand switch is only used when there is more than one X-Brick. Each X-Brick consists of a disk array enclosure (25 eMLC SSDs) with a total of either 10 or 20 TB of usable storage. That is before the deduplication and compression algorithms kick in and push the effective space to close to 70 TB on the 10 TB X-Brick and 48 TB on the 20 TB X-Brick.
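As a rough illustration of how usable capacity turns into logical capacity, here is a small sketch; the data-reduction ratios are assumptions I've picked purely to match the 10 TB-to-70 TB example above, since real ratios depend entirely on the workload:

```python
# Sketch: logical capacity = physical usable capacity x data-reduction ratio.
# The combined ~7:1 dedupe+compression ratio below is an assumed figure,
# chosen only to match the ~70 TB-from-10 TB example in the text.

def logical_capacity_tb(usable_tb: float, dedupe_ratio: float, compression_ratio: float) -> float:
    return usable_tb * dedupe_ratio * compression_ratio

print(logical_capacity_tb(10, dedupe_ratio=3.5, compression_ratio=2.0))  # -> 70.0 TB logical
```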

 

[Figure: Xio 1]

You can't add more disk to a node; if you want more disk, you HAVE to buy another XtremIO node and add it to the XMS cluster. When you add more than one node to the cluster, you also get an InfiniBand switch through which all the storage controllers in the storage system communicate.

[Figure: Xio 2]

The picture above shows the multiple controllers in a two-node XtremIO cluster (picture from Jason Nash's blog). This can be scaled out to six-node clusters, with no limit on how many clusters you can deploy.

Each storage controller has dual 8-core CPUs and 256 GB of RAM; by any measure this is a beast of a controller. All of the system's metadata is stored in memory, so there is never a requirement to spill metadata onto the SSDs. In the traditional approach, when the storage is expanded with multiple disk trays the metadata is also written to the spinning disk; this not only makes metadata reads and writes slow, it also consumes additional backend I/O. When there is a requirement for thousands of IOPS, the system digs itself deeper, consuming ever more IOPS just to read and write metadata.
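A toy model of that metadata overhead might look like the sketch below; the "extra metadata I/Os per host I/O" figure is purely an assumption to show the shape of the problem:

```python
# Toy model: every host I/O that has to fetch or update metadata on
# spinning disk burns extra backend I/O on top of the data I/O itself.
# The 0.5 metadata-I/Os-per-host-I/O figure is an assumption for illustration.

def backend_iops(host_iops: int, metadata_ios_per_host_io: float) -> float:
    return host_iops * (1 + metadata_ios_per_host_io)

host_iops = 50_000
print(backend_iops(host_iops, 0.5))  # metadata on spinning disk -> 75,000 backend IOPS
print(backend_iops(host_iops, 0.0))  # metadata entirely in controller RAM -> 50,000 backend IOPS
```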

So let's take the example from above, where more IOPS were required during the second stage of the project lifecycle. If space was also a constraint, an additional XtremIO node will double the IOPS available as well as provide an additional 70 TB of logical capacity.
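Scaling is therefore roughly linear in the number of X-Bricks. A quick sketch using the per-node figures quoted in this post (treat them as nominal numbers, not a sizing guarantee):

```python
# Linear scale-out sketch: each extra X-Brick adds its own controllers,
# IOPS and logical capacity. Per-node figures are the nominal numbers
# quoted in this post, not a sizing guarantee.
IOPS_PER_NODE = 250_000      # random workload, 50% read (figure quoted later in this post)
LOGICAL_TB_PER_NODE = 70     # 10 TB X-Brick after assumed data reduction

def cluster_totals(nodes: int):
    return nodes * IOPS_PER_NODE, nodes * LOGICAL_TB_PER_NODE

for nodes in (1, 2, 4, 6):
    iops, capacity = cluster_totals(nodes)
    print(f"{nodes} node(s): ~{iops:,} IOPS, ~{capacity} TB logical")
```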

[Figure: Storage 4]

Even though there is still an effect on the surviving storage controllers, the IOPS requirement is always met by the new-age storage systems. This is partly due to specific improvements in the way metadata is accessed in these systems. Let's look at the way metadata is accessed in traditional systems.

[Figure: Metadata 1]

As you can see, metadata is not just in the controller memory but also dispersed across the spinning disks. Regardless of how fast spinning disk is, it is always going to be slower than getting metadata from RAM.

Now let's look at how metadata is distributed in XtremIO.

[Figure: Metadata 2]

As storage requirements expand, more controllers are added, and the metadata is held in their memory.
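Conceptually this works because each piece of metadata has a well-defined owner: a content fingerprint can be hashed to decide which controller's RAM holds its entry. The sketch below illustrates the general idea only; XtremIO's actual placement logic is its own, and the function here is a simplification I've made up:

```python
# Simplified illustration of spreading metadata (content fingerprints)
# across the in-memory tables of several controllers by hashing.
# This is the general idea only, not XtremIO's actual placement algorithm.
import hashlib

def owning_controller(fingerprint: str, num_controllers: int) -> int:
    """Map a content fingerprint to the controller whose RAM holds its metadata entry."""
    digest = hashlib.sha1(fingerprint.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_controllers

for block in ("block-A", "block-B", "block-C", "block-D"):
    print(block, "-> controller", owning_controller(block, num_controllers=4))
```

Add a controller and the fingerprints simply map across more in-memory tables, which is why metadata never has to spill onto the SSDs as the system grows.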

Other Storage Systems

If we move away from XtremIO and take EMC VMAX as an example, each VMAX 40K can be scaled out to eight engines. Each of these engines has 24 cores of processing power and 256 GB of RAM, so a fully scaled system has about 2 TB of RAM and 192 cores across all eight engines. There is a maximum of 124 FC front-end ports across the eight engines.
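Those fully scaled figures are just the per-engine specs multiplied out:

```python
# The fully scaled VMAX 40K figures above are just per-engine specs multiplied out.
engines = 8
cores_per_engine = 24
ram_gb_per_engine = 256

print(engines * cores_per_engine)            # 192 cores across 8 engines
print(engines * ram_gb_per_engine / 1024)    # 2.0 TB of RAM across 8 engines
```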

Another example of a very good storage system is SolidFire. SolidFire has a scale-out architecture across multiple nodes and scale-up options for specific workloads. The nodes start at about 64 GB of RAM and go all the way up to 256 GB of RAM.

So here we go: traditional SANs are few and far between today. There are all kinds of companies who, for various reasons, use all kinds of vendors. While #Webscale is taking off quite well, storage systems still have a place in the datacenter. And as long as storage companies keep re-inventing their storage systems, they will remain in the datacenter alongside #Webscale.

PS: Before anyone says zoning is not mentioned, I will tackle how to zone in the next blog post, or maybe after I work out how to explain zoning. I am not usually involved in zoning but will find out and blog about it as well.

For actual performance white papers, please visit the appropriate vendor websites.

Storage Design with Flash Storage: One Big LUN vs Multiple Smaller LUNs

Traditionally, when storage design was done for VMware environments, a lot of criteria had to be considered. These included:

  • Number of Drives
  • Speed of Drives
  • Number of IOPS per drive
  • RAID penalty
  • Write Penalty
  • Read Penalty
  • Scalability of the Array

But with the advent of all-flash arrays (XIO, Pure, Nimble, Violin, etc.), a lot of these parameters no longer constrain storage design for VMware environments. Each of the AFA offerings has its own RAID-like technology, which pretty much guarantees very high resiliency against failure and data loss. Also, with the newer kind of flash drives being used (eMLC, from memory), consumer-level SSDs are no longer found in AFAs. So now that the physical limitations of the drives have been eradicated, let's look at the next steps.
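For context, this is the sort of spindle-count arithmetic the criteria above drive in a traditional design; a rough sketch with illustrative numbers, using the standard RAID write penalties (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6) and a nominal per-drive IOPS figure:

```python
import math

# The traditional spindle-count calculation that all-flash arrays largely
# make unnecessary. Write penalties: RAID 10 = 2, RAID 5 = 4, RAID 6 = 6.
# The per-drive IOPS figure is a nominal 15k-rpm number, for illustration.

def drives_needed(host_iops: int, read_pct: float, raid_write_penalty: int, iops_per_drive: int) -> int:
    write_pct = 1 - read_pct
    backend_iops = host_iops * read_pct + host_iops * write_pct * raid_write_penalty
    return math.ceil(backend_iops / iops_per_drive)

# 10,000 host IOPS, 70/30 read/write, RAID 5, ~180 IOPS per drive
print(drives_needed(10_000, read_pct=0.7, raid_write_penalty=4, iops_per_drive=180))  # ~106 drives
```

On an AFA you simply never do this maths; the array's data protection scheme and the flash media take it off the table.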

Queue Depth: 

Queue depth is a very misleading constraint; there are queue depths at each level: LUN, processor and array. Each physical entity (or not-so-physical, in the case of CNAs and LUNs) has an individual queue depth. How do we address this shortcoming? If there is a lot of I/O being thrown at the array and it is not able to process it, the queue is going to fill up.

If the host parameters are not set properly, the host will start to fill up the HBA queue depth across the multiple LUNs it has access to. Some of these parameters can be changed to ensure that the ESXi VMkernel handles I/O differently when using AFAs.
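A quick way to see how the levels interact is to compare the sum of the per-LUN queues against the HBA queue; the depths below are placeholders, so check your actual HBA and array documentation for the real limits:

```python
# Sanity check: if the sum of the per-LUN queues can exceed the HBA queue,
# the HBA queue will fill first and become the bottleneck.
# The queue depths used here are placeholders, not recommendations.

def hba_oversubscription(lun_queue_depth: int, num_luns: int, hba_queue_depth: int) -> float:
    """Ratio > 1.0 means the LUN queues can collectively overrun the HBA queue."""
    return (lun_queue_depth * num_luns) / hba_queue_depth

ratio = hba_oversubscription(lun_queue_depth=32, num_luns=20, hba_queue_depth=512)
print(f"oversubscription ratio: {ratio:.2f}")  # 1.25 -> the HBA queue fills before the LUN queues do
```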

I've previously mentioned some parameters that need to be changed for XtremIO, and I'd guess the same applies to all the AFAs out there. Using ESXi with an AFA and not changing the advanced parameters to take advantage of it is like buying a Ferrari to drive around the Melbourne CBD: it only proves that you are an idiot, restricted by the 'speed limit'.

OK, but what about LUNs:

Now to the original question: one big LUN vs multiple smaller LUNs. Each choice has its own advantages and disadvantages. For example, choosing one big LUN gives you the cumulative IOPS available across multiple storage nodes in an AFA. So if one node provides 250,000 IOPS (random workload, 50% read), then adding another node takes that to 500,000 IOPS. That single node provides more IOPS than a fully scaled and filled VNX 7500. That's a lot of horsepower if you ask me.

The same can be said for multiple smaller LUNs: each LUN created is spanned (at least in XtremIO, AFAIK) across all the available nodes in the cluster. So either way you still get the benefit of an insane amount of IOPS.

Other Considerations:

There are other considerations to take into account when designing storage for VMware. To start with, the workload is a good one. Depending on the workload that is consuming all of these resources, you might want to provide a single big LUN, or the application architecture might force you to use multiple smaller LUNs. One of my customers' SQL teams is convinced that, even on an AFA, the data and log LUNs have to be separated onto different 'spindles'. I explained the lack of spindles and the redundancy/resiliency/availability aspects of an AFA. After a long discussion, it was agreed that there would still be multiple LUNs created, but all of them on the same two-node XIO array, not across the other two 2-node XIO arrays that are available.

What about DR/SRM:

The DR/SRM strategy doesn't need to change significantly. I have always believed in providing the optimal number of LUNs for SRM for a mixed workload: some applications might require a separate LUN (for a vApp, for example), while others are happy to co-exist. It also comes down to the application owners; some are adamant that their workloads should be kept separate, while others are happy to co-exist on the same LUN as long as their RTO/RPO requirements are met.

So in short, the answer is "IT DEPENDS". But my vote goes to multiple medium-sized LUNs (10-12 TB) :). This provides the advantages of both big and small LUNs.
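If you want to sanity-check what that vote means in practice, here is a trivial sketch that works out how many medium-sized LUNs a given amount of capacity breaks into; the total capacity and LUN sizes are example figures only:

```python
import math

# Trivial sketch: how many medium-sized LUNs does a given capacity break into?
# The 140 TB total and the LUN sizes below are example figures only.

def lun_count(total_capacity_tb: float, lun_size_tb: float) -> int:
    return math.ceil(total_capacity_tb / lun_size_tb)

for lun_size in (10, 12):
    print(f"{lun_size} TB LUNs for 140 TB of workloads -> {lun_count(140, lun_size)} LUNs")
```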

What's your say?

I'd appreciate comments about this on the blog rather than on Twitter, but then again both are social media, so it doesn't matter.
