
The future of enterprise storage architectures

Alright, VMworld 2013 Barcelona is finally over and we are all back to our normal routines. To understand what really happened at VMworld you have to look back at that week in Barcelona: at first you are overwhelmed by all the new ideas, solutions and hype, and you feel like you need to implement every new thing you found at the booths and saw in the presentations. Then you return to work and notice that nothing has changed: your business has not stopped, you have not lost your job (hopefully; these are hard times we are living in) and even though there is so much you could do, you are doing pretty well as it is. That is good news. What you should gather from events like this are fresh ideas and challenges to your assumptions; then you filter out the hype and BS (this is something you have to do a lot, and then some), and hand-pick the best ideas that will bring your company, your team, your colleagues and, of course, you the most advantage.

So, what was cool and where was the hype? Well, as we have been witnessing for a little over a year now, "software defined _____" is the next big thing, and all the big players are buying every startup that has something "software defined" (not defying, hopefully) in their product portfolio. Some of you might remember that before this it was BYOD, and before that it was the Cloud itself. So here we go again. Last time I talked about VXLAN (the article can be found here: https://blog.ambientia.fi/2013/01/14/where-does-vxlan-belong/) and I was happy to see it grow more mature, and to realize that STT (Stateless Transport Tunneling) has gained momentum as well. That is good news for all of us. This time I would like to focus on storage and what is going on on that front at the moment. All I can say is that it is interesting as well. I hope you enjoy my high-level overview of things to come.

Now you are guessing I am going to talk about SDS (Software Defined Storage)? Well, actually, no, not really. Sorry. 🙂 Instead, I will be talking about options for accelerating and optimizing your storage for performance, availability, DR and other features (such as snapshotting, cloning, thin provisioning and so on).

So let's start with VMware; after all, they were the host of this great event. VMware brought a heavy-hitting combo to the table: VSAN and Flash Read Cache. VSAN is going to be a perfect fit for the SMB sector and branch offices: no need for a dedicated SAN/NAS solution (well, you still have to arrange your backups somehow). You can build a cluster of up to 8 hosts using a concept from object-based storage, where disks are attached to each host in a JBOD (Just a Bunch Of Disks) manner and software takes care of the availability and performance of your data (each VSAN disk group includes an SSD device for write/read acceleration). A really nice concept; if it added a few killer features (snapshots, backup, DR) and could scale beyond 8 hosts, this would be the solution of the year. For me, Flash Read Cache was the more interesting one: an option to accelerate reads within hosts makes a lot of sense (minimal latencies, anyone?). Sadly my excitement was dampened by the immaturity of the new feature and the requirements FRC has (manual configuration, no write acceleration and an Enterprise Plus license).
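To make the object-based idea concrete, here is a minimal Python sketch of how such a layer could place replicas of a VM's data across the hosts' local disk groups, so that losing a single host never loses the only copy. This is not VMware's actual algorithm; the host names, class and function are hypothetical illustrations only.

```python
# A toy model of object-based replica placement across a cluster of hosts,
# each contributing its local JBOD capacity. Purely illustrative.
import random

class Host:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.capacity_gb = capacity_gb  # pooled local disk capacity
        self.objects = []               # replicas stored on this host

def place_object(hosts, vm_name, size_gb, failures_to_tolerate=1):
    """Place 1 + failures_to_tolerate replicas, each on a different host."""
    replicas = failures_to_tolerate + 1
    candidates = [h for h in hosts if h.capacity_gb >= size_gb]
    if len(candidates) < replicas:
        raise RuntimeError("not enough hosts with free capacity")
    chosen = random.sample(candidates, replicas)
    for host in chosen:
        host.capacity_gb -= size_gb
        host.objects.append(vm_name)
    return [h.name for h in chosen]

cluster = [Host(f"esx{i}", capacity_gb=2000) for i in range(1, 9)]  # 8 hosts
print(place_object(cluster, "vm-disk-001", size_gb=40))  # e.g. ['esx3', 'esx7']
```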

All the storage vendors were presenting their new and shiny SSD and PCIe flash solutions. Some of them had software available for hosts, and even for guest OSs, to take advantage of caching. So this is the big thing in storage: caching at different levels, such as in the guest, in the host, or in storage accelerators between storage and hosts, is the way to improve your storage performance (throughput in MB/s, IOPS and/or latencies). And even when you have an idea of what you should do and where, there is still a plethora of options out there for achieving your goal. Let's take a look at what kinds of architectural options are available and what benefits each of them can provide.
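As a quick illustration of why caching pays off at any of these levels, here is a back-of-the-envelope calculation using the standard weighted-average latency formula. The latency figures are my own assumptions (roughly 100 µs for a local flash hit versus 5 ms for a read served by an array over the network), not measurements from any product.

```python
# Effective read latency as a weighted average of cache hits and misses.
def effective_latency_us(hit_ratio, cache_us, backend_us):
    return hit_ratio * cache_us + (1 - hit_ratio) * backend_us

# Assumed figures: ~100 us for a flash hit, ~5000 us for a backend read.
for hit_ratio in (0.5, 0.8, 0.95):
    print(f"hit ratio {hit_ratio:.0%}: "
          f"{effective_latency_us(hit_ratio, 100, 5000):.0f} us average")
```

Even a modest hit ratio pulls the average latency down dramatically, which is exactly why caching keeps appearing at every level of the stack.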

1. All the performance is within your storage system

This is the most common type of storage design. You have chosen your storage system based on the scalability the system can provide. You can improve performance by adding disks, but to get the maximum performance out of the system, you might have to add more disks than you would need for the actual data. To avoid buying unnecessary capacity, vendors have come up with different solutions, such as all-SSD (or mixed) disk shelves, SSD acceleration modules and memory (NVRAM) accelerator modules. This is fine until you run out of CPU cycles on your filer. A few vendors have come up with solutions for scaling seamlessly by adding more computing nodes (such as NetApp's Clustered ONTAP, HP StoreVirtual and EMC Isilon). The goal here is to dedicate different kinds of resources to specific tasks: more and more often, bulk capacity comes from large SATA disks while performance comes from combining the technologies presented above. This design is good because it keeps the storage where it has always been, and you don't need any new tools to get the performance your storage system offers. The disadvantage is the latency the network adds, no matter how fast your storage system and interconnects are. Of course this can be partially mitigated by using fast interconnects such as 10 Gbit (or 40 Gbit) Ethernet or 16 Gbit Fibre Channel.
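To see why you can end up buying more disks than your data needs, here is an illustrative sizing calculation. The per-disk IOPS and usable-capacity figures are assumptions for a generic 10k RPM SAS disk, not vendor numbers.

```python
# Spindle count needed for an IOPS target vs. a capacity target.
import math

def disks_needed(target_iops, target_tb, iops_per_disk, tb_per_disk):
    for_performance = math.ceil(target_iops / iops_per_disk)
    for_capacity = math.ceil(target_tb / tb_per_disk)
    return for_performance, for_capacity

perf, cap = disks_needed(target_iops=20000, target_tb=40,
                         iops_per_disk=180,  # assumed per 10k RPM SAS disk
                         tb_per_disk=0.9)    # assumed usable TB per disk
print(f"disks for IOPS: {perf}, disks for capacity: {cap}")
# -> disks for IOPS: 112, disks for capacity: 45
```

Performance, not space, dictates the spindle count here, which is why SSD shelves and accelerator modules are such an attractive shortcut.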

2. Adding a separate performance tier to your storage

Companies such as Avere (http://www.averesystems.com) provide storage accelerators that sit between your hosts and your storage systems. These accelerators work as a buffer (cache) that can accelerate reads and writes for all connected hosts, and while doing so they really give your storage systems a break and let them do what they do best: store data. The nice thing is that you can still take advantage of all the unique features of your storage systems (DR, snapshots, backup and so on). These systems can be blazing fast, as you can see at spec.org (http://spec.org/sfs2008/results/res2013q2/sfs2008-20130318-00218.html), and in addition to the plain performance gains, they can provide advanced features such as storage motion, a single namespace (multiple storage systems presented to the hosts as one), online expansion (nodes and storage) and so on. But of course you have to learn yet another system for providing storage services to your hosts, and that, I think, is the main disadvantage of this kind of design.
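The core mechanism behind "giving the array a break" is a write-back buffer: the accelerator acknowledges writes from its cache and flushes them to the backend later, in batches. Here is a toy sketch of that idea; the class and its behavior are illustrative, not Avere's implementation.

```python
# A toy write-back tier: acknowledge writes from cache, flush in batches.
class WriteBackTier:
    def __init__(self, backend_store, flush_threshold=4):
        self.backend = backend_store      # dict standing in for the array
        self.dirty = {}                   # block -> data not yet on the array
        self.flush_threshold = flush_threshold

    def write(self, block, data):
        self.dirty[block] = data          # acknowledged immediately from cache
        if len(self.dirty) >= self.flush_threshold:
            self.flush()                  # coalesced trip to the array

    def read(self, block):
        # Serve from cache if the freshest copy is still dirty.
        return self.dirty.get(block, self.backend.get(block))

    def flush(self):
        self.backend.update(self.dirty)   # one batched write to the array
        self.dirty.clear()

array = {}
tier = WriteBackTier(array)
for i in range(5):
    tier.write(i, f"data-{i}")
print(array)  # first four blocks flushed in one batch; block 4 still dirty
```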

3. Adding caching features directly to hosts (Server-Side Scale Out)

This is what VMware has done with their FRC (mentioned earlier) and PernixData (http://www.pernixdata.com/) with their FVP. In contrast to FRC (at the time), PernixData FVP is a really interesting product: it provides caching for both reads and writes, and does so while providing redundancy (1+1 or 1+2) and ease of management. The only thing you need to do is provision the caching devices per host; they are added to a cluster automatically and become available to all VMs on a host. This is interesting. Based on their performance testing, FVP can provide really low latencies and huge improvements in IO (remember that a good SSD can achieve anything from 30k to 90k IOPS, and with PCIe devices you can go as high as 600k IOPS!). And this acceleration is achieved on every host that has caching devices available. Best of all, this is done in a manner that does not affect your ability to vMotion VMs. The only downside here is the licensing requirements (you need Enterprise Plus from VMware and on top of that a license from PernixData); not the cheapest combo if you ask me. But the concept is a killer. This definitely is something to think about.
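The redundancy policy is the clever part of host-side write-back caching: a write is acknowledged only once it sits on the local flash device and on one or two peer hosts, so an unflushed write survives a host failure. Here is a minimal sketch of that "1+1 / 1+2" idea; the names and structure are hypothetical, not PernixData's implementation.

```python
# Write locally plus to N peer hosts before acknowledging the VM's write.
class HostCache:
    def __init__(self, name):
        self.name = name
        self.flash = {}                   # local SSD/PCIe flash device

def replicated_write(local, peers, block, data, replicas=1):
    """1+replicas policy: local copy plus `replicas` peer copies, then ack."""
    if len(peers) < replicas:
        raise RuntimeError("not enough peers for the redundancy policy")
    local.flash[block] = data
    for peer in peers[:replicas]:
        peer.flash[block] = data          # network copy to peer flash
    return "ack"                          # only now is it safe to acknowledge

esx1, esx2, esx3 = HostCache("esx1"), HostCache("esx2"), HostCache("esx3")
print(replicated_write(esx1, [esx2, esx3], block=7, data=b"x", replicas=2))
print(sorted(h.name for h in (esx1, esx2, esx3) if 7 in h.flash))
```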

So what did you get out of this? Hopefully a bunch of new ideas on how to design and arrange your current and future storage systems. Good luck hunting those IOPS and MB/s while maintaining control of your data!

Sincerely yours,
Matias
