Sunday 25 December 2016

Approaching the Billion IOPS Datacenter

Are you still using traditional data storage systems? These systems are most often being pushed beyond their intended design. While users have always asked for more than raw capacity, they now demand multiple terabytes, sub-millisecond latencies, and thousands of IOPS per deployment – all across multiple coexisting workloads.

Today, blogs and white papers recognize these needs and predict a next-generation datacenter that delivers seamless scalability. New products fill our imaginations with buzzwords and promises to solve the cloud problem with container-based, microservice-delivered systems on commodity hardware. However, in all of this, there still lies a major problem that no one is addressing.

How to approach the billion-IOPS datacenter.

We’ve settled into a comfortable industry cadence, moving from kilobytes to megabytes, gigabytes, terabytes, and petabytes of storage, and from kilobit to megabit to multi-gigabit bandwidth.

But to date, no one has architected a data infrastructure that manages kilo-IOPS, mega-IOPS, and giga-IOPS with equal ease.

Why is a system needed to deliver these capabilities? Answer: the industry’s cloud problem. Pressure continues to mount to save time and money, with fewer employees and fewer resources to write, design, test, deploy, and scale applications up or down at speed. Applications must coexist peacefully without impeding one another, all while being deployed across multiple frameworks (Docker, OpenStack, VMware, etc.) and multiple platforms (containers, bare metal, and VMs). These frameworks, platforms, and applications divide the datacenter, and the resulting silos prevent both clear operations and efficient economics. Meanwhile, the success of the public cloud (Google, Azure, and Amazon Web Services) poses an ever greater challenge to data infrastructure and datacenter management. The only way around this is a universal data infrastructure that consolidates the current mess.

To address this challenge, we’re using these key elements:

  • An elastic data and control plane
  • API-based operational model
  • Standards-based protocols
  • The power of NVDIMMs/NVRAM/NVMe (soon 3D XPoint)


Elastic Data and Control

So, how do we get an elastic control and data plane that can attach storage resources to numerous applications?

First, we created floating iSCSI initiator/target relationships that allow applications and their storage to move freely across storage endpoints, dissolving topological rigidity. As an application moves, we can drag its storage along and manifest its endpoints on the right rack. A migrating app can be served from many locations at the same time, since we spread out all of the IOPS.
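The floating-endpoint idea can be sketched as a toy model. This is purely illustrative (the class and method names are mine, not Datera's API): a volume carries a set of active iSCSI target portals, and migration adds the new endpoint before retiring the old one, so the volume is briefly served from both racks.

```python
# Toy model of floating iSCSI endpoints (illustrative only, not Datera's API).

class FloatingVolume:
    """A volume whose iSCSI target endpoints follow the application."""

    def __init__(self, name, portals):
        self.name = name
        self.portals = set(portals)  # active target portals ("ip:port")

    def migrate(self, new_portal, drop_portal=None):
        # Bring up a target on the destination rack first, then optionally
        # retire the old endpoint -- during the overlap the volume is served
        # from both locations, spreading the IOPS.
        self.portals.add(new_portal)
        if drop_portal is not None:
            self.portals.discard(drop_portal)

vol = FloatingVolume("db-01", ["10.0.1.5:3260"])
vol.migrate("10.0.2.7:3260")                              # serve from both racks
vol.migrate("10.0.2.8:3260", drop_portal="10.0.1.5:3260") # retire the old rack
print(sorted(vol.portals))
```

The ordering matters: adding the destination endpoint before dropping the source is what keeps the application's storage reachable throughout the move.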

Next, we built an operational model that describes applications in terms of their service needs: performance, resiliency, affinity, and so on. During deployment, storage no longer has to be handcrafted as LUNs on pre-set RAID levels and other legacy attributes, which can be inflexible and cumbersome. With Datera, every volume has fluid characteristics from build-up to tear-down.
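A service-needs descriptor of this kind might look like the following. The field names here are hypothetical, chosen only to illustrate the idea of intent-based provisioning; they are not Datera's actual schema. The point is that the application states what it needs, and the system, not the operator, works out LUNs, RAID levels, and placement.

```python
# Hypothetical application-intent descriptor (field names are illustrative,
# not Datera's actual schema).
app_intent = {
    "name": "orders-db",
    "role": "production",
    "volumes": [
        {"name": "data", "size_gb": 500},
        {"name": "log", "size_gb": 50},
    ],
    "service_levels": {
        "min_iops": 50_000,           # performance target
        "max_latency_ms": 1.0,
        "replicas": 3,                # resiliency
        "placement": "spread-racks",  # affinity/anti-affinity hint
    },
}

# The control plane would translate this intent into concrete placement;
# here we only sanity-check the descriptor's shape.
assert app_intent["service_levels"]["replicas"] >= 2
print(app_intent["name"], "needs", app_intent["service_levels"]["min_iops"], "IOPS")
```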

Lastly, we don’t require deployment teams to spend time mapping elements of their storage system. Rather, we consolidate and deliver everything in one convenient architecture.

API-Based Operations Model

With Datera, developers can deploy storage without getting lost in the details. Simply describe your application needs (aka service levels) and roles (aka development, testing, production or QA, etc.), and then kick back and let Datera do the work for you.

Standards-Based Protocols

Datera is scalable and easy to use. It provides multi-tenant storage for containers, bare metal, VMs, and more. But some wonder: does it support every OS? When and where are drivers available for Linux, Windows, or even BSD?

No problem. If the OS supports iSCSI, Datera supports it. There’s no more hassle with proprietary drivers or client-side proxies. After all, who wants to track down a hundred instances of a driver?

The Power of NVDIMM/NVRAM/NVMe

Now, how do we reach giga-IOPS? Reaching this level of performance is useless unless you first resolve the three delivery challenges above.

Once we capture a wide array of applications by their intent, fit a range of IOPS into one storage cluster, and have a control plane that makes configuration and reconfiguration easy, we can add our last key ingredient: powerful NVDIMM and NVMe storage media to get us to giga-IOPS. Datera can deliver high performance and low latency automatically to any application on any platform. But just wait until you see what we can do soon with 3D XPoint.

Let’s just say, Datera is the new easy button for your next generation datacenter. At Datera, we:

  • Made it to the billion-IOPS datacenter
  • Started with hundred-IOPS disks
  • Then higher-IOPS SSDs
  • Then NVMe
  • Learned to scale better than the masses
  • Figured out the scalable architecture. While others made the mistake of proprietary drivers, we use standard iSCSI, supported by Cinder.
  • Didn’t waste time debating where to put the control plane. We figured out how to scale and distribute it.


HOW?

  • No proprietary driver
  • Central control plane
  • Auto-tiering
  • Support for iSCSI and iSER (RDMA)
  • For 4K random reads, 150,000 IOPS per machine at 600 MB/s (at that rate, you’d need more than 6,000 machines to reach a billion IOPS)
  • Our all-flash configuration offers 500,000 IOPS per machine, so 2,000 machines reach a billion IOPS. At roughly 35 machines per rack, that fits in 57 racks
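The machine counts in the two bullets above follow directly from the per-node figures; a quick back-of-the-envelope check:

```python
import math

TARGET_IOPS = 1_000_000_000   # the billion-IOPS datacenter

# Per-machine figures quoted above
disk_node_iops = 150_000      # 4K random read (150,000 x 4 KiB ~= 600 MB/s)
flash_node_iops = 500_000     # all-flash configuration

disk_nodes = math.ceil(TARGET_IOPS / disk_node_iops)
flash_nodes = math.ceil(TARGET_IOPS / flash_node_iops)

print(disk_nodes)                   # 6667 -- "more than 6,000 machines"
print(flash_nodes)                  # 2000
print(round(flash_nodes / 57, 1))   # 35.1 machines per rack across 57 racks
```

Note the bandwidth cross-check: 150,000 reads/s at 4 KiB each is about 600 MB/s, matching the throughput figure quoted for the disk nodes.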


WHAT?

  • Template based application deployment for VMs
  • Persistent container storage
  • Datera makes provisioning storage and standing up a large cluster simple and easy


WHO?

  • Datera – creator of Application-Driven Cloud Data Infrastructure


WHY?

  • Cloud carving, from few-hundred-IOPS applications to multi-million-IOPS data jobs
  • Hosting providers get the economics of standard software on commodity hardware to scale to their clients’ needs without operational frustration


CONCLUSION

  • If you’re interested in saving money, or in a way to “come home” from AWS or mirror your current AWS deployment, we can provide elastic, inexpensive, and scalable storage options. Others may too, but learn their limits first.
  • Others may have the right parts to build the next generation datacenter, but only we know how to build the best data infrastructure for professional datacenter management
