Wednesday 25 January 2017

Goodbye Silos – Hello Automated, Virtual Cloud Data Storage Environments

Modern-day enterprises are often trapped between two worlds: 1) the evolving world of cloud data storage, agile development, and unlimited scalability offered by next-generation data storage companies, and 2) the traditional world of fixed software and hardware architectures combined with rigid application silos.

The old approach of separating data storage into semi-automated application silos takes us back to pre-industrial production methods.

Today, many IT owners feel the pressure to switch to flexible, virtual data storage and cloud environments with leading data storage companies. The problem is that most critical business applications live in legacy systems and can’t simply be discarded. This dilemma is compounded by pressure for scale, speed, and low cost, and it collides with customers who expect simple, instant, and seamless self-service experiences.

The traditional model of building dedicated infrastructure, with its value chain of planning, procurement, deployment, delivery, operations, and obsolescence at every step, is no longer needed, and its economic viability is disappearing.

The future needs of a data storage center can only be met with highly automated, standardized platforms that embrace all applications and allow fast deployment across all data centers and clouds. The good news is that the software-defined data center has improved efficiency and produced several emerging solutions for orchestrating applications and network connectivity.

Yet to offer a seamless infrastructure experience, data needs to be orchestrated together with applications and networks. To deliver tailored performance, isolation, security, and protection with a tightly fitted cost profile for each individual application, the data infrastructure must be “molded” to specific business goals.

To fulfill this promise, we set two goals:
  1. We believe applications know what they want. This should automatically drive the infrastructure to serve their needs. To deliver optimal price and performance, we envisioned continuously balancing application intents with infrastructure capabilities.
  2. We believe the convenience of software-defined storage shouldn’t be burdened with uninspiring performance and latencies. Here we envisioned a “frictionless” data plane that performs on par with leading traditional storage systems.
In short, we imagined a highly efficient data center that automatically composes itself to meet the exact needs and intents of each application. This approach mirrors modern automated production methods, such as the Tesla factory.

Goal #1 – Comprehensive End-to-End Automation

To achieve Goal #1, we built a new type of data infrastructure that continuously and automatically composes itself from application intents. It:
  • Understands business intents (SLAs) by capturing application needs (IOPS, availability, throughput, etc.) in application profiles, manifests, or templates.
  • Learns infrastructure capabilities through self-describing data nodes and by observing operational constraints in the data center, such as network topology, latency profiles, availability zones, and power domains.
  • Contains a sophisticated AI-based policy engine that continuously optimizes all elements of the underlying data fabric to deliver the best performance and price for each application, tenant, or user.
With this, we moved from producing rigid, isolated storage systems to delivering a continual stream of data infrastructure tailored to each application.
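
To make this concrete, here is a minimal Python sketch of how an application intent might be balanced against self-described node capabilities. Every name and field below (AppIntent, DataNode, place, and so on) is an illustrative assumption, not Datera’s actual implementation:

    from dataclasses import dataclass

    @dataclass
    class AppIntent:
        """What the application wants (captured from its profile/template)."""
        name: str
        min_iops: int          # required IOPS floor
        max_latency_ms: float  # acceptable latency ceiling
        replicas: int          # availability requirement

    @dataclass
    class DataNode:
        """What a self-describing data node reports it can deliver."""
        name: str
        iops_headroom: int   # remaining IOPS capacity
        latency_ms: float    # observed media latency
        cost_per_gb: float   # relative cost profile

    def eligible(node: DataNode, intent: AppIntent) -> bool:
        """A node qualifies only if it can honor the intent's SLA."""
        return (node.iops_headroom >= intent.min_iops
                and node.latency_ms <= intent.max_latency_ms)

    def place(intent: AppIntent, nodes: list[DataNode]) -> list[DataNode]:
        """Pick the cheapest nodes that still satisfy the intent, one per
        required replica: best price at the required performance."""
        candidates = sorted((n for n in nodes if eligible(n, intent)),
                            key=lambda n: n.cost_per_gb)
        if len(candidates) < intent.replicas:
            raise RuntimeError(f"cannot satisfy intent {intent.name!r}")
        return candidates[:intent.replicas]

    nodes = [
        DataNode("hybrid-1", iops_headroom=50_000, latency_ms=2.0, cost_per_gb=0.10),
        DataNode("flash-1", iops_headroom=200_000, latency_ms=0.5, cost_per_gb=0.35),
        DataNode("flash-2", iops_headroom=150_000, latency_ms=0.6, cost_per_gb=0.30),
    ]
    oltp = AppIntent("oltp-db", min_iops=100_000, max_latency_ms=1.0, replicas=2)
    print([n.name for n in place(oltp, nodes)])  # ['flash-2', 'flash-1']

Rerunning this placement whenever profiles or node telemetry change is, in miniature, the continuous balancing of application intents against infrastructure capabilities described above.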

Goal #2 – Compelling Performance

To achieve Goal #2, we pushed data plane technology and implementation in a number of ways:
  • Created a direct-on-drive, log-structured key/value store that manages any underlying storage device while keeping the rest of the Linux kernel out of the I/O path.
  • Used NVDIMMs or NVRAM to achieve memory-speed write latencies, anticipating the era of persistent memory.
  • Used remote direct memory access (RDMA) protocols to bypass the networking stack. As a member of the NVMe-oF standards committee, we’re bringing next-generation, low-latency I/O protocols to market together with Intel, HPE, Micron, and Mellanox.
  • Contributed key parts to the Linux storage subsystem and optimized large parts of the Linux I/O path.
  • Invented a lock-less, low-latency distributed data coherence protocol.
In total, we created a flexible, efficient scale-out data platform that surpasses the performance of traditional scale-up storage arrays, allowing users to move seamlessly to future low-latency fabrics and storage media.
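
To illustrate the first bullet above, here is a toy Python sketch of the log-structured key/value idea: every write is appended sequentially, and an in-memory index maps each key to the offset of its latest record. This is a simplified assumption of the general technique; a real direct-on-drive store writes to raw devices and bypasses most of the kernel I/O path, whereas this toy uses an ordinary file:

    import os
    import struct

    class LogKV:
        """Toy append-only key/value log with an in-memory index."""
        HEADER = struct.Struct(">II")  # key length, value length

        def __init__(self, path: str):
            self.log = open(path, "a+b")
            self.index: dict[bytes, int] = {}
            self._rebuild()

        def _rebuild(self):
            """Replay the log so the index points at the latest record per key."""
            self.log.seek(0)
            offset = 0
            while header := self.log.read(self.HEADER.size):
                klen, vlen = self.HEADER.unpack(header)
                key = self.log.read(klen)
                self.log.seek(vlen, os.SEEK_CUR)  # skip the value bytes
                self.index[key] = offset
                offset += self.HEADER.size + klen + vlen

        def put(self, key: bytes, value: bytes):
            """Append-only write: old records are superseded, never overwritten."""
            self.log.seek(0, os.SEEK_END)
            self.index[key] = self.log.tell()
            self.log.write(self.HEADER.pack(len(key), len(value)) + key + value)
            self.log.flush()

        def get(self, key: bytes) -> bytes:
            self.log.seek(self.index[key])
            klen, vlen = self.HEADER.unpack(self.log.read(self.HEADER.size))
            return self.log.read(klen + vlen)[klen:]

    kv = LogKV("/tmp/logkv-demo.log")
    kv.put(b"volume-1", b"replica-map-a")
    kv.put(b"volume-1", b"replica-map-b")  # supersedes the earlier record
    print(kv.get(b"volume-1"))             # b'replica-map-b'

The payoff of this layout is that every write becomes a sequential append, the access pattern that flash and upcoming persistent memories reward, while reads cost a single index lookup.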

The Result

The final result? The Datera application-defined automatic data infrastructure:
  • Delivers 25 GbE line-speed throughput and sub-millisecond latencies, in line with traditional storage systems.
  • Makes hybrid, all-flash, or future persistent-memory technologies a dynamic runtime choice, eliminating extensive planning, deployment, and obsolescence cycles.
  • Creates data infrastructure on demand, scaling with each application’s needs and eliminating slow manual provisioning, operations, and service cycles.
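
As a purely hypothetical illustration (the endpoint, URL, and field names below are assumptions, not Datera’s documented REST API), intent-driven provisioning shrinks what used to be a ticket to a storage team into a single self-service request:

    import json
    import urllib.request

    # The application states its intent; the infrastructure composes a
    # matching volume. All names here are placeholders for illustration.
    intent = {
        "app": "checkout-service",
        "template": "oltp-gold",   # named SLA profile, defined once
        "size_gb": 500,
        "replicas": 3,
        "max_latency_ms": 1.0,
    }

    req = urllib.request.Request(
        "https://datera.example/api/app_instances",  # placeholder URL
        data=json.dumps(intent).encode(),
        headers={"Content-Type": "application/json"},
    )
    # urllib.request.urlopen(req)  # one call; no manual LUN mapping or zoning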
We imagined a completely new class of data infrastructure for the demands of a new era, and in doing so we transformed the way data infrastructures are built and operated. We moved from a manual infrastructure model to an automated model focused on the application and user, all with better performance and price elasticity. We’re on the path to reshaping how IT owners work: they gain the freedom to focus on creating business value while our application-defined data infrastructure does the hard work.

Now is the time to join the new data era in cloud data storage. When you enable your automatic data infrastructure with Datera, you’ll experience a profound difference in your own data-driven business. 
