As someone who has been directly involved in building and selling software-defined storage and Hyper-converged Infrastructure solutions in a previous life, I have seen first-hand the benefits that a server-based storage solution can bring. That experience is why I am convinced server-based storage has the potential to address key customer concerns around infrastructure cost, infrastructure security, and management complexity. But there is a real question as to whether existing solutions, like Hyper-converged Infrastructure, live up to the hype.

Hyper-converged Infrastructure Held So Much Promise, But Failed to Deliver
Hyper-converged Infrastructure sold the idea that IT organizations could deploy cheaper, simpler infrastructure by using standard server-hardware components, offering an easy setup and installation experience, and enabling VM administrators to self-manage their application clusters. But this turned out to be not as simple as promised. In reality, customers reported that these platforms were not easier to operate than the traditional 3-tier architecture, and that what was saved on acquisition cost was, at times, lost to increased operational complexity and cost. Let’s examine further.

  1. Hyper-converged Infrastructure Promised Reliable Performance
    Hyper-converged Infrastructure runs all services on the server, including storage services. This means the storage services and applications in Hyper-converged Infrastructure share the same CPU, memory, and network resources. The storage and management services alone can consume up to 25% of the server resources (depending on which data services you enable), resources that should be reserved for applications. Additionally, with a Hyper-converged Infrastructure model, careful coordination is required to guarantee reliability and adequate performance. As it turns out, this isn’t easy: processes on the server fight over resources, and admins need to watch closely which services they turn off, and when.
  2. Hyper-converged Infrastructure Promised Simplified Management
    Independent of where storage services run, the basic principles of reliability, protection, and performance still require a certain skillset. With Hyper-converged Infrastructure, the application administrator is put in charge of configuring and maintaining the storage services, a discipline that storage specialists spend years mastering. The required expertise doesn’t stop there: experts who can effectively troubleshoot, identify, and correct storage-related infrastructure issues must also become experts in the applications running alongside the storage software on the server. The mix and variety of industry-standard servers that run the storage software, and the plethora of hardware configuration options with their interoperability matrices, don’t make that any easier.
  3. Hyper-converged Infrastructure Promised Easy High Availability
    With Hyper-converged Infrastructure, when the server running the storage services requires a reboot (for patching or a software bug), storage services and customer data are temporarily unavailable from that server. During this time, new writes may come in, and once the server is back up, those writes need to be re-synchronized with the server that was temporarily down. This makes operating system patching a lot harder, as it requires careful operation and coordination between applications and storage. For example, storage needs to be drained and rebalanced away from a server before maintenance in order to minimize the risk of data loss while it is offline.
  4. Hyper-converged Infrastructure Promised Reduced Infrastructure Costs
    There are a few areas where Hyper-converged Infrastructure can be costly. The solution does make use of server-based storage media, which drives down hardware cost. However, maintaining the drives, their firmware, and interoperability is a burden at scale, as each server is managed and updated individually as a separate management domain. As mentioned earlier, Hyper-converged Infrastructure runs all storage services within the server and can consume up to 25% of its resources on enterprise data services. That translates to roughly one additional node for every three purchased, a 33% increase in infrastructure cost and up to a 25% increase in power cost (see the worked example after this list). This is especially worrisome for edge deployments where real estate is often limited. And while this materially affects the hardware cost, the markup on software licensing costs (which are often CPU-core based) can be even more significant. Not to mention that Hyper-converged Infrastructure makes you run a hypervisor that requires licenses and support, even if your (bare-metal or containerized) applications don’t need it.
  5. Hyper-converged Infrastructure Promised to be Optimized for Modern Applications
    Hyper-converged Infrastructure today is a virtualization (VMware and KVM) technology and is limited to those platforms. This constrains the applicability of Hyper-converged Infrastructure, and its optimizations, to just a few use cases in a customer’s IT infrastructure. A customer can’t use this approach for every application, so it becomes an island that covers only one portion of the infrastructure; customers still have to operate their legacy external array for the applications that don’t run on VMware or KVM. In addition, there is a new breed of applications, developed for the cloud, that operate at large scale and assume no hardware resiliency. They bring their own data redundancy and data optimizations with them, and they don’t like sharing. While Hyper-converged Infrastructure gives you one or two configuration options, none of them are optimized for applications that want direct-attached storage, and they force consumption of double the capacity and the associated resources.
  6. Hyper-converged Infrastructure Promised to be Optimal for Small and Large IT Organizations
    I touched on this earlier: Hyper-converged Infrastructure works well in small environments, not in large ones. Following best practices, customers configure similar nodes in terms of vendor, generation, and configuration. That, combined with the scalability limits every system in the datacenter is subject to, makes customers prefer deploying new clusters over growing existing ones. Eventually, this leads to cluster sprawl and management silos: each cluster is managed and monitored separately in its own dedicated administration console. Sure, there are cloud-analytics tools available for holistic reporting, but you still need to use the element manager of each system to make changes.
  7. Hyper-converged Infrastructure Promised to Eliminate Management Silos
    Management has to be done individually for each Hyper-converged Infrastructure cluster. Each is administered separately, updated separately, and monitored separately. If a customer has 10 to 100 Hyper-converged Infrastructure clusters, they will have dedicated people who do nothing but patch and maintain that infrastructure all year round. This quickly becomes an automation nightmare, as each system needs to be queried and managed individually, each with its own IP address, credentials, and so on (a minimal sketch after this list illustrates the point). Getting insights about this infrastructure and making informed changes is therefore hard: even a simple question like “Where in my infrastructure do I have bottlenecks?” requires heavy work, and performing corrective actions is harder still.
  8. Bonus! Hyper-converged Infrastructure Promised Better Recovery Capabilities
    Most vendors in the industry offer a built-in security or recovery solution, but the reality is that not all are created equal. For HCI solutions specifically, snapshots are used to take a copy of the application data so that, in the event of a ransomware attack, virtual machines can be restored quickly. The critical missing pieces, however, are protecting the server’s operating system (OS), which is essential to full physical infrastructure recovery, and shielding the drives that store the snapshots from malicious software such as ransomware. By failing to implement a zero-trust policy and to protect the OS along with the storage layer, enterprises can be stuck spending hours to weeks getting their infrastructure back to a fully operational state.
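
To make the cost math in point 4 concrete, here is a minimal sketch of the overhead arithmetic. It assumes the 25% resource-consumption figure cited above; the cluster size and the resulting node counts are purely illustrative, not measured data.

```python
# Illustrative overhead arithmetic for HCI storage services.
# Assumption (from the post): storage/management services consume ~25% of each
# server's resources, leaving only ~75% of every node for applications.

app_nodes_needed = 12        # nodes' worth of pure application capacity (hypothetical)
storage_overhead = 0.25      # fraction of each node consumed by data services

usable_fraction = 1 - storage_overhead
total_nodes = app_nodes_needed / usable_fraction   # nodes you actually have to buy

extra_nodes = total_nodes - app_nodes_needed
cost_increase = extra_nodes / app_nodes_needed

print(f"Nodes to purchase: {total_nodes:.0f}")     # 16
print(f"Extra nodes:       {extra_nodes:.0f}")     # 4, i.e. one extra per three
print(f"Cost increase:     {cost_increase:.0%}")   # 33%
```

The same ratio compounds with per-core software licensing, since the extra nodes carry licensed cores that do no application work.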
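
The per-cluster automation burden described in point 7 can be illustrated with a minimal sketch: every cluster exposes its own management endpoint and credential set, so even a trivial health query means looping over each one individually. The endpoint path, credential handling, and response format below are hypothetical, not any vendor’s actual API.

```python
# Hypothetical sketch: polling many independently managed HCI clusters one by one.
# There is no single query point; each cluster has its own IP address and
# credentials, and the endpoint path and JSON fields are made up for illustration.
import requests

clusters = [
    {"name": "cluster-01", "ip": "10.0.1.10", "user": "admin", "password": "..."},
    {"name": "cluster-02", "ip": "10.0.2.10", "user": "admin", "password": "..."},
    # ...one entry per cluster, potentially 10 to 100 of them
]

def cluster_health(cluster):
    """Query a single cluster's element manager for basic health (hypothetical API)."""
    resp = requests.get(
        f"https://{cluster['ip']}/api/v1/health",
        auth=(cluster["user"], cluster["password"]),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# N clusters means N separate calls, N credential sets, and N failure modes
# before you can even ask "where are my bottlenecks?".
for cluster in clusters:
    print(cluster["name"], cluster_health(cluster))
```

Anything that changes configuration, rather than just reading health, has to be repeated against every one of those element managers as well, which is why holistic reporting tools alone don’t remove the silo.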

These are just some of the reasons why IT organizations have disqualified Hyper-converged Infrastructure for key application deployments. Even though Hyper-converged Infrastructure has presented its own set of challenges, customers still buy into the idea that the right server-based storage solution, one that eliminates these costs and restrictions, can be a viable alternative to expensive external all-flash arrays. This is where Nebulon smartInfrastructure and the vision for self-service infrastructure come in. Learn why, here.

Nutanix Trade-in Program

Is your Nutanix system ready for a refresh? We’re running a program where, for every Nutanix server or server license traded in, Nebulon will provide one Nebulon SPU in return. Find out more, here.

Tobias Flitsch

Principal Product Manager

Tobias Flitsch has been working in enterprise data storage for more than 10 years. As a former solution architect and technical product manager, Tobias focused on various scale-up and scale-out file and object storage solutions, big data, and applied machine learning. At Nebulon, his product management focus is on understanding and solving customer challenges with large-scale infrastructure.