
Depending on who you talk to, IT administrators can face any number of challenges with their on-premises data storage environments. Most, however, boil down to these three basics: Scale, Management, and Cost.

But as most know, these problems aren’t new. They are a set of issues that reared their ugly heads throughout the evolution of external storage arrays and outlived decades of innovation in the field of data storage. And today, over 40 years after the introduction of the first storage array, these issues still plague IT departments – and they may have become worse. With numerous advances in computing, and with networking leapfrogging ahead through cloud-managed innovation for the datacenter, storage, not for the first nor the last time, is late to the party. Why?

Dynamic scaling of infrastructure is easy, scaling storage is not

If there is one thing that is certain in the age of the digital enterprise, it is uncertainty. The development and implementation of new business models is constantly accelerating. This steadily increases the challenges and pressure on IT to react, adapt and support lines of business more quickly with right-sized infrastructure and to set them up for success.

Compute and networking have a proven track record of being able to dynamically scale up and down as needed, but it still is a burden to achieve the same elasticity with storage. At the core, compute and networking are used as ephemeral data processors while the primary purpose of storage is to, well, store data.

Scaling or upgrading from one ephemeral technology to the next may be tricky, but since the standardization of protocols it has been fairly straightforward, as data simply passes through them. Altering storage, even when using standard interfaces, is a lot riskier. If you make a mistake when moving data within or between storage technologies, you jeopardize that data and the survival of the business. Not to mention that data movements take time to complete. This makes infrastructure and architectural decisions for and around storage hard, especially since most array options aren’t cheap and the wrong choice may eat up your IT budget.
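To put the time factor into perspective, here is a minimal back-of-the-envelope sketch; the capacity, throughput and overhead figures are illustrative assumptions, not measurements from any particular array:

```python
# Back-of-the-envelope estimate of a storage migration window.
# All figures are illustrative assumptions, not vendor data.

capacity_tb = 500                  # data to move, in terabytes
throughput_gbps = 10               # sustained migration throughput, in gigabits per second
efficiency = 0.7                   # assume ~30% lost to verification, retries and throttling

capacity_bits = capacity_tb * 1e12 * 8
seconds = capacity_bits / (throughput_gbps * 1e9 * efficiency)
days = seconds / 86_400

print(f"Moving {capacity_tb} TB at ~{throughput_gbps} Gb/s "
      f"(with overhead) takes roughly {days:.1f} days")
```

Even under these fairly generous assumptions, the migration window stretches across the better part of a week, before change freezes and business hours even enter the picture.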

In an effort to make storage simpler, it became more complex

In order to quickly react to evolving business needs, storage architects are looking at storage solutions with minimal complexity in addition to the right performance and capacity scale. This noble effort sparked the adoption of scale-out storage technologies, and a migration away from scale-up storage arrays. CIOs no longer have to make a difficult buying decision for a system that is (hopefully 🤞) adequate to sustain the uncertain demands of a project over a period of 3-5 years, and architects are relieved of the pressure of precise forecasting.

Scale-out storage infrastructure pools storage resources into a single shared resource and promises dynamic allocation and release of storage for many projects and applications. It’s a “one size fits all” storage solution with a “single pane of glass” that uses modular building blocks to grow with the business without over-provisioning and unnecessary spend on unutilized resources.

But unfortunately, what enterprises are building very quickly becomes a “one size fits nothing” storage solution that is complex to operate and a “single pain in the 🙊”. The issue lies in the fact that it needs to work with hundreds or thousands of heterogeneous workloads with non-uniform storage requirements instead of just one.

Simple interoperability, maintenance, performance and data protection issues of the past, where there was one system, running one application, managed by one person you could call at any time, suddenly mutated into complex issues that require coordination between hundreds of people. Not to mention that there is now a plethora of shared-nothing applications on the market that don’t play nicely with shared storage systems.

AIOps and analytics from vendors don’t solve this fundamental issue. As an example, your AIOps tool may blacklist a particular firmware release for your array because it is incompatible with one operating system, while at the same time another operating system using the same array is in desperate need of a fix that ships in that very release – a classic deadlock situation in a shared storage architecture.
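To make the deadlock concrete, here is a minimal sketch; the host names, firmware versions and compatibility rules are hypothetical, but the structure of the conflict is the same:

```python
# Minimal sketch of a shared-array firmware "deadlock".
# Host names, firmware versions and compatibility rules are hypothetical.

host_constraints = {
    "os-a-cluster": {"4.1", "4.2"},   # release 4.3 is blacklisted by AIOps for OS A
    "os-b-cluster": {"4.3"},          # OS B needs a fix that only ships in 4.3
}

# The shared array can only run releases that every attached host tolerates.
viable = set.intersection(*host_constraints.values())

if viable:
    print(f"Viable firmware releases: {sorted(viable)}")
else:
    print("No firmware release satisfies every host sharing this array: deadlock.")
```

With a dedicated array per workload, each allowed set stands on its own; it is the intersection across many tenants that ends up empty.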

Simplicity of operations is important for most organizations, and while the essence of the scale-out architecture is the right approach, the fundamental idea of one large, shared storage system breaks this notion. And at some point, infrastructure teams inevitably look to alternative approaches and move away from traditional arrays to combat the continuing growth of application diversity and data, along with the crippling costs of enterprise storage.

Hyper-Converged took off, but left the customer behind

Scale-out server-based storage became the new way to go as a cost-effective alternative to enterprise storage. Hyper-Converged Infrastructure (HCI) in particular took off and made management of infrastructure easy on day one.

The premise is simple. You have finite space and power in your datacenter, so you converge compute and storage into one, allowing greater density and the use of commodity componentry. No longer are drive slots in your servers left empty; instead, they are used as shared storage for applications on the same servers, enabled by software-defined storage (SDS) and improvements in networking bandwidth. As a result, you increase datacenter density and compartmentalize compute and storage into individual clusters, each managed by the application owner.
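For a rough sense of what convergence means for capacity planning, here is a small sketch of raw versus usable storage in a hypothetical SDS-backed cluster; the node count, drive sizes, replication factor and reserve headroom are all assumptions for illustration:

```python
# Rough usable-capacity estimate for a hypothetical HCI / SDS cluster.
# Node count, drive sizes, replication factor and reserve are illustrative assumptions.

nodes = 8
drives_per_node = 6
drive_capacity_tb = 3.84      # raw capacity per drive, in terabytes
replication_factor = 3        # assume the SDS layer keeps three copies of each block
reserve = 0.25                # headroom kept free for rebuilds and snapshots (assumption)

raw_tb = nodes * drives_per_node * drive_capacity_tb
usable_tb = raw_tb / replication_factor * (1 - reserve)

print(f"Raw capacity across the cluster: {raw_tb:.1f} TB")
print(f"Usable after {replication_factor}x replication and reserve: {usable_tb:.1f} TB")
```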

As much as HCI sounds simple, most new architectures are built to be simple on day one, but what about day 1825? Many that tried HCI came to the conclusion that it isn’t the holy grail that solves all of their challenges with infrastructure. Just the fact that storage services share the same CPU as applications ignites a wildfire of constraints, restrictions and maintenance issues.

Those who have spent their fair share of time with traditional storage systems know that CPU is a scarce resource that is easily burned up by data services such as deduplication and compression – and you need them if you want to make flash storage financially viable. But if these services burn the same CPU cores that power the applications for which you purchase expensive CPU-based or (worse) core-based licenses, you have a problem. To run the same workload, application owners need to spend more on compute and application licenses, which reverses the cost benefits of moving to server-based storage in the first place.
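As a rough illustration of that effect, consider the following sketch; the core counts, the share of CPU consumed by data services, and the per-core license price are assumptions, not benchmarks:

```python
# Sketch of the license overhead when SDS data services share application CPUs.
# Core counts, overhead share and license pricing are hypothetical assumptions.

cores_per_node = 32
nodes = 8
storage_overhead = 0.20           # assume dedupe/compression consume ~20% of each node's cores
license_per_core_year = 2_000     # assumed annual cost of a core-based application license

cores_lost = cores_per_node * nodes * storage_overhead
extra_license_cost = cores_lost * license_per_core_year

print(f"Cores effectively consumed by storage services: {cores_lost:.0f}")
print(f"Extra annual license spend to win that compute back: ${extra_license_cost:,.0f}")
```

Whatever the exact figures in your environment, the pattern holds: every core the storage stack consumes is a core you license and power but cannot use for the application.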

To be fair, this only applies to virtualized workloads. But wait, … what about bare-metal, non-virtualized or cloud-native workloads then? Easy: they either won’t run on HCI at all or only with large overhead, and instead rely on classic external storage arrays. So HCI just created another island in the datacenter that requires special care.

This brings me to my last point. I can argue all day that none of these existing solutions makes a strong case for storage, but it is clearly not for lack of trying. Every storage vendor in the market is continuously working on solving infrastructure issues around the aforementioned scale, management and cost challenges. We’ve been at trade shows, absorbing the latest and greatest feature sets and innovations. But then we return to our workplace and discover we’re three years’ worth of updates away from making use of any of the advertised enhancements.

And it is not because we’re lazy or don’t want to update, it’s because we never get around to updating the hundreds of individual systems. It’s like painting the Golden Gate Bridge: by the time you’ve finished, you can start over again because you’re already out of date.

I’m certain that I have listed at least one pain point that you’re currently suffering from in your IT infrastructure, and if you want a sneak peek at how the Nebulon Cloud-Defined Storage architecture will come to the rescue, get in touch with us: [email protected].



Martin Cooper

Sr. Director of Solution Architecture

Martin Cooper is an experienced technology leader with over 20 years’ experience as both a technology consumer and a technology vendor, working across a wide range of verticals and markets globally. As a consumer, Martin held various roles in design and operations, including Global Operations Director and Chief Technology Officer, at the global design consultancy Arup. On the vendor side, he has worked with both established technology vendors and start-up companies, setting up and globally leading solutions architecture teams and working with customers among the world’s leading brands.