Friends, Colleagues & Enterprise Customers,
18 months ago we set out to create a solution that would radically change the IT industry, and I’m happy to announce that the day has come to officially introduce Cloud-Defined Storage to the world.
This announcement is no accident. The idea of cloud-defined storage took shape a few years ago, after a few themes kept emerging from conversations with financial services CIOs. All had on-prem data that could not be moved to the cloud, and every one of them at some point had come to me with the same request: why can’t I have mission-critical storage for my apps right in my application server? Really, their request boiled down to three questions:
1.) How do I reduce my costs & simplify infrastructure footprint?
2.) How do I reduce my operational overhead and accelerate deployment of new services?
3.) How do I give application owners a self-service on-prem experience?
This was a challenging problem to solve, and we love challenges. The truth is that until cloud-defined storage, no simple solution existed, partly because until a few years ago the key building blocks simply did not exist. Without Cloud-Defined Storage, customers can either deploy hyperconverged infrastructure, which restricts hypervisor and OS choice as well as VM/application density, or stick with enterprise arrays, which are expensive to buy & maintain.
This problem stuck with me, and I realized that the only way to address the needs of the CIOs I spoke with (and CIOs all over the world) was an approach that required no software to be installed on the customer’s application server and could be monitored, administered, and automated via an API entirely from the cloud. Enter Cloud-Defined Storage: an on-premises, server-based, enterprise-class storage solution that leverages commodity server SSDs instead of expensive enterprise array SSDs, consumes no server CPU, memory, or network resources (unlike HCI), and is defined and managed through the cloud.
How does it work exactly? Cloud-Defined Storage combines a group of servers serving an application, each equipped with a Nebulon Services Processing Unit (SPU), with a secure, multi-cloud control plane called Nebulon ON. Think of the Nebulon SPU as an array controller refactored into a PCIe device in the style of a GPU. It runs a full stack of enterprise data services and transforms the internal SSD capacity of your favorite server into block storage that is agnostic to hypervisor, container, and bare-metal applications. Nebulon ON is built on a combination of AIOps, a distributed time-series database capable of storing tens of billions of metric data points, centralized administration in the cloud, and an API-first approach to automation. Together these deliver real-time insights, always-up-to-date software, self-service infrastructure provisioning, and storage operations-as-a-service.
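Because the control plane is API-first, a storage operation like provisioning a volume can be scripted end to end from the cloud. The sketch below is purely illustrative: the endpoint path, field names, and the `build_volume_request` helper are my own assumptions for the sake of the example, not the actual Nebulon ON API.

```python
# Illustrative sketch of API-first provisioning against a cloud control
# plane such as Nebulon ON. All endpoint paths and field names here are
# hypothetical assumptions, not Nebulon's real API surface.
import json

API_BASE = "https://cloud.example.com/api/v1"  # placeholder control-plane URL

def build_volume_request(server_group, size_gib, mirrored=True):
    """Build the JSON body for a hypothetical volume-provisioning call."""
    return {
        "serverGroup": server_group,    # the group of SPU-equipped servers
        "sizeBytes": size_gib * 2**30,  # requested capacity in bytes
        "mirrored": mirrored,           # protect data across SPUs
    }

# An application owner self-serves a 500 GiB volume entirely from the
# cloud; no agent runs on the application server itself, since the SPU
# presents the storage to the host.
request_body = build_volume_request("erp-cluster", 500)
print(json.dumps(request_body))
```

The point of the sketch is the operating model rather than the specific calls: because administration lives behind a cloud API, the same request works from a laptop, a CI pipeline, or an infrastructure-as-code tool, with nothing installed on the application servers.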
So what does that mean? Ultimately, cloud-defined storage democratizes enterprise-class storage for application owners and infrastructure managers, making it simply a part of the data center fabric and easily consumable for those who need it. It’s pretty amazing, and we could not be more excited to share it with the world.
I can’t write about the launch without thanking my amazing team of #nebnerds. Cloud-Defined Storage is a fundamental shift in the industry that has already generated huge customer interest. I don’t know of any other group of people who could have gotten us to this point so quickly and successfully. We wouldn’t be where we are today without this wildly talented group of people I consider family, and I’m proud to be part of it.
One last thing: my team is putting on several events to give you more detail on Cloud-Defined Storage. Attend our US or EMEA customer webinars; visit our booth, demo, or speaking session at the HPE Discover virtual event; or visit our resources page for more information on all things cloud-defined.