
Data storage is a burden that has plagued companies since its inception. How do I provision it, monitor it, make it highly available, protect it, make it performant, optimize it (dedupe and compression), and handle a host of other things? And how do I ensure that others on my team, or those with less storage expertise, can set up and manage storage effectively and consistently enough to adhere to our organization's standards?

Over the years, new storage entrants have arrived, each offering a new or improved solution to some of the challenges mentioned above, often with a simpler approach than their predecessors. And a few have actually helped. Flash arrays have arguably done more to simplify storage than any previous trend in the industry, but the reality is that this is still not enough.

We want to make the conversations around storage burdens go away. Vanish.
Our mission at Nebulon, at its core, is to bring the public cloud experience to on-premises infrastructure. When I provision storage resources in the public cloud, the storage discussion is moot. I specify my size and performance requirements, but I don't care how it is configured or implemented on the back end. There is no concern about configuring multipathing, managing Fibre Channel switches, or performing LUN masking, the kinds of very specific functions that storage administrators are familiar with. Our goal is that this will no longer be a challenge enterprises need to solve or account for when provisioning on-premises storage. The only way to achieve this, following the industry-standard networking example, is by delivering storage and enterprise data services natively in the servers you purchase today, in an operating-system-agnostic way. But how can that be accomplished? Let's dive in.
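To make that public-cloud model concrete, here is a minimal sketch of what "specify size and performance, nothing else" looks like in practice, using AWS EBS via boto3 as one example (not Nebulon's interface); the region, zone, and performance figures are placeholder assumptions. Note what is absent from the request: multipathing, fabrics, switch configuration, LUN masking.

```python
import boto3

# Minimal sketch: provisioning block storage in a public cloud (AWS EBS shown
# as one illustrative example). The caller states only capacity and performance
# requirements; back-end layout, pathing, and fabric are the provider's problem.
ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder zone
    Size=500,                       # capacity in GiB
    VolumeType="gp3",
    Iops=6000,                      # performance requirement
    Throughput=250,                 # MiB/s, performance requirement
)
print(volume["VolumeId"], volume["State"])
```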

Today, when you purchase a server, receive it, and plug it in, switches and routers are discovered immediately and addresses are assigned automatically via DHCP. The only decision the purchase requires is what type of networking is needed, how many ports, and at what speeds. You don't even need to think about the larger picture; this level of simplicity has come to be expected. Is the same level of simplicity and automation achievable with storage? If the storage is local, it is much easier, because it sits inside the server and is instantly accessible to the host's OS or hypervisor and their applications. With shared storage, things become much more complicated and expensive, because you have to worry about how everything gets provisioned (LUN masking, iSCSI and/or FC connectivity), switch configuration, sizing, performance, availability, QoS, and more, as the sketch below illustrates.
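As a rough illustration of that gap, the sketch below walks through the classic host-side steps a shared iSCSI LUN still requires, using the standard open-iscsi and multipath-tools CLIs driven from Python; the portal address and target IQN are placeholders, and the array-side work (LUN masking, zoning, switch configuration) is not even shown here. Compare this to plugging in a network cable and letting DHCP do the rest.

```python
import subprocess

# Illustrative only: the host-side steps a shared iSCSI LUN typically needs
# before the OS even sees a usable disk. Portal and IQN are placeholders;
# array-side LUN masking and switch/zoning work happen elsewhere entirely.
PORTAL = "192.168.50.10"                         # placeholder storage portal
TARGET = "iqn.2001-04.com.example:storage.lun1"  # placeholder target IQN

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Discover the targets exposed by the array's portal.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# 2. Log in to the target so the LUN appears as a block device.
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])

# 3. Verify multipath has assembled the paths into a single device.
run(["multipath", "-ll"])
```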

Hyperconverged infrastructure solutions go a long way toward achieving this. They are able to make storage mostly a non-issue by making enterprise data storage features locally and transparently available, but they require specialized drives (write-intensive SSDs) and lack the flexibility to run any OS or application (think bare metal) without specialized software, covering only the narrow band of virtualization. They are not designed to natively support new (think future-proof) workload types. Furthermore, they consume the hosts' compute and memory resources to perform storage functions, reducing the user workload the system can host. That translates into buying more servers and the additional licenses that go along with them.

Delivering storage natively in servers for any OS or application
What if every server had local, native access to the enterprise-grade shared data services typically found in expensive arrays, using only servers, yet without consuming any server resources, specialized fabrics, or software to provide them, regardless of the OS, hypervisor, or applications? This would bring us closer to the networking example above, enabling enterprises like yours to focus on core applications rather than managing the infrastructure that runs them. What if we could use the same approach to simplify deployment and centralize the management, administration, and intelligent optimization of the entire on-premises server infrastructure? What if we could automate the deployment of application-specific best practices on any OS or hypervisor (be it containerized, virtualized, or bare metal), globally, from core data centers to distributed edge offices? With such an approach, I can begin thinking realistically about automating once-complex activities and ensuring they are reliable and consistent. Doing so brings us one giant step closer to the vision of infrastructure as code.

The ultimate goal is to change storage from a burden to an opportunity. A good example is addressing ransomware recovery.
Things start getting interesting when you pair the intelligence and reach of a cloud control plane with the inherent security and power of a dedicated data plane. The architecture of such an approach unlocks unprecedented value. The isolated nature of the data plane, delivered as a dedicated PCIe device, makes it possible to uniquely provision and power the OS/hypervisor boot volume, fenced off from the effects of a ransomware attack. No other combined server-storage solution can claim this today. Being isolated and "gapped" from the server LUNs where the ransomware is executing allows recovery of both the OS and the application data; almost all other solutions focus only on the latter. With software-defined solutions, an attack can be impossible to thwart, because the storage software itself runs on the same physical disks as the OS where the ransomware is running and encrypting the back-end drives. And because all manageability resides in the cloud, it is very difficult for it to be compromised by the same malware running rampant on-premises.

To learn more about smartInfrastructure, click here. To learn more about our 4-minute ransomware recovery solution, click here.



Siamak Nazari

CEO

Previously 3PAR Chief Architect, then an HPE Fellow and Vice President for Hybrid IT Infrastructure.