Google-infused storage startup Cohesity reveals itself

Mohit Aron has a tough act to follow: His previous startup, Nutanix, may be on the cusp of filing for an IPO that values the hyperconverged infrastructure company at $2.5 billion. But Aron is off to a good start with his new venture, Cohesity, which this week emerges from stealth mode with $70 million in venture funding, referenceable customers such as Tribune Media, and a focus on a potentially big market: converging the secondary storage that houses so much DevOps, data protection, analytics and other unstructured data.

Part of Cohesity's attraction to investors and early customers is its rich Google pedigree: Aron worked on the Google File System that the search giant relies on for core data storage and access, and about a quarter of the 30 engineers on his 50-person team come from Google as well. What's more, Google Ventures is among Cohesity's backers (at least Google makes some money off its ex-employees' efforts this way, the 41-year-old entrepreneur quips). Google, which has gained a reputation for building its own infrastructure technology, isn't using the startup's gear yet, but Aron says maybe someday...

I spoke with the computer science Ph.D.-wielding CEO earlier this week to learn about how the idea for Cohesity was hatched and where the Santa Clara company is headed. Here's an edited transcript of that discussion:

Tell me the story of how Cohesity started up.

I spent more than 3 years at Nutanix; the technology was mature, and hyperconvergence was already taking over the world. But there was this one problem that I saw: Hyperconvergence applied to primary storage for, basically, virtualization environments. But the bulk of the data actually sits in secondary storage, which we are redefining to be not just data protection but all kinds of storage involved in applications that aren't mission critical [and handled in primary storage], including data protection, test and development, and analytics. So I saw a whole bunch of problems in secondary storage that could benefit from a different form of convergence. I left Nutanix in early 2013, thought over how best to fix the problem in secondary storage and came up with the idea for Cohesity, which was incorporated in the summer of 2013.

So why didn't you just try to stretch what Nutanix was doing to address the secondary storage problem?

Our vision, and this spans my experience building storage systems for the past 10 to 15 years, is that the data center consists of two kinds of storage: primary [the small tip of the iceberg above the water] and secondary [the bigger chunk below]. When you address one aspect of storage, you focus on the value-add that applies there. In primary storage, what matters most to customers is high performance and strict SLAs, so systems get architected for those purposes. Secondary storage, though, should really be separate. Some people talk about converged primary and secondary storage, but in my mind that doesn't make sense: if you have a bug in that system, it's not only going to take down your primary storage but also your secondary storage. So secondary storage is really separate, and the workflows it addresses are separate. Just look at data protection: what kinds of environments can you back up, and how often? How much can you scale? The scalability you require is much more general purpose than in a virtualization environment. The solution I implemented at Nutanix would work very well for file I/O but would not scale very well for namespace operations like creating or deleting files. Our vision now is to converge all the secondary workflows onto one infinitely scalable platform. [Aron added that while Nutanix is a mature company and his new one is not, the time could be right at some point for the two to partner.]

You mention your background working on the Google File System, and Cohesity says its Data Platform uses a Google-like, web-scale architecture. Can you elaborate on how working on the Google File System has informed your ventures since then?

After I graduated with my Ph.D. [in computer science from Rice University], I worked at a scale-out company called Zambeel in the early 2000s, and the architects had built in the assumption that if something failed, it would probably come back up in a few minutes. When I worked at Google I saw a different view of the world. I saw a world where the smallest systems comprised 5,000 to 10,000 server nodes, back when Google had millions of server nodes rather than the gazillions it has now. When you're talking about that scale you cannot babysit these systems. When something goes down it will probably stay down for an extended period of time, and there is no hope that an admin will come along and have time to fix it. One of the ways the Google File System was different is that it said: hey, if any component fails and stays down for an extended period of time, you design around that so the system can heal itself, almost like when an organ of the body is going to die, you work around it rather than waiting for the doctor to come and implant a second organ. That is one philosophy on which the Google File System works, and it has carried over to the systems I've worked on since.
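To make that philosophy concrete, here is a minimal sketch of the design-around-failure idea, assuming a hypothetical cluster that targets a fixed replica count per data chunk and re-replicates onto surviving nodes once a node has been silent past a timeout. The names and thresholds are illustrative only, not Google's or Cohesity's actual implementation:

```python
import time

REPLICAS = 3          # target number of copies of every chunk (illustrative)
DOWN_TIMEOUT = 300    # seconds a node may go silent before we heal around it

class Cluster:
    def __init__(self, nodes):
        self.last_heartbeat = {n: time.time() for n in nodes}
        self.chunk_locations = {}          # chunk_id -> set of node names

    def heartbeat(self, node):
        self.last_heartbeat[node] = time.time()

    def live_nodes(self):
        now = time.time()
        return {n for n, t in self.last_heartbeat.items() if now - t < DOWN_TIMEOUT}

    def heal(self):
        """Re-replicate chunks whose healthy copy count fell below target,
        instead of waiting for an admin to revive the dead node."""
        live = self.live_nodes()
        for chunk, holders in self.chunk_locations.items():
            healthy = holders & live
            spares = sorted(live - healthy)
            while len(healthy) < REPLICAS and spares:
                target = spares.pop()
                # (actual data copy elided) replicate `chunk` onto `target`
                healthy.add(target)
            self.chunk_locations[chunk] = healthy
```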

Another thing: a lot of systems, going back to the first company I joined, used to have a database sitting on the side that all the transactions went through, and the company would still claim scalability. But the reality was that this database was a bottleneck. So it became an exercise in making it run on the most powerful piece of hardware we could find, and eventually the scalability is limited by that. The philosophy behind the Google File System is that there's no single bottleneck [early versions did have a single master, but later versions eliminated that].
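One common way to remove that kind of single bottleneck is to spread the metadata itself across the cluster rather than routing everything through one master. The toy sketch below illustrates that general idea with consistent hashing; it is a hypothetical illustration, not a description of GFS or of Cohesity's design:

```python
import bisect
import hashlib

class MetadataRing:
    """Toy consistent-hash ring: each node owns a slice of the namespace,
    so no single metadata server sits in the path of every transaction."""

    def __init__(self, nodes, vnodes=64):
        self._ring = sorted(
            (self._hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def owner(self, path):
        """Which node handles namespace ops (create/delete/rename) for this path."""
        h = self._hash(path)
        i = bisect.bisect(self._keys, h) % len(self._ring)
        return self._ring[i][1]

ring = MetadataRing(["node-a", "node-b", "node-c", "node-d"])
print(ring.owner("/backups/vm/web01.vmdk"))   # different paths land on different owners
```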

This stuff is not taught in textbooks. So unfortunately, people who come out of Stanford or [UC] Berkeley and feel they can build a cheaper system are in for a surprise, and end up building something like my team did at my first company. Building a better system is an art that's only learned by doing it at a company like Google, and that's what my team has and that's what we're building.

How can you claim your product is infinitely scalable? That seems like an outrageous claim.

Obviously we can never test infinite scalability. But I can tell you that by design there's nothing in the system that limits your growth. You can extrapolate that by looking at the design. The Google File System used to scale to 10,000 nodes and even more, but beyond that most enterprises don't care. And if they want more, we can give them more.

What does your product, the Cohesity Data Platform, actually consist of?

Our building block is a 2U appliance with 4 server nodes in it (though you can also get it with 3 nodes), each of which has a dual 10Gbps network connection. Cumulatively, the storage on that clustered system is 96TB of hard drives and 6TB of SSD storage. Software comes integrated with the system. [While the early access program is focused mainly on backups of unstructured data in VMware environments, support for structured data in Oracle environments should be available by the time of general availability in Q3 or Q4, Aron says. Down the road, look for easier integration with various network devices as well, he says.]
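For a sense of the per-node numbers, here is the simple arithmetic implied by those cluster-wide figures (derived only from the numbers Aron cites, not from an official spec sheet):

```python
NODES = 4
HDD_TB, SSD_TB = 96, 6            # cluster-wide totals Aron cites
NIC_GBPS_PER_NODE = 2 * 10        # dual 10Gbps links per node

print(HDD_TB / NODES, "TB of hard disk per node")                         # 24.0
print(SSD_TB / NODES, "TB of SSD per node")                               # 1.5
print(NODES * NIC_GBPS_PER_NODE, "Gbps aggregate network per appliance")  # 80
```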

What might your system replace?

If you look at secondary storage today there is a lot of fragmentation. We have test and development, data protection and analytics environments floating around. Even within data protection you have workflows like backup software, storage, tape/archival and cloud storage, all catered to by different vendors. Our vision is to converge these workflows onto one platform, and when you accomplish that you can see your sprawl going away. In the last 10 years, whatever innovation has been done in secondary storage has really addressed just a point problem, like de-duplication or copy data management. I think we are the first to comprehensively look at this whole space. One side benefit is that all the data that sits in secondary storage is "dark": you have no insight into it. By virtue of the fact that you're converging analytics onto this platform, we can light up your dark data. This solution is aimed at disrupting and displacing most if not all of these secondary storage products. Some we can partner with, though. We come with integrated analytics, but we also wish to expose our underlying distributed file system to others like [Hadoop Distributed File System].

Are primary and secondary storage typically found on different sorts of devices?

There's a hardware difference and a software difference. For hardware, primary storage is for mission-critical stuff, so you'll go buy an all-flash appliance for it to make sure you meet your SLAs. But for secondary storage that makes no sense; it would be extremely costly for data that you'll hardly ever touch. That can go on cold storage, like tape. If you try to mix this in one appliance the pricing becomes very strange.

Software-wise, that's where formats come in. In primary storage you keep the format of whatever application the data resides in -- for example, if you're running virtualization you'd be storing things like VMDKs. But when you back up to a secondary device, the format depends on the backup software. Some backup software keeps content in its own custom format, so it's not the same as on the primary. What we want to do is get away from all that; we don't want multiple formats floating around. Our backup software is going to keep content in the exact same format as it was on the primary. That's important because you can use this, for example, to create clones and run analytics off them. That's how the convergence comes in.
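As a rough illustration of why keeping the native format matters, the sketch below models a backup whose blocks a test/dev clone or analytics job can reference directly, with no rehydration from a proprietary container first. The class and field names are hypothetical, not Cohesity's software:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Backup:
    """A backup kept in the source's native format (e.g. a VMDK),
    rather than repackaged into a proprietary backup container."""
    name: str
    fmt: str                                    # "vmdk", "vhdx", ...
    blocks: List[str] = field(default_factory=list)

def clone_for_test(backup: Backup, clone_name: str) -> Backup:
    # Because the data already sits in its native format, a test/dev clone or an
    # analytics job can reference the same blocks directly; nothing has to be
    # converted out of a vendor-specific backup format first.
    return Backup(name=clone_name, fmt=backup.fmt, blocks=backup.blocks)

nightly = Backup("web01-nightly", "vmdk", blocks=["b0", "b1", "b2"])
dev_copy = clone_for_test(nightly, "web01-dev")
assert dev_copy.blocks is nightly.blocks   # zero-copy: the clone shares the same data
```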

How does what Cohesity is doing differ from the sort of hyperconvergence defined by companies like Nutanix?

Hyperconvergence is well understood to apply to the convergence of [storage and compute] hardware only, and it helps virtualization environments. This is a much more general form of convergence where you're not only converging hardware but also a bunch of secondary storage software workflows, such as test and dev, which run on one kind of hardware, and then analytics, which runs on another kind of hardware.

And is there a cloud component?

Of course. Think about data protection, which itself consists of multiple workflows. There is backup software, backup storage, the archival piece and the cloud piece. For all of these, people have point solutions today and have to go to different vendors to get them. So when we talk about the convergence of all secondary workflows, we are also talking about convergence with the cloud. On this one platform you will connect your cloud [in an encrypted fashion] and we will be able to migrate data between the cloud and your on-premises hardware.

Can you give me a sense of the company's culture? I saw you once tweet about "celebrating the holidays the Nutanix way" by issuing iPad minis to all employees. Will Cohesity employees be getting Apple Watches?

Already done. Last winter, even though the Apple Watch wasn't out yet, we gave a $400 Apple gift card to every employee to go buy an Apple Watch whenever it did come out... I hope they come out with a nice new gizmo next winter that I can give to my employees... A few more things about the culture. I deeply believe in the philosophy of a consensus-based system; there's no "my way or the highway," though we do have leaders who intervene if no consensus is reached. We hire smart people and give them enough power to make decisions. I also believe that every leader should have an individual contribution component. I'm an engineer by profession, and my individual contribution was driving the core architecture that our world-class engineering team is delivering on. That's where the respect comes from: you respect leaders who are with you in the trenches and don't just shout orders from their offices.

Storage startups are raking in hundreds of millions of dollars from venture funds. But how open are companies really to buying from storage startups these days?

As we saw with Nutanix, if the pain points are big, customers are willing to try out stuff. Think of it like this: They're spending millions on the storage budget. We have a customer on the east coast with a storage budget of like $100 million; now of that, they're more than willing to spend $100,000, which is what our appliance costs, to at least try it out. Especially when they see the potential to lower their storage budget by something like $20 million. [Aron says his company mainly deals with storage admins, CIOs and CFOs at customers].
