VCE has only been in the hyperconverged appliance market since the February launch of its VxRail family, but President Chad Sakac says the company will soon be the No.1 player in that rapidly growing market. Sakac doesn’t lack for confidence, nor will his company – launched as a joint EMC/Cisco/VMware venture – lack for resources to back up his claims. VCE is now the converged infrastructure division of EMC and, if things go to plan, will soon be part of the merged Dell/EMC. That giant company, Sakac says, will boast a ‘superpower’ that gives it a huge advantage over rivals like Hewlett Packard Enterprise: Not being beholden to Wall Street, it can move customers more quickly to true utility models of IT.
In this interview with IDG Chief Content Officer John Gallant, Sakac discussed the roles converged and hyperconverged systems play in customers’ evolving data centers, and he shared his thoughts on the impact of the nearly finalized Dell/EMC deal. He also explained why Cisco – which sold off its stake in VCE some time ago – will always be a key component in the VCE portfolio.
You’re in both the converged and hyperconverged infrastructure markets. From a technical perspective, what’s the difference between converged and hyperconverged? When would you apply converged infrastructure versus hyperconverged?
How they’re assembled and how their technology stacks work are quite different. However, what they represent for the customer is pretty similar. Both of them represent a path to simplify the way customers operate their infrastructure stacks so they can get out of the assembly business, get out of the build business and get into the buy and consume business. That allows them to take their dollars, time, people and resources and focus them on things that matter more to them. They are different formulations of the same core value proposition, which is stop building stuff that doesn’t differentiate you and is boring. Focus on consuming those things and take the resources that would otherwise have been spent on building, maintaining and updating [infrastructure] and use them for things that [drive] real innovation and business value for you.
Where they differ is in one fundamental way, which is that converged infrastructure is made out of traditional infrastructure style designs - externalized storage, blades, networking components - that are assembled, built, designed and, very critically, sustained as a system. When you buy a [VCE] Vblock or a VxBlock, you are out of the business of actually buying the servers. There’s a tendency for people to look at it and say: I see [Cisco] UCS and EMC XtremIO in there. But you’re no longer buying those, you’re no longer patching those, you’re no longer maintaining those, you’re no longer even managing those. You have a Vblock or a VxBlock that you’re using and all of the sub-ingredients fade to black.
Since it’s built out of traditional ingredients, it has certain characteristics that are very good. It can scale economically with a broad variety of CPU, memory and storage ratios. Since it’s built out of traditional data center ingredients, it’s very good at supporting workloads that need the most traditional data center behaviors. For example, a mission-critical Oracle transactional database depends on very specific behaviors. It needs to support T10 DIF [Data Integrity Field], which is a really obscure thing but is very important for customers that have large Oracle footprints, where there is an end-to-end CRC [cyclic redundancy check] throughout the whole stack.
If the database is really big, a latent I/O problem anywhere in the stack - not just the storage but from the host all the way through - is actually a really difficult problem. Or take an SAP landscape that consists of all these different modules that need to be replicated with zero RPO [recovery point objective] in a consistent way. In other words, these are the most traditional of the mission-critical workloads. Since converged infrastructure is built out of traditional data center ingredients, it does very well at that.
Conversely, hyperconverged infrastructure is built on a system design that uses software-defined storage. It uses industry-standard, off-the-shelf server componentry and it’s designed to start small and scale out. Since it’s designed to start small and scale out, and since it’s completely software defined, it’s much easier to operate, update, scale, patch and manage. The downside - and these are statements of fact, not marketing spin - is that today’s software-defined storage stacks do not have all the data services for those classes of workloads I described earlier.
So these things can coexist in the same data center?
In fact, for many customers, not only can they, they should. For many customers, the simplest answer is to start with hyperconverged. VxRail can start as low as $60K list and, at its largest scaling point, it’s got almost 2,000 CPU cores, tens of terabytes of DRAM, can run 3,000 VMs and has almost 5 petabytes of all-flash [storage]. That could run the entirety of many companies. However, there are some customers that say: 'That covers 90 percent of my workloads, but 10 percent of my workloads need these specific capabilities and data services.' If you ask the customer how important those other workloads are - not by count, but by business impact - they’d say: That Oracle relational database, if that thing goes bump in the night, that’s my entire business.
I want to spend a minute and make sure people understand your key product lines. What is the difference between Vblock and VxBlock?
It’s very simple. Vblock and VxBlock differ in one and only one important area, which is that the networking component inside of VxBlock can support [Cisco] ACI and [VMware] NSX. A Vblock does not. A Vblock uses the [Cisco] Nexus 1000V, which is a software switch, but otherwise they are exactly the same.
When would a customer pick one versus the other?
Sixty percent of customers today choose a VxBlock because it gives them the open choice to add ACI, NSX or any combination of the two down the road. If a customer has standardized on the Nexus 1000V as the software switch inside their VMware footprint, then the answer is a Vblock.
You mentioned some aspects about VxRail but could you give the quick overview of VxRail?
It was launched on Feb. 16 of this year and it’s been a rocket ship. VxRail is the best hyperconverged infrastructure appliance for customers who have standardized on [VMware] vSphere. If vSphere is how you build your software-defined data center and you want a turnkey hyperconverged infrastructure appliance, VxRail is the one for you. VxRail is co-engineered and co-developed by VMware and EMC. It’s designed to start very small but to scale really, really big. It’s optimized for all-flash and it can run all sorts of workloads, including mission-critical workloads, so long as those workloads don’t need those very traditional data services.
You described it as a rocket ship. What are you experiencing in the market with that product?
In just four months, we’ve secured 360-plus customers in 59 countries. That’s only four months for customers to move from evaluation to deployment, which is a very short timeframe. We’re well north of what we originally modeled in the business plan and it’s also incredibly geographically dispersed. We’re seeing small and large customers - customers in China, Japan, India, France, England, the United States, Canada, you name it. That’s impressive. We’ve sold more than 2,000 nodes. This is a hotly contested market, so to have sold 2,000 nodes against the competition is an incredible achievement out of the gate.
Just for clarity, a node in that definition is what?
In VxRail, the appliance can have anything from one to four nodes in it. The initial appliance for VxRail has to have four nodes but then subsequently you can add one node at a time. It’s a unit of measure of scaling.
What I think is fundamentally different is that we don’t think the entire market can be served with a single product. There are wild differences between customers, and to serve the market as a whole you have to have a portfolio. Everyone else basically has hammer-and-nail syndrome. How does that manifest? Let me give you a very specific example: Some customers say VMware is my standard and the best solution for me is one that leverages my existing standard. VxRail is engineered and built hand in hand with VMware; its roadmap is always in 100 percent synchronicity with VMware. Its technology stack is designed to get the most leverage out of VMware’s integrated technologies like vSAN.
For a customer who says vSphere is my standard for how I deploy workloads within my enterprise - and that is the vast majority of the marketplace - VxRail and VxRail’s bigger brother, something called VxRack SDDC, is the best answer in the marketplace bar none. It performs the best, has the best ROI and has something that’s very important to customers, which is that there is no air gap between the hyperconverged infrastructure layer and VMware as a partner, from support, from roadmap, from engineering.
Conversely, some customers are not standardized on VMware. Maybe they’ve standardized on Hyper-V, KVM or OpenStack and they use some VMware on the side. VxRack FLEX is a hyperconverged appliance that is designed to scale very large. It can start as low as three nodes but it can scale up to a thousand nodes, and it allows you to bring any persona you would like. In other words, if you want to bring VMware, great; if you want to bring KVM, great; if you want to bring Hyper-V and the Microsoft Azure stack, great. Is it as integrated with VMware? No. But it allows customers who want that flexibility to choose. You cannot build one product that optimizes for both the first and the second customer. If you’re a single-product company, [you have to] try to convince one of those customers that their strategy is wrong.
Over the years, there have been many, many times when people tried to build just one thing that could support the entire universe. It sounds good because human beings like the idea of extreme simplicity. That’s our nature. The answer is that you want the maximum simplicity without actually hurting yourself. Going past that point, customers start to build solutions that are not optimized for them. Inevitably, anyone who has said there is one stack to rule them all changes their tune when they get sufficiently large and encounter customers for which that stack doesn’t fit.
Do you envision ever selling hyperconvergence as a software-only solution?
In fact, some of it already is. Some customers want more optionality in how they build or design their systems. They’re thinking there’s some benefit to being able to sub-select componentry. In all hyperconverged appliances on the market, to my knowledge, the hardware is an industry-standard x86 server - very undifferentiated on its own. All the magic is in the software. However, what’s really valuable to the customer isn’t the software or the hardware, it’s the offer and the fact that it allows them to get out of the build business.
We offer the ingredients that power our appliances as software only. ScaleIO is available standalone. vSAN from VMware, which powers VxRail, is available as standalone software. The software that pulls it together into a management stack is in some cases available as software only. But is that really what you want? Do you have a hardware management team? That means you’re going to build, test and harmonize, and that group will be responsible for bare-metal support and a hardware supply chain. Presumably you’ll be thinking about sparing and parts replacement and those sorts of things.
Look, if you are one of the hyper-scalers - if you are an Azure, Amazon, Google, Facebook, Apple - you have a team of people for whom that’s all they do. They’re the bare-metal, hardware-layer management team. They manage standards, APIs, parts, inventory, all of that sort of stuff. If you’re going to bring your own hardware to the equation, you’re going to need that function. Should you? The answer is: God, no. It provides no differentiation for you, none. The point is that when you choose a hyperconverged appliance, you’re choosing to buy, not build, so you should assess its value as a system.
So the software-only solution is one in search of a market because for most customers it’s just not applicable?
The market which it serves is not the generalized enterprise market.
Earlier this year you became the converged infrastructure arm of EMC. How does the Dell/EMC merger change things for you? For example, Dell sells converged infrastructure today. Will there continue to be solutions from the combined Dell/EMC and VCE? What should customers expect?
The deal is not closed and it’s not useful to speculate too much on what’s going to occur post-close. Everything is looking great and it looks like we’ll be a combined entity relatively soon. At that point there will be further clarity. But let me talk about what we already have stated publicly. What we’ve stated is that the converged platform business of the combined entity would continue to partner with Cisco as the core technology ingredient inside Vblock and VxBlock. The reason for that is pretty simple and straightforward: Customers are voting with their feet and they like that system design. The second thing is we’re going to continue to use Cisco universally as the networking componentry inside our hyperconverged portfolios. We’ve stated that publicly and customers can move forward with confidence knowing that in the future that’s not going to change.
The other thing we’ve stated publicly is that inside the hyperconverged infrastructure appliance business, a huge component of the system cost is actually the x86 server. While the software provides the bulk of the value, the cost of goods is very much wrapped up inside the hardware platform.
A hypothetical example: Inside a VxRail appliance - and this would be analogous for anybody with a hyperconverged appliance - if the list price for an entry unit is $60K, it’s very likely that the cost of goods for hardware is somewhere around $30K. What that means is that to be a viable, at-scale player in the hyperconverged market - as hyperconverged goes from a $1 billion to $2 billion market to a tens-of-billions-of-dollars market over the next few years - you need access to a global, world-class, just-in-time supply chain of x86 componentry that can quickly leverage everything that comes from the entire ecosystem. Without that, you will not be able to compete effectively as a hyperconverged player, because you won’t control the cost of a critical component inside the stack. What should customers expect inside our hyperconverged appliances? Will we be leveraging the Dell PowerEdge supply chain and innovation and speed and time to market? Heck yeah.
Just so I’m clear, will there continue to be Dell converged infrastructure solutions and VCE converged infrastructure solutions?
Ultimately, over time there would be a single combined entity. The idea of calling it Dell Products and EMC Products fades to black. I think the other question that you’re asking is, will some of those products that are currently Dell products continue?
I think it’s more of a branding and positioning issue, so will the converged and hyperconverged infrastructure solutions be VCE branded or Dell branded? What would people expect from that?
We’re working through those things. The VCE brand is a very strong brand. The VCE brand was originally born as a company brand. In other words, VCE, Inc., a company. But VCE really represents a product brand. It’s a brand promise that is pretty simple. It basically says if you’re at the point where you’ve chosen to no longer build what is not a differentiator for you and instead want to view that as a commodity you now buy and consume, we deliver a turnkey outcome. Ultimately I think that will continue to move forward for things that are inside the combined entity’s portfolio.
You talked about your position on the Cisco relationship, but is there any risk that Cisco changes the relationship? They’re going to be competing head-on against Dell/EMC. They’re doing a lot of work with NetApp already. I see that IDC has them as the leader in certified reference systems. What if Cisco changes direction?
Is there a risk of that? The short answer is no, period, end of sentence. Let me explain. The first thing that’s important to understand is rooted in this idea of what certified systems or reference architectures are relative to a converged system. In a converged system, the entirety of the system is bought, assembled, designed, supported and lifecycled through a single entity. A reference architecture is where different componentry is brought together and assembled through distributors or resellers and then delivered as a combined set of parts to the customer. The real litmus test is what happens six months from now when there needs to be an update? Who will you turn to? In a converged system the answer would be VCE. In a reference architecture it would be Cisco, NetApp and whomever.
There is a very important distinction. In one case I’m buying a car. In the other case I’m assembling a car - I’m doing it with a distributor or a partner as my helper, but I’m buying the components, not the car. Business people understand these are two very different value propositions. What was really interesting about that IDC study is that it highlighted how converged systems are growing and hyperconverged systems are growing even faster, but reference architectures are actually declining. Customers are getting smart. They don’t want to build this anymore. A reference architecture is de-risked, but it’s still build: there’s validation of the work, but the responsibility is still fundamentally yours. When you go to converged and hyperconverged, it’s our problem.
Not only do we have a great partnership with Cisco, not only do we have strategic alignment but very importantly, VCE, as a part of EMC and ultimately of Dell, is the single largest UCS customer in the world. We buy Cisco componentry, it goes into a factory and out the door comes Vblock and VxBlock products that we own and support. Let’s just say hypothetically Cisco says -- Forget it, I want to do something different and we want to compete directly. Do you think that they would say no to their single largest customer for all of UCS?
It seems unlikely.
It seems unlikely. And we would continue to go down that path regardless of what Cisco chooses to do. In other words, the risk is nonexistent because this is really an OEM structure. Above and beyond that, we have a partnership, we have alignment, we have more customers than anywhere else and, by the way, that business is growing like gangbusters. Those are all reinforcing points but the main thing is that the business structure means that we have responsibility for the customer. They’re an OEM supplier to us. We have longstanding support agreements and buying models that will allow us to continue to operate that model no matter what.
I spoke with Hewlett Packard Enterprise CEO Meg Whitman in the spring and a lot of what we talked about was their approach to converged and hyperconverged infrastructure, which is a big part of her strategy. What do you think of their whole Composable Infrastructure strategy they’re rolling out?
Our strategy and theirs are pretty divergent. We believe that no one has shrunk their way to success. Customers are telling us that as they move through this very disruptive time they want to have fewer technology partners, not more. We’re entering into a period of consolidation, which we’re going to lead. Inside that world, one element of the technology stack - one element - is server design. There are software changes going on; there are changes in abstraction models, containerization, etc. There are dramatic changes in sub-componentry, whether it’s NAND flash, next-generation non-volatile memory, all that stuff. One thing inside that path is server design. We went through an era where servers were towers, and then towers became racks, which dominated for a really long time. Then blades became a huge growth factor. People were using lots of externalized storage and extremely dense memory and compute designs, so blades ruled the day. By the way, that’s why bladed designs win inside converged infrastructure: they have the densest CPU/memory configurations and they attach to externalized storage inside a converged infrastructure design.
However, as we move to increasing use of hyperconverged designs, the server is the least interesting component. The more interesting component is the software stack that drives it. We’re seeing all of those hyperconverged systems being built, once again, on rack-mount servers. Why is rack mount resurgent? In hyperconverged, there is no longer an externalized storage array. In other words, the server needs to have persistence. It needs to have dense flash or, for people who are still living in the previous generation, flash-and-magnetic hybrids. That’s a new server design point where you have modularity of compute, network fabric, memory persistence and long-term persistent storage.
Today, the primary use cases for that are workloads that are wildly dynamic: cloud-native workloads, extremely elastic workloads, environments where customers have wildly changing ratios of compute, network and storage over time. The composable server market is a fraction of the blade market, a fraction of the rack-mount market, a fraction of the CI market, a fraction of the HCI market. It’s much, much smaller. Over time, those workloads will become more dominant. The new workloads that are cloud native are going to grow in count. Will composable servers play a critical role inside that ecosystem? The answer is yeah. But is that where the innovation really lies, or do you think the innovation is actually inside Cloud Foundry? Do you think the innovation is in Mesos? Do you think the innovation is inside the next-generation software stacks that drive that infrastructure?
Composable systems will be part of the answer. You bet. Will we see composable server designs go from being a nit to becoming more than a nit? Yeah. But I think the bulk of the value will actually be in the software that runs on those stacks. The difference, of course, between HPE and Dell Technologies is not only will we have the best industry server platforms, composable and others, but we’ll also have the strongest software portfolio, not only in Dell/EMC, but also in VMware and Pivotal.
Long term, will customers simply move the bulk of their workloads into the cloud versus building their own private or hybrid clouds?
It’s a really good question. When I find a customer who says: We’re all in on public cloud, they expect me to push back on that. I actually ask: What percentage of your workloads currently run in the public cloud? They’ll say: Not many, but that’s where we’re going to go. I say, fantastic. I’d encourage you to do that as quickly as you possibly can. What I’ve found is that customers don’t have a good grasp of where public cloud models are ideal and where they’re less ideal. They don’t have a good grasp on public cloud economic models, where they’re fantastic and where they’re poor.
The best cloud model is where you’re using SaaS. Certain workloads are moving to SaaS at the speed of light. Moving to a SaaS model is something where, after they’re done, customers say: Wow! That was really good. It is perhaps the ultimate manifestation of the buy-versus-build choice. The next thing they realize is that public cloud IaaS is the strongest fit for workloads that are extremely elastic - workloads that scale up and down, which is very common for cloud-native applications designed to scale dynamically. Within the enterprise today, that’s a small subset of the workloads, but it’s an important and growing one.
There are certain workloads that are economically not well suited to the cloud - for example, static workloads. I’ll give you an example. If you’re going to run a transactional workload for three years, S3 storage from AWS is about three times as expensive as a VMAX, and VMAX is not the least expensive storage platform in the market. By the way, that’s not including ingress and egress costs. Customers get to a point where they’re more educated about where to put workloads. Where do I use SaaS? Where do I use IaaS? Where do I use PaaS? Where do I put workloads that are governed and workloads that are not governed? Will the public cloud play a more important role inside every enterprise? The answer is yes. We’re trying to support and drive that as quickly as we can, and we’re trying to make things like Cloud Foundry the best way to deploy those new applications on top of Azure, on top of AWS or on-premises.
However, there is clearly going to be an enormous amount of footprint where the customers realize -- I also need an on-premises part of Azure. I need an on-premises VMware-powered cloud. I need an on-premises cloud native optimized cloud stack. It will be both.
As you look out on the year ahead, what should folks expect from VCE?
They should expect an incredible torrent of customer focus and innovation. They should expect that our hyperconverged portfolio is going to rapidly grow and we anticipate that we will very quickly be the No.1 hyperconverged infrastructure player. They should expect that we’re going to leverage the Dell technology portfolio like crazy in hyperconverged and they should expect us to stay aligned with Cisco in terms of what we do in converged and blade-oriented converged system design. The last thing I would say is we will have a unique superpower inside the whole IT ecosystem and that superpower is pretty simple. I’ll ask it to you as a question, John. Do you think that tomorrow, more customers will have some form of utility economic model even for their on-premises IT than they do today?
The answer is obvious. Of course they would. Will it all be utility economic models? No. There are going to be some use cases where capital that depreciates is better than a utility but more customers than today will use utility models. Why doesn’t EMC do that a ton today? Why doesn’t HP do that a ton today? Why doesn’t IBM do that a ton today? Why doesn’t anybody do a ton of that model since clearly customers want to do more of that?
I’m assuming that’s a rhetorical question and you’re going to give me a great answer to that.
What is the best time of the year to buy something from HP or Cisco or EMC?
End of the quarter.
End of the quarter, and if you have to pick a quarter where you want to get the best possible price, what quarter would it be?
The end of their fiscal year. Why is that the case?
Customers are trained for it.
Yes. Customers are trained for it and the vendors are publicly traded companies, which means they think about their cash flow not at all. They only think about revenue and bookings, earnings per share and their forward-looking forecast; is it up, flat or down? The reason they do that is they are trained by the public markets and those are the only three questions that matter every 90 days. This means that there are institutional forces that make it difficult for companies whose business models are built on non-annuity or non-utility economic models to make that transition. You see Microsoft doing it as they move to Office 365 and Windows as a subscription.
Satya Nadella - I have great admiration for him - is doing a fantastic job navigating that transition but they’re able to do it because Microsoft is a giant entity. In their earnings calls, analysts are asking him: How is that transition going as one business declines and the other one grows? If you look at IBM, the same thing is happening to them and the same thing is happening at HP. The same thing is happening to everybody.
Our superpower is that we will be a giant with incredible reach, assets and people that is also a private company. You can expect to see VCE’s converged platforms - blocks, racks and appliances - and the turnkey IaaS, PaaS and data fabrics that run on them available in flexible economic choices, because we, unlike others, will not be hung up on where the revenue books.
I’m going to push back on this one. You will be a private company, true, but you will also have very large payments to creditors in order to support this deal. It’s not as if you’re immune from economic pressures. They may be less visible to the market each quarter, but it’s not an entirely different situation.
The thing I would highlight is that as a private entity that absolutely needs to cover debt, the primary metric is cash flow, not in-quarter revenue. Utility economic models can be great for customers and they can be great for companies as long as they have a frame of mind that is longer than three months.
Your point is that that structure will allow you to have a clear path, a clearer strategic direction than someone who is buffeted by the winds of quarterly results?
There is no question. And just to be clear, there is no question that this transaction involves financing. The thing I would highlight is that Dell is a strong company that generates its own free cash flow. EMC is a great business with lots of customers and great cash flow. It’s a very good thing to have debt that you are financing and paying off. In fact, we will be, in my opinion, in a very good position and I’m excited about it.