Accept failure, but focus on recovery

Armando Fox believes that, if you can't build fail-proof systems, you should at least build systems that can recover so quickly that service blips become negligible. A Research Associate with the University of California, Berkeley's Reliable Adaptive Distributed Systems Laboratory (RAD Lab), Fox was one of the leads on the joint Berkeley/Stanford Recovery-Oriented Computing (ROC) Project, which investigated techniques for building dependable Internet services that emphasized "recovery from failures rather than failure-avoidance."

Fox has since brought some of the ROC lessons forward into the RAD Lab, which was launched in 2005 with US$7.5 million in funding from Google, Microsoft and Sun. Affiliate members include IBM, HP, Nortel, NTT-MCL and Oracle. The RAD Lab focuses on problems that plague large Internet-based businesses, because those environments represent an extreme case, but Fox says the lessons learned should ultimately trickle down to enterprise users. Network World Editor-in-Chief John Dix asked Fox to explain the vision.

Let's start with a review of ROC. What was that all about?

The philosophy of the ROC Project was that stuff happens. Despite our best efforts to design and debug these complicated Internet systems, they inevitably end up failing in ways we didn't expect. Hardware is not perfect. Software has bugs. Even in really, really well-tested software like Oracle, you find bugs after it's been out in the field. And, you know, humans are in charge of running these systems, and sometimes they make mistakes.

So the ROC Project philosophy was: let's accept that those things are going to happen and start thinking about designing for fast recovery, as opposed to designing to avoid failure, which is not really a realistic goal. One way to improve system availability is to never fail. But another way to improve it is to make recovery from failure so fast that failure's impact on availability is negligible.
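A rough way to quantify that trade-off, in standard dependability terms rather than Fox's own words: steady-state availability is approximately

    Availability = MTTF / (MTTF + MTTR)

where MTTF is the mean time to failure and MTTR is the mean time to recovery. Holding MTTF fixed at, say, 1,000 hours, cutting MTTR from one hour to 10 seconds raises availability from roughly 99.9% to roughly 99.9997%, which is why shrinking recovery time can pay off as much as preventing failures.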

Why do you start off with the assumption that you can never build systems that won't fail?

Because we don't think we're smart enough to counter the last, what, 60 years of computer science history. There are a lot of people working on design for correctness and other techniques to improve systems and minimize bugs. And that's a good thing. But so far, despite our best efforts, I cannot think of a single computer system ever designed in which no bugs were ever found once it was in the field.

So, I suppose we could take the position that somehow, in the future, that's all going to change. But we've been saying that for decades. And it's not that we're stupid, right? I mean, in terms of performance, storage density and network communications speeds, look what we've been able to do in 30 years. But then compare that with what we have been able to do in terms of reliability. The complexity of these systems has gotten to the point where it's very difficult for any one individual to understand how one of these systems works.

Plus, market reality being what it is, it's not as if you'd polish the whole thing, deploy it and then leave it alone. Systems have to evolve. You add new features, get more users, scale your system up. All of those processes work counter to reliability. Some of the most reliable software out there is the software that runs the Space Shuttle, and ask those guys how they make changes in their software. They have to write thousands of pages of documentation and have hundreds of hours of design reviews before a single line of code gets touched. So they have super reliable software, but it comes at a price.

And the reality is most Internet companies can't pay that price. Amazon can't have hundreds of hours of design meetings before deciding whether it can roll out a new feature. So the ROC Project basically said, look, we need to find a way to deal with this issue in the context of what commercial realities are. Because, yes, these systems evolve rapidly. And, yes, that's bad for reliability. But that innovation is where a lot of the value of these systems comes from. And we're not going to, as academics, propose an approach to the problem that says, you can fix your systems, but at the cost of rapid innovation.

So, that was the philosophy of ROC. And we actually made a fair amount of progress. We identified some specific techniques that could be built into software systems to help them recover from certain kinds of common problems really fast. In fact, so fast that sometimes you might not even notice it, except as a minor blip in performance. So, that was an important finding, and those ideas are starting to find their way into some commercial products.

How about an example?

Sure. One idea we worked on was called micro-rebooting. When you have a weird, unexpected, unrecoverable bug and don't know what else is wrong, you reboot your machine. Sometimes that's enough to fix the problem. But rebooting takes a long time. So, given that applications have evolved to this componentized architecture using things like Enterprise JavaBeans (EJB), our idea was to apply the concept of rebooting to a small number of components at a time. So instead of rebooting the whole EJB server, which can take minutes, you micro-reboot only the EJB components that appear to have been failing. You reset the thing that was failing, but at much lower cost, because you're only doing it to the EJB component you believe was the actual source of the problem.
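To make the idea concrete, here is a minimal sketch of a micro-rebooting recovery manager in Java. This is an illustration under assumed names, not the ROC project's code: the Component interface and RecoveryManager class are hypothetical, and a real EJB container would tie into its own component lifecycle and failure-detection machinery instead.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical component lifecycle; a real container defines its own.
    interface Component {
        void start();          // (re)initialize the component's transient state
        void stop();           // discard in-memory state; durable state lives elsewhere
        boolean isHealthy();   // cheap liveness/health probe
    }

    class RecoveryManager {
        private final Map<String, Component> components = new ConcurrentHashMap<>();

        void register(String name, Component c) {
            components.put(name, c);
            c.start();
        }

        // Instead of restarting the whole server (minutes), restart only the
        // suspect component (typically well under a second).
        void microReboot(String name) {
            Component c = components.get(name);
            if (c == null) {
                return;
            }
            c.stop();   // throw away possibly corrupt in-memory state
            c.start();  // rebuild from durable state
        }

        // Periodic sweep: micro-reboot anything that looks unhealthy.
        void sweep() {
            components.forEach((name, c) -> {
                if (!c.isHealthy()) {
                    microReboot(name);
                }
            });
        }
    }

The key design point is that stop() discards only the suspect component's in-memory state; anything that must survive a micro-reboot has to live in durable storage outside the component, which is the premise that makes this kind of partial restart safe.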
