A winning formula
- 31 May, 2012 12:27
When you think of Formula One (F1) racing you may well think glitz, glamour, and money. Lots and lots of money. And while you’d be right, the sport, for all its A-list buzz and appeal to the world’s jet set, hides a dark secret: It’s really run by nerds. IT nerds. If an army can be said to march on its stomach, F1 without doubt races on its IT. Put simply, without information technology the sport would not exist.
Every step of the way — from car design through to parts testing, manufacturing and construction through to driver training, to garage set-up, testing and race day — F1 teams win or lose on the strength of their IT. Computerworld Australia took an exclusive look behind the scenes at the technology which enables the Caterham F1 Team — formerly Team Lotus — to keep up with the big boys and their very, very fast toys.
Before a single lap of a racetrack can be run, each team must design, build and test a new race car every single year in order to comply with ever-changing rules and regulations set by the Fédération Internationale de l’Automobile — better known as the FIA. While the thought of designing a race car conjures up images of whiteboards and endless pages of blueprints, the reality is that the complexity of F1 cars means that serious computational power — typically in the form of a high performance computing (HPC) cluster — is needed.
Initially Caterham began designing its 2012 car using a Cloud-based HPC environment provided by Cambridge University. Through its technology and sponsorship partnership with Dell, the team later migrated to its own HPC platform based on 186 Dell PowerEdge M610 blade servers, middleware from Platform Computing and ISV applications for tasks such as wind tunnel simulation. According to the team, the HPC works nearly continuously, simulating aerodynamics and informing the design of the cars.
Being a smaller team with a limited budget, Caterham also leans on HPC applications to save on the costs of manufacturing and testing in the real world: initial car design is done via the CATIA CAD package, then components are stress-tested in a simulated virtual environment.
“The idea is that instead of designing a wishbone, for example, then testing it to see if it breaks, you make four or five versions which you test, then get your prime out of that and you manufacture that,” Richard St. Clair Quentin, commercial manager at Caterham, explains.
“If it is an aerodynamic component then it goes through CD-adapco STAR-CCM+. That is a wind tunnel in a box. You can virtualise every molecule of air and do some very clever stuff like simulate cars overtaking each other, clean air, dirty air — all that sort of stuff. We also run heavily on CFD (computational fluid dynamics)… which we run on Dell Wintel chipsets to simulate the movements of fluids through radiators. It is a very clever piece of kit.”
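The design loop St. Clair Quentin describes — simulate several candidate versions of a part, discard the ones that fail, keep the best survivor — can be sketched in a few lines. This is a toy illustration, not Caterham’s actual tooling: the function names, the failure model and all the figures are invented for the example.

```python
# Toy sketch of the "make four or five versions, test, pick the prime" loop.
# A real version would submit jobs to the HPC cluster; this is a stand-in.

def simulated_stress_test(design: dict) -> float:
    """Stand-in for a stress simulation: returns an assumed failure load in kN."""
    return design["thickness_mm"] * design["material_strength"]

def pick_best(candidates: list, required_load_kn: float) -> dict:
    """Keep only designs that survive the required load, then pick the lightest."""
    survivors = [d for d in candidates
                 if simulated_stress_test(d) >= required_load_kn]
    return min(survivors, key=lambda d: d["mass_kg"])

# Four candidate wishbone designs (all values invented).
wishbones = [
    {"name": "v1", "thickness_mm": 3.0, "material_strength": 10.0, "mass_kg": 1.20},
    {"name": "v2", "thickness_mm": 2.5, "material_strength": 10.0, "mass_kg": 1.05},
    {"name": "v3", "thickness_mm": 2.0, "material_strength": 10.0, "mass_kg": 0.90},
    {"name": "v4", "thickness_mm": 1.5, "material_strength": 10.0, "mass_kg": 0.75},
]

best = pick_best(wishbones, required_load_kn=22.0)
print(best["name"])  # the lightest design that still passes the test
```

The point of the pattern is that only the surviving candidate is ever manufactured; the weaker versions cost compute time, not carbon fibre.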
IT on the move
Once the car moves from the virtual to the physical, the IT doesn’t stay behind. In fact it’s right there side by side with the car on the track. Not that you’d want to tell the team’s drivers, but the F1 car itself could well be described as a computer on wheels. It has plenty of on-board storage, plus a massive 150-strong network of sensors allowing some 800 channels of data to stream off the car onto the team’s fleet of virtual servers and laptops. With this data Caterham’s army of engineers can see just how every component of the car is performing.
As Antony Smith, senior IT support engineer at Caterham explains it, the team collects as much as 30GB of data, or about 15GB per car, during the course of a four-day race weekend. According to Smith, the 30GB collected is actually a 50 per cent increase on 2011. “That’s because we have more information — we have KERS (kinetic energy recovery system) this year and a new data logger which logs at a higher speed,” he says. “[The team] is always trying to get more data.”
On-track computer power is essential, but because the team has to set up and break down its operations across twenty different race tracks in as many countries this year, it also has to be highly portable.
Smith says he relies on a ‘data centre on wheels’ made up of three physical hosts which run about 25 virtual machines.
“We can afford for one of those machines to fail and still run at full strength,” he says. “We have some custom apps running on them and lots of data analysis applications. We have people back at the factory who can access these virtual machines and work on the data in real time without having to send the data back. The data sets are huge — 2GB for a car for a run.”
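What Smith is describing is N+1 redundancy: with three hosts carrying roughly 25 virtual machines, any single host can die and the surviving two must still carry the full load. A minimal version of that capacity check might look like the following, with the per-host capacity an invented figure.

```python
# Minimal N+1 admission check: can the VMs still run after losing
# the single largest host? Host capacities here are assumptions.

def survives_single_host_failure(host_capacities: list, vm_count: int) -> bool:
    """True if the VM load still fits after the largest host fails."""
    worst_case = sum(host_capacities) - max(host_capacities)
    return vm_count <= worst_case

# Three identical hosts, each assumed to be sized for 13 VMs.
hosts = [13, 13, 13]
print(survives_single_host_failure(hosts, vm_count=25))  # True: 25 <= 26
```

Sizing each host for roughly half the total load, rather than a third, is what buys the team the failure tolerance Smith mentions.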
Not only is a mobile data centre on track essential for tweaking the performance of the car and monitoring it during races and testing, it is also key to keeping down the very high costs of running an F1 team.
“Weight for us is an issue as we have to fly this stuff all around the world. We have to pay $250 a kilo for it,” he says. “If we can go from 25 physical machines and take that virtual and onto three physical machines and two SANs, then that is a massive saving for us. The double effect of that is that the power requirement drops which then affects the number of UPSes we need…”
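At the quoted $250 a kilo, the consolidation Smith describes translates directly into freight savings. The per-unit weights below are assumptions made for illustration; only the freight rate and the machine counts come from the article.

```python
# Rough freight saving from consolidating 25 physical machines down to
# 3 virtualisation hosts and 2 SANs. Unit weights are assumed.

FREIGHT_PER_KG = 250          # dollars per kilo, as quoted by Smith

before_kg = 25 * 15           # 25 physical machines at an assumed 15 kg each
after_kg = 3 * 20 + 2 * 30    # 3 heavier hosts + 2 SANs (assumed weights)

saving = (before_kg - after_kg) * FREIGHT_PER_KG
print(saving)  # 63750 dollars of freight saved per shipment, on these assumptions
```

Even with generous assumptions about the replacement hardware being heavier per unit, the saving recurs at every one of the twenty flyaway events, before counting the knock-on reduction in power and UPS kit that Smith mentions.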
Given the constant moving, vibrations and air temperatures which can climb into the 30s, Caterham is also heavily reliant on its global support partnership with Dell. Further, with F1 regulations limiting team numbers to just 42 people on the ground, Smith alone effectively shoulders the burden of making sure the team’s IT works.
“We had a switch fail in transit when we got [to Melbourne] and we got a replacement delivered to the team’s hotel in about three hours,” Smith says of the relationship. “We warn them where we are going to be and they make sure they have the stock locally held. If anything fails they can replace it within four hours.”
Mobility, DR, Cloud
The front end of Caterham’s track-side IT is also highly reliant on mobile devices. In-garage monitors allow access to telemetry, video feeds, and lap times. And, like just about any other industry you’d care to mention, tablet PCs are making inroads.
“In the garage there are multiple screens people can look at, but we’ve changed that so that instead of having to look at the screen they can now have that on their laptops. The drivers need them as well when they are sitting in the garage so they can have their lap times. The drivers will definitely have tablets – very soon,” Smith says. “We were actually hoping that it would be for [the Melbourne] race. We have just got some of the Latitude STs… and they will make it much cleaner and neater in the garage. The driver will also be able to select what he wants. At the moment there is a screen and a remote control but it is not very nice and not very easy.”
Looking further ahead, Smith says that if he were gifted a cool million dollars to invest in his IT, he’d spend it where it really counts: getting greater reliability and portable compute power to the team.
“We are so limited on space and what we can spend, so we have everything built to the bare minimum to do [the job] reliably,” he says. “We are well placed with what we have got, but the next stage would be more virtual machines, more hosts… splitting everything up for better DR and that is really it.
“The biggest improvements we can make now are for what happens when things go wrong — when we lose power or can’t access things. You don’t ever want to get to the point where you can’t have the car leave the garage as your servers aren’t working. I have known that to happen and it is not a very nice feeling.”
While these days it seems like there isn’t a problem Cloud computing can’t fix, Smith says he’d rather keep his applications local for the greater reliability it provides.
“It isn’t the distances and the [latency] — it’s that we rely on one piece of fibre coming into the circuit,” he says. “If you put everything in the Cloud and that fibre dies, like it has done occasionally, you’re in trouble.”