Serverless architecture delivers scale, savings for Freight Exchange

Despite a sharp increase in the use of its online platform, the logistics company’s CTO says that it still only requires two instances thanks to its extensive use of Amazon’s Lambda service

Online logistics company Freight Exchange is increasingly relying on serverless computing to drive its core platform, with a shift to serverless architecture delivering substantial savings according to Martyn Hann, the company’s chief technology officer.

Hann said that Freight Exchange’s mission is to give people a better way to manage freight — both people seeking to shift goods and transport companies.

“Our overall aim — the ‘one liner’ — is to fill the empty trucks, but really it’s about building a better way for people to manage freight, and that’s on both sides of the market,” he said.

The CTO said that the service is akin to an online marketplace for independent carriers and an aggregator of services from major logistics companies.

“Ultimately from a shipper’s point of view, you’re coming on, you’re getting a quote or range of quotes straight away — which is guaranteed, you’re not waiting for people to get back to you — those prices are fixed up and you can go ahead and book it if you want to,” he explained.

The company runs exclusively on Amazon Web Services’ public cloud, and it increasingly relies on AWS Lambda: the cloud giant’s serverless compute service, which debuted in 2014 and allows functions to be run on demand without provisioning infrastructure ahead of time.

“We use it a lot,” Hann said. “Partly we had good timing — as we were growing and building our platform, that technology started becoming more mainstream.”

A key reason for the increasing use of Lambda has been that it makes sense for large parts of the company’s platform to run on demand, the CTO said.

Freight Exchange integrates with a large number of carriers, from small operators up to large logistics companies such as Toll and TNT, in order to arrange shipping.

However, it only needs to interact with those companies when someone makes a booking, Hann said.

“Given we’re primarily Australian-based at the moment, at night and most of the weekend, we don’t have any shipments to send across to them because no-one is booking freight at that time,” the CTO said.

The event-triggered approach of Lambda works well for the company because it can automatically scale as demand increases during peak periods and incur zero costs when no-one is making bookings.

“We also built our mobile app almost exclusively using it by having serverless API end points and functions behind them that take data in — store location data, respond to events, all that sort of thing — all of which is completely serverless,” the CTO said.
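In rough terms, an endpoint like the ones Hann describes pairs an HTTP-triggered route with a small function. The sketch below is illustrative Python, not Freight Exchange’s code; the event shape follows the common pattern of a JSON body posted to an HTTP-triggered function, and all names are invented.

```python
import json

def store_location_handler(event, context):
    """Hypothetical serverless handler behind an API endpoint.

    Receives a JSON body with a device id and coordinates, validates it,
    and would normally persist it to a data store before responding.
    """
    body = json.loads(event.get("body") or "{}")
    missing = [k for k in ("device_id", "lat", "lon") if k not in body]
    if missing:
        return {
            "statusCode": 400,
            "body": json.dumps({"error": f"missing fields: {missing}"}),
        }
    # Persistence step elided; a real handler would write to a database here.
    return {
        "statusCode": 200,
        "body": json.dumps({"stored": body["device_id"]}),
    }
```

Part of the appeal of this model is that the unit of deployment is just a function: the same handler can be invoked locally with a sample event for testing.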

Freight Exchange is currently in the process of carving out pieces of its core Java-based platform and turning them into Lambda-based microservices.

“We do that mainly because it makes us inherently scalable – we don’t really have to worry about scaling anymore; it just takes care of that for us,” Hann said.

“We have these functions that do one thing and that’s all they do, and we can then call them in lots and lots of different ways depending on what we’re using them for.”

A basic example given by the CTO is code dealing with PDFs of labels that need to be sent to a carrier.

The labels can be sent in several different ways: via email, uploaded to Freight Exchange’s servers, or pushed to a carrier’s server via its API.

“That’s a bit of code that we access from lots and lots of different places,” Hann said.

That code has been stripped out and turned into a standalone function, so when the labels are ready, the function can be used to send them via the appropriate method for the relevant carrier.

“The system that’s generating the document doesn’t need to know anything about that – it just says send this label.”
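The label-dispatch function Hann describes can be pictured as a single routine that hides the per-carrier delivery method from its callers. This is an illustrative Python sketch only; the carrier names, configuration table, and delivery methods are invented, and the actual transport steps are elided.

```python
# Hypothetical per-carrier delivery configuration (names invented).
CARRIER_DELIVERY = {
    "toll": "api_upload",
    "tnt": "email",
    "small_co": "server_upload",
}

def send_label(carrier: str, label_pdf: bytes) -> str:
    """Dispatch a label PDF via the method configured for the carrier.

    Callers just say "send this label"; the routing logic lives here.
    """
    method = CARRIER_DELIVERY.get(carrier, "email")  # default to email
    if method == "email":
        ...  # compose and send an email with the PDF attached
    elif method == "api_upload":
        ...  # POST the PDF to the carrier's API
    elif method == "server_upload":
        ...  # upload the PDF to the carrier's server
    return method
```

Packaged as a standalone function, this one piece of code can then be called from anywhere in the platform, which is the point Hann makes: the document-generating system never needs to know how a given carrier receives labels.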

The microservices approach has also allowed the company to use different languages to build different parts of its platform. Although its main website and core application are Java-based, Freight Exchange has also used Python and Node.js.

Hann said that in the year and a half or so since the company began getting “material volumes” of bookings through its site, the number of transactions each month has increased by a factor of 100. But, with the increasing reliance on serverless computing, hosting costs have remained about the same.

“I think what’s quite important about it is that it actually starts linking your IT costs to your transaction costs,” the CTO added.

“Instead of having your service in there churning away 24/7 and it may handle one job or it may handle a million jobs — it makes no difference to your costs — we are now saying we know what it costs to process one job in terms of the resources it consumes.”
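The cost linkage Hann points to falls out of per-invocation pricing: each job consumes a known number of invocations and a known amount of compute time, so cost scales with transactions rather than with provisioned servers. A back-of-envelope model makes this concrete; the prices and job profile below are illustrative figures, not Freight Exchange’s actuals.

```python
# Illustrative Lambda-style pricing: a flat fee per invocation plus a fee
# per GB-second of compute. Figures are examples only.
PRICE_PER_REQUEST = 0.0000002       # dollars per invocation
PRICE_PER_GB_SECOND = 0.0000166667  # dollars per GB-second

def cost_per_job(invocations: int, memory_gb: float, avg_seconds: float) -> float:
    """Resource cost of one job that triggers `invocations` function calls."""
    request_cost = invocations * PRICE_PER_REQUEST
    compute_cost = invocations * memory_gb * avg_seconds * PRICE_PER_GB_SECOND
    return request_cost + compute_cost
```

The key property is linearity: a quiet night costs nothing, and a million jobs cost a million times one job, which is exactly the link between IT cost and transaction cost that Hann describes.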

“You never quite get there — like our core website can’t be serverless, it has to be up 24/7 so it doesn’t lend itself to that,” the CTO added.

“The key thing is: We haven’t had to increase the number of servers we run, despite the growth of the business. We have the same number of instances now as we had when we started – which is two.”

Hann said he believes the cost model of serverless architecture will be a compelling driver for broader adoption by businesses.

“Most businesses have quiet time, whether it’s overnight, weekends, during the day,” he said.

The CTO said he has seen three broad stages of evolution in the journey towards serverless architecture.

“If you go back, you had on-premise servers: You went off, you bought a new server or servers and you had to persuade the CFO to hand over a load of money, and then you hoped it was going to do you for the next five years or so until you’ve depreciated it. If you’re a growing business like us, that’s kind of difficult to do because you just don’t know where you’re going to be in two, three, four years’ time.

“Businesses then went, ‘Okay now let’s go into the cloud’. We picked up our servers and moved them off to someone else’s data centre. But you still have to plan for capacity; you still have to decide how many [instances], what size, etc. — things like that.

“The upside is, of course, that you can scale them very, very easily. You’re not sinking a bunch of money into it, but you’ve still got to plan and monitor your capacity and make decisions about when’s the right time to upgrade.”

“As soon as you go serverless you effectively say, ‘I now don’t care anymore. I’m just going to let the system run as often as it needs to and increase and decrease as I need it to,’” the CTO said.

“That’s where you’re running functions as code, which is what we do a lot of, but there’s also the service element,” he added. For example, Freight Exchange uses AWS’s NoSQL database service DynamoDB.

“We just consume a database product,” Hann said. “How it gets delivered to us is irrelevant; we don’t care. We just know it’s going to be there. From our point of view we don’t have dedicated resources – we simply pay for, crudely speaking, the number of queries we run or the capacity we consume, whichever way you want to look at it.

“People now are really getting away from having to worry about any aspect of the infrastructure. Getting it into someone else’s data centre was great; you stopped having to worry about power, Internet and stuff like that, but you still had to worry about capacity. Now you’re getting away from that.”

The downside, he said, is that the move to serverless isn’t a lift and shift exercise.

“You actually have to architect things to work in a particular way,” he said. “You can’t just magically make SAP run serverless – it doesn’t work like that; that’s just not a thing.”

From that perspective it’s easiest for organisations that are developing their own applications, Hann said. It’s also a great approach for breaking down monolithic legacy systems, he added.

“This is potentially a way of starting to break those up by just taking small chunks out of them and saying, ‘Right, let’s just whack that up into a serverless function’ and put an API in front of it. If that works, great then you do the next bit, then you do the next bit, then you do the next bit.”

“If you’re going to sit down and go, ‘Right let’s rewrite our entire application to be serverless’, I would say that’s probably a bad idea,” he said.

“Maybe you’ll eventually get rid of the whole thing; maybe you won’t ever quite get to that. But it doesn’t matter: You’re all the time chipping away and the core bit is getting smaller and you don’t need the same resources. You’re just gradually getting rid of it, but a chunk at a time.”
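The chunk-at-a-time migration Hann describes is often called the strangler pattern: a thin routing layer sends carved-out features to new serverless functions while everything else still hits the legacy core. A minimal, purely illustrative Python sketch (the paths and names are invented):

```python
# Paths already extracted from the monolith into serverless functions.
# Adding an entry here "strangles" another piece of the legacy core.
EXTRACTED = {"/labels/send"}

def route(path: str) -> str:
    """Decide which backend serves a request during the migration."""
    if path in EXTRACTED:
        return "serverless"  # handled by a new function behind an API
    return "monolith"        # still served by the legacy Java core
```

Each successful extraction grows `EXTRACTED` and shrinks the monolith, which mirrors the “do the next bit, then the next bit” approach described above.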
