Financial services company IOOF has been through a half decade journey involving a major shift in its core superannuation system from a monolithic application to a microservices-based architecture.
As part of the process, the relationship between the company's software development and infrastructure teams has changed, with self-service infrastructure playing a key role in facilitating the microservices approach.
IOOF provides financial advice, superannuation, investment and trustee services. It has about $150 billion invested on behalf of 650,000 members.
Part of its growth has come through acquisitions, and in 2009 it merged with Australian Wealth Management, which itself had undertaken a significant number of acquisitions.
As a result, the post-merger IOOF inherited a significant technical debt in the form of multiple IT systems with overlapping functionality.
The company embarked on a program to consolidate its data centres and infrastructure.
"In particular, we really needed to focus on consolidating our superannuation platforms," Fabian Iannarella, IOOF's team leader for infrastructure service capability, told the Gartner IT Infrastructure, Operations and Data Centre Summit in Sydney earlier this week.
IOOF decided to consolidate on ASIS: The AWM Super Investment System. ASIS comprises a core system and database, overnight processing, public-facing systems, such as secure websites for customers and financial advisers, and an investment system that interfaces with trading systems to buy shares.
"Only thing it hasn't got is a coffee button," said Iannarella.
"Even though this system is great, it does have problems and we saw these problems pretty much up front," he added.
The system was originally built and maintained by a small group of eight developers. Post-merger, 60 developers were maintaining the extremely complex "monster code base", which had more than a million lines of code.
"There's a lot going on in there and it's very difficult to co-ordinate 60 people making changes and then releasing those changes," IOOF's infrastructure services lead said.
"All that co-ordination takes a lot of time," Iannarella added.
"The tests take a lot of time to run and we pretty quickly ... realised that in order to move forward with that platform, we needed to change it and we needed to change the way we thought about it."
IOOF decided to switch architecture from a monolithic system to microservices.
"We wanted to break up the system into a number of interconnected services and each one of these will specialise in a very specialised task," Karl Chu, a software development and continuous delivery engineer at ThoughtWorks who worked on the project, told the Gartner conference.
The switch would make it possible to scale the system horizontally, make changes easier to deploy and open more opportunities for innovation.
Instead of dealing with a million lines of code, "you probably have a few thousand lines of code in one microservice and the backend data tends to be a lot simpler as well," Chu said.
"So instead of a relational database that would have 50, 100 tables, which is quite typical for large systems, you might have two, three or at most a handful of database tables at the backend."
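Chu's point about backend simplicity can be sketched concretely. The toy service backend below owns just two tables rather than the monolith's 50 to 100; the table and column names are invented for illustration and are not IOOF's actual schema.

```python
import sqlite3

# A slim, service-owned backend: one microservice, two tables.
# Schema is illustrative only, not IOOF's real data model.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE members (
        member_id  INTEGER PRIMARY KEY,
        name       TEXT NOT NULL
    );
    CREATE TABLE contributions (
        member_id  INTEGER REFERENCES members(member_id),
        amount_aud REAL NOT NULL
    );
""")
conn.execute("INSERT INTO members VALUES (1, 'A. Member')")
conn.execute("INSERT INTO contributions VALUES (1, 500.0), (1, 250.0)")

# The service answers one narrow question about its own data.
total, = conn.execute(
    "SELECT SUM(amount_aud) FROM contributions WHERE member_id = 1"
).fetchone()
print(total)  # → 750.0
```

Because the service owns so little data, its queries, migrations and tests all stay correspondingly small.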
Microservices would offer the ability to "mix and match" technology stacks, Chu said, which was important given ASIS was originally written in Delphi.
"There's not a whole lot of people out there with the skill to make changes to that system, so finding people to maintain it was a challenge," Chu said.
With microservices, if IOOF decides to make a change to ASIS "we can choose the best technology, the best language, best data backend system," the developer added.
The change in architecture has lowered the risk of experimenting with new technologies. "We might try something new like Golang and see how it works," he explained. "And the worst case scenario it doesn't work out [and] we throw away a few thousand lines of code and start again. No big deal."
However, there was a catch during the shift to microservices: "Instead of having one machine that we need to maintain, now each microservice probably will have its own machine. So now we have lots of machines to provision, configure and maintain. And that creates an additional demand on the infrastructure team."
"It wouldn't be a problem if our server provisioning process was very simple and quick. Unfortunately our process was highly manual," Chu said.
Server provisioning previously involved the development team filing a support ticket with the infrastructure services team. The infrastructure team would manually allocate names and IP addresses, keeping track of both in Excel spreadsheets, then configure and create a new VM.
It would take, optimistically, about three weeks from initial ticket to the developers getting their new server. Miscommunication between developers and the infrastructure team over requirements would sometimes delay the process even further.
"Suffice it to say, the delivery teams, they were not very happy," Chu said. The infrastructure team would get blamed for delays and there was tension between the two sides: the software delivery side was unhappy at how long it took to get a server, while the infrastructure team was unhappy about the amount of manual, mundane work that provisioning involved.
"So we looked at all these problems both technically and culturally and started to think about 'how are we going to dig ourselves out of this hole'," Chu said.
A decision was made to move towards self-serve infrastructure, freeing up the infrastructure team from the "mind-numbing work of picking numbers out of a spreadsheet and watching the console scroll by".
"But more importantly, we wanted to empower the delivery teams to take control over the provisioning process," Chu said. The teams wanted to move in the direction of DevOps and continuous delivery, Chu said.
IOOF started off by automating the provisioning process, assigning a dedicated team of a couple of developers and Iannarella to the project. "We automated that bit by bit," Chu said, freeing up the infrastructure team from some of the manual work they had previously been engaged in.
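One obvious candidate for that bit-by-bit automation is the spreadsheet step described above: picking the next free IP address by hand. A minimal sketch of how that allocation might be scripted is shown below; the subnet and the set of addresses in use are invented for illustration, not IOOF's actual network.

```python
import ipaddress

def next_free_ip(subnet: str, in_use: set) -> str:
    """Return the first host address in `subnet` not already allocated,
    replacing a manual lookup in an Excel spreadsheet."""
    for host in ipaddress.ip_network(subnet).hosts():
        if str(host) not in in_use:
            return str(host)
    raise RuntimeError("subnet exhausted")

# Hypothetical record of addresses already handed out.
allocated = {"10.20.0.1", "10.20.0.2"}
print(next_free_ip("10.20.0.0/24", allocated))  # → 10.20.0.3
```

Automating even a small step like this removes one source of transcription errors and one round of back-and-forth between teams.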
One of the delivery teams was recruited as a guinea pig to make sure that the changes that were being made were useful.
"We ... wanted to partner with the delivery teams — that cultural factor that comes into play, that we really want to break down that ... cultural barrier between infrastructure and delivery teams," Chu said.
IOOF ended up using Puppet for configuration management. The team running the project helped the guinea pig delivery team write the Puppet manifests they needed.
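A manifest of the kind the delivery team would have written might look something like the sketch below. The node pattern, package, and service names are invented for illustration and are not IOOF's actual manifests.

```puppet
# Hypothetical role for a host running one ASIS microservice.
node /^asis-svc-\d+$/ {
  # Runtime the service needs; package name is an assumption.
  package { 'openjdk-8-jre-headless':
    ensure => installed,
  }

  # Keep the (hypothetical) service running and enabled on boot.
  service { 'asis-member-api':
    ensure  => running,
    enable  => true,
    require => Package['openjdk-8-jre-headless'],
  }
}
```

Once a role is captured in a manifest like this, every new server configured from it comes up identically, with no manual console work.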
At that stage the process wasn't completely automated, and the delivery teams still needed to engage with infrastructure services, but the whole process was reduced from around three weeks to one or two days.
That caught the attention of other delivery teams, Chu said. However, the growing interest was double-edged: the infrastructure team was worried about running out of licences in IOOF's VMware environment.
"The natural progression from there is that we extended our toolchain to also allow the delivery teams to provision into AWS, just to manage the capacity issue, so at least for dev/test they can go to the cloud," Chu said.
"And we wrapped everything up inside an API that the delivery teams can call," he added. "After we had done that ... we just opened up the API to everyone who wants to use it."
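From a delivery team's side, wrapping provisioning in an API means a server request becomes a single call from their own tooling. The sketch below shows what building such a request might look like; the endpoint, field names, and targets ("aws", "vmware") are assumptions for illustration, not IOOF's real API.

```python
import json

def build_provision_request(service, env, target="aws", cpus=2, ram_gb=4):
    """Return the (url, body) pair a delivery team's tooling might POST
    to a hypothetical internal provisioning API."""
    url = f"https://provision.internal/api/v1/{target}/servers"
    body = json.dumps({
        "service": service,   # microservice the server will host
        "environment": env,   # e.g. "dev", "test", "prod"
        "cpus": cpus,
        "ram_gb": ram_gb,
    })
    return url, body

url, body = build_provision_request("member-api", "dev")
print(url)  # → https://provision.internal/api/v1/aws/servers
```

With the same API fronting both VMware and AWS, dev/test workloads can be steered to the cloud without the delivery teams changing how they ask for a server.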
Now the provisioning process has been reduced to around one to two hours. The role of the infrastructure services team has changed from provisioning servers to maintaining the provisioning platform they have built.
"Our relationship with the delivery teams also changed," Chu said.
"Our role has changed from being a pair of hands that build servers for them, to become advisers to the delivery teams, to advise them on any infrastructure-related issues and help them solve infrastructure-related problems."
Software projects can now be up and running within a couple of hours, Iannarella said, and software development efforts aren't constrained by the limitations of on-premises infrastructure.
"You run your project, you find out... what your workload's like, then you can talk to infrastructure and find out if it's worth bringing that workload back in-house, so we go procure more hardware if we decide it's needed."
"The teams are really excited about this," Iannarella said. "They're jumping in, looking at the code that we've written and ... making suggestions on features. Some guys are actually getting in there [and] writing the features themselves."
"Everybody's involved that wants to be involved," Iannarella said.
"Delivery teams are happy - they can build their own servers. Infrastructure's happy - they haven't had a ticket raised for a server build in over 12 months," he added.
Without the shift in IOOF's infrastructure delivery model, the move to microservices wouldn't have been feasible, Iannarella said.
The next challenge is working on how microservices are deployed. Currently, ASIS comprises around 40 microservices and that number is growing rapidly.
Iannarella said the ideal situation would be a single click to deploy all the microservices required in a particular environment, whether it's production or dev. IOOF has decided to employ Docker for the task, he added.
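With Docker, that one-click goal is commonly expressed as a compose file describing every service in an environment, brought up with a single command. The fragment below is purely illustrative; the service names, images, and ports are invented, not IOOF's actual stack.

```yaml
# Hypothetical environment definition for a couple of ASIS microservices.
version: "2"
services:
  member-api:
    image: registry.internal/asis/member-api:latest
    ports:
      - "8080:8080"
  contribution-service:
    image: registry.internal/asis/contribution-service:latest
    depends_on:
      - member-api
```

With a file like this per environment, `docker-compose up -d` starts every service it lists, which is one plausible route to the single-click deployment Iannarella describes.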
A key lesson of the whole process has been to start with something small and use it straight away, Iannarella said. "Just get it out there." Strive to make it useful, and remember that it's okay to fail, he said.
Showing off incremental progress and getting feedback were also important.
Having a dedicated team with the right expertise is vital, he added. "Remove all their distractions," he said.
"The most important thing we found is you need buy-in from the top," Iannarella said.