How workflow capabilities benefit continuous delivery environments

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

Wikipedia defines workflow as "an orchestrated and repeatable pattern of business activity enabled by the systematic organization of resources into processes" -- processes that make things or just generally get work done. Manufacturers can thank workflows for revolutionizing the production of everything from cars to chocolate bars. Management wonks have built careers on applying workflow theories like Lean and TQM to their business processes.

What does workflow mean to the people who create software? Years ago, probably not much. While this is a field where there's plenty of complicated work to move along a conceptual assembly line, the actual process of building software historically has included so many zigs and zags that the prototypical pathway from A to Z is less of a straight line and more of a sideways fever chart.

But today, workflow, as a concept, is gaining traction in software circles, with the universal push to increase businesses' speed, agility and focus on the customer. It's emerging as a key component in an advanced discipline called continuous delivery that enables organizations to conduct frequent, small updates to apps so companies can respond to changing business needs.

So, how does workflow actually work in continuous delivery environments? How do companies make it happen? What kinds of pains have they experienced that have pushed them to adopt workflow techniques? And what kinds of benefits are they getting?

To answer these questions, it makes sense to look at how software moves through a continuous delivery pipeline. It goes through a series of stages to ensure that it's being built, tested and deployed properly. While organizations set up their pipelines according to their own needs, a typical pipeline might involve a string of performance tests, Selenium tests for multiple browsers, Sonar analysis, user acceptance tests and deployments to staging and production. To tie the process together, an organization would probably use a set of orchestration tools such as the ones available in open source Jenkins.
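To make that concrete, the sketch below shows roughly what such a pipeline could look like when expressed as a Jenkins Workflow (Pipeline) script. It is a minimal illustration only; the stage names, Maven profiles, helper scripts and environment names are assumptions made for the example, not a recommended configuration.

    // A minimal, illustrative Jenkins Workflow (Pipeline) script.
    // Stage names, commands and environment names are placeholders.
    node {
        stage('Build') {
            sh 'mvn clean package'                        // compile and package the application
        }
        stage('Performance tests') {
            sh 'mvn verify -Pperformance'                 // assumed Maven profile for load tests
        }
        stage('Selenium tests') {
            sh 'mvn verify -Pselenium -Dbrowser=firefox'  // same suite run against
            sh 'mvn verify -Pselenium -Dbrowser=chrome'   // multiple browsers
        }
        stage('Sonar analysis') {
            sh 'mvn sonar:sonar'                          // static analysis via SonarQube
        }
        stage('Deploy to staging') {
            sh './deploy.sh staging'                      // placeholder deployment script
        }
        stage('User acceptance tests') {
            sh './run-uat.sh staging'                     // placeholder UAT runner
        }
        stage('Deploy to production') {
            sh './deploy.sh production'
        }
    }

Because each step runs only if everything before it succeeded, every stage acts as a gate: a failed test run stops the build from ever reaching staging or production.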

Assessing your processes

Some software processes are simpler than others. If the series of steps in a pipeline is simple and predictable enough, it can be relatively easy to define a pipeline that repeats flawlessly -- like a factory running at full capacity.

But this is rare, especially in large organizations. Most software delivery environments are much more complicated, requiring steps that need to be defined, executed, revised, run in parallel, shelved, restarted, saved, fixed, tested, retested and reworked countless times.

Continuous delivery itself smooths out these uneven processes to a great extent, but it doesn't eliminate complexity all by itself. Even in the most well-defined pipelines, steps are built in to sometimes stop, veer left or double back over some of the same ground. Things can change -- abruptly, sometimes painfully -- and pipelines need to account for that.

The more complicated a pipeline gets, the more time and cost get piled onto a job. The solution: automate the pipeline. Create a workflow that moves the build from stage to stage, automatically, based on the successful completion of a process -- accounting for any and all tricky hand-offs embedded within the pipeline design.

Again, for simple pipelines, this may not be a hard task. But for complicated pipelines, there are a lot of issues to plan for. Here are a few (a sketch following the list shows how a workflow script might handle several of them):

  • Multiple stages -- In large organizations, you may have a long list of stages to accommodate, with some of them occurring in different locations, involving different teams.
  • Forks and loops -- Pipelines aren't always linear. Sometimes, you'll want to build in a re-test or a re-work, assuming some flaws will creep in at a certain stage.
  • Outages -- They happen. If you have a long pipeline, you want to have a workflow engine ensure that jobs get saved in the event of an outage.
  • Human interaction -- For some steps, you want a human to check the build. Workflows should accommodate the planned -- and unplanned -- intervention of human hands.
  • Errors -- They also happen. When errors crop up, you want an automated process to let you restart where you left off.
  • Reusable builds -- In the case of transient errors, the automation engine should let a build's artifacts be reused rather than rebuilt, so the process can move forward without starting over.
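Here is a rough sketch, again in Jenkins Workflow style, of how a pipeline script might address several of these concerns: forks via parallel branches, transient errors via retries, human interaction via an input step, and reusable builds via stashed artifacts. The commands, labels and file paths are placeholders, not a definitive implementation.

    // Illustrative sketch only -- commands, labels and paths are placeholders.
    stage('Build') {
        node {
            sh 'mvn clean package'
            // Reusable builds: keep the artifact so later stages reuse it instead of rebuilding
            stash name: 'app', includes: 'target/*.war'
        }
    }
    stage('Parallel tests') {
        // Forks: run the browser suites side by side on separate executors
        parallel firefox: {
            node { unstash 'app'; sh 'mvn verify -Pselenium -Dbrowser=firefox' }
        }, chrome: {
            node { unstash 'app'; sh 'mvn verify -Pselenium -Dbrowser=chrome' }
        }
    }
    stage('Deploy to staging') {
        // Errors: retry transient failures without restarting the whole run
        retry(3) {
            node { unstash 'app'; sh './deploy.sh staging' }
        }
    }
    stage('Approval') {
        // Human interaction: pause (outside any node, so no executor is held)
        // until someone signs off on the staging deployment
        input message: 'Staging looks good. Promote to production?'
    }
    stage('Deploy to production') {
        node { unstash 'app'; sh './deploy.sh production' }
    }

Outage resilience works differently: rather than being written into the script, it comes from the engine itself, which persists the state of a running pipeline so a long job can pick up where it left off after a restart.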

In the past, software teams have automated parts of the pipeline process using a variety of tools and plugins. They have combined the resources in different ways, sometimes varying from job to job. Pipelines would get defined, and builds would move from stage to stage in a chain of jobs -- sometimes automatically, sometimes with human guidance, with varying degrees of success.

As the pipeline automation concept has advanced, new tools are emerging that account for many of the variables that have thrown wrenches into complex pipelines over the years. Some of these tools come from vendors with big stakes in the continuous delivery process -- such as Chef, Puppet, Serena and Pivotal. Other popular continuous delivery tools have their roots in open source, such as Jenkins CI.

Speaking of Jenkins, the community recently introduced functionality specifically aimed at automating workflows. The Jenkins Workflow plugin gives a software team the ability to automate the whole application lifecycle -- simple and complex workflows, automated processes and manual steps alike. Teams can now orchestrate the entire software delivery process with Jenkins, automatically moving code from stage to stage and measuring the performance of an activity at any stage of the process.

Over the last 10 years, continuous integration brought tangible improvements to the software delivery lifecycle -- improvements that enabled the adoption of agile delivery practices. The industry continues to evolve. Continuous delivery has given teams the ability to extend beyond integration to a fully formed, tightly integrated delivery process drawing on tools and technologies that work in concert.

Workflow brings continuous delivery forward another step, helping teams link together complex pipelines and automate tasks every step of the way. For those who care about software, workflow means business.

CloudBees, the Enterprise Jenkins Company, is the continuous delivery (CD) leader. CloudBees provides solutions that enable IT organizations to respond rapidly to the software delivery needs of the business. Building on the strength of Jenkins CI, the world's most popular open source continuous delivery hub and ecosystem, the CloudBees Continuous Delivery Platform provides a wide range of CD solutions for use on-premise and in the cloud that meet the unique security, scalability and manageability needs of enterprises. The CloudBees Jenkins-based CD solutions support many of the world's largest and most business-critical deployments.
