Jenkins in the Ops space is in general already painful. Lately the deprecation of the multiple-scms plugin caused some headache, because we relied heavily on it to generate pipelines in a Seedjob based on the structure inside secondary repositories. We more or less started from scratch and now ship parameterized pipelines defined in Jenkinsfiles in those secondary repositories. Basically that is the way it should be: you store the pipeline definition along with the code you'd like to execute. In our case that is mostly terraform and ansible.

Problem

Our directory structure is roughly "stage" -> "project" -> "service". We'd like to have one pipeline job per project, which dynamically reads all service folder names and offers them as available parameters. A service folder is the smallest entity we manage with terraform in a separate state file.
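
To make that concrete, a checkout could look roughly like this (the service folder names are made up for illustration, the project and Jenkinsfile paths match the examples further down):

terraform/
    dev/
        myproject-I-dev/
            Jenkinsfile
            service-foo/
            service-bar/
    prod/
        myproject-I-prod/
            Jenkinsfile
            service-foo/

Each service folder holds its own terraform configuration and state, and those folder names are what we want to offer as job parameters.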

Now Jenkins pipelines are intentionally limited, but you can add some groovy at will if you whitelist its usage in Jenkins. You have to click through some security though to make it work.
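
In practice that means approving a few method signatures under "Manage Jenkins" -> "In-process Script Approval" after the job failed on them. For the File handling used in the Jenkinsfile below, expect signatures roughly like the following - the exact list depends on your Jenkins and plugin versions:

new java.io.File java.lang.String
method java.io.File listFiles
method java.io.File isDirectory
method java.io.File getName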

Jenkinsfile

This is basically a commented version of the Jenkinsfile we now copy around as a template, to be manually adjusted per project.

// Syntax: https://jenkins.io/doc/book/pipeline/syntax/
// project name as we use it in the folder structure and job name
def TfProject = "myproject-I-dev"
// directory relative to the repo checkout inside the jenkins workspace
def jobDirectory = "terraform/dev/${TfProject}"
// informational string to describe the stage or project
def stageEnvDescription = "DEV"

/* Attention please, if you rebuild the Jenkins instance consider the following:

- You have to run this job at least *thrice*. It first has to check out the
repository, then you have to add permissions for the groovy part, and on
the third run you can gather the list of available terraform folders.

- As a safeguard the first folder name is always the invalid string
"choose-one". That prevents accidental execution of a random project.

- If you add a new terraform folder you have to run the "choose-one" dummy rollout so
the dynamic parameters pick up the new folder. */

/* Here we hardcode the path to the correct job workspace on the jenkins host, and
   discover the service folder list. We have to filter it slightly to avoid temporary folders created by Jenkins (like @tmp folders). */
List tffolder = new File("/var/lib/jenkins/jobs/terraform ${TfProject}/workspace/${jobDirectory}").
    listFiles().
    findAll { it.isDirectory() && it.name ==~ /(?i)[a-z0-9_-]+/ }.
    sort()
/* Ensure the "choose-one" dummy entry is always the first in the list, otherwise
   an initial execution might run a random project. By default the first choice is
   used if none is selected. */
tffolder.add(0,"choose-one")

pipeline {
    agent any
    /* Show a choice parameter with the service directory list we stored
       above in the variable tffolder */
    parameters {
        choice(name: "TFFOLDER", choices: tffolder)
    }
    // Configure log rotation and coloring of the console output.
    options {
        buildDiscarder(logRotator(daysToKeepStr: "30", numToKeepStr: "100"))
        ansiColor("xterm")
    }
    // Set some variables for terraform to pick up the right service account.
    environment {
        GOOGLE_CLOUD_KEYFILE_JSON = '/var/lib/jenkins/cicd.json'
        GOOGLE_APPLICATION_CREDENTIALS = '/var/lib/jenkins/cicd.json'
    }

    stages {
        stage('TF Plan') {
            /* Make sure on every stage that we only execute if the
               choice parameter is not the dummy one. Ensures we
               can run the pipeline smoothly for re-reading the
               service directories. */
            when { expression { params.TFFOLDER != "choose-one" } }
            steps {
                /* Initialize terraform and generate a plan in the selected
                   service folder. */
                dir("${params.TFFOLDER}") {
                    sh 'terraform init -no-color -upgrade=true'
                    sh 'terraform plan -no-color -out myplan'
                }
                // Read in the repo name we act on for informational output.
                script {
                    remoteRepo = sh(returnStdout: true, script: 'git remote get-url origin').trim()
                }
                echo "INFO: job *${JOB_NAME}* in *${params.TFFOLDER}* on branch *${GIT_BRANCH}* of repo *${remoteRepo}*"
            }
        }
        stage('TF Apply') {
            /* Run terraform apply only after manual acknowledgement, we have to
               make sure that the when { } condition is actually evaluated before
               the input. Default is input before when. */
            when {
                beforeInput true
                expression { params.TFFOLDER != "choose-one" }
            }
            input {
                message "Cowboy would you really like to run **${JOB_NAME}** in **${params.TFFOLDER}**"
                ok "Apply ${JOB_NAME} to ${stageEnvDescription}"
            }
            steps {
                dir("${params.TFFOLDER}") {
                    sh 'terraform apply -no-color -input=false myplan'
                }
            }
        }
    }
    post {
        failure {
            // You can also alert a noisy chat platform here on failures if you like.
            echo "job failed"
        }
    }
}

job-dsl side of the story

Having all those when { } conditions in the pipeline stages above allows us to make the pipeline jobs depend on successful Seedjob executions and simply trigger them from there. This is important because a Seedjob execution resets all pipeline jobs, so the dynamic parameters are gone. By making sure the jobs can be re-executed safely, and doing that automatically, we always end up with up-to-date parameterized pipelines whenever the Seedjob ran successfully.

The job-dsl script looks like this:

import javaposse.jobdsl.dsl.DslScriptLoader;
import javaposse.jobdsl.plugin.JenkinsJobManagement;
import javaposse.jobdsl.plugin.ExecuteDslScripts;
def params = [
    // Defaults are repo: mycorp/admin, branch: master, jenkinsFilename: Jenkinsfile
    pipelineJobs: [
        [name: 'terraform myproject-I-dev', jenkinsFilename: 'terraform/dev/myproject-I-dev/Jenkinsfile', upstream: 'Seedjob'],
        [name: 'terraform myproject-I-prod', jenkinsFilename: 'terraform/prod/myproject-I-prod/Jenkinsfile', upstream: 'Seedjob'],
    ],
]

params.pipelineJobs.each { job ->
    pipelineJob(job.name) {
        definition {
            cpsScm {
                // assume admin and branch master as a default, look for Jenkinsfile
                def repo = job.repo ?: 'mycorp/admin'
                def branch = job.branch ?: 'master'
                def jenkinsFilename = job.jenkinsFilename ?: 'Jenkinsfile'
                scm {
                    git("ssh://git@github.com/${repo}.git", branch)
                }
                scriptPath(jenkinsFilename)
            }
        }
        properties {
            pipelineTriggers {
                triggers {
                    if(job.upstream) {
                        upstream {
                            upstreamProjects("${job.upstream}")
                            threshold('SUCCESS')
                        }
                    }
                }
            }
        }
    }
}
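
The Seedjob itself is not part of this post. Just as a sketch of how the pieces fit together: a seed job created with the same plugin could check out the admin repository and process the DSL script above via the "Process Job DSLs" build step. The script path jobs/terraform.groovy and the polling trigger are assumptions for illustration:

job('Seedjob') {
    scm {
        git('ssh://git@github.com/mycorp/admin.git', 'master')
    }
    triggers {
        // poll the repository so new or changed job definitions are picked up
        scm('H/15 * * * *')
    }
    steps {
        // "Process Job DSLs" step provided by the job-dsl plugin
        dsl {
            external('jobs/terraform.groovy')
        }
    }
}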

Disadvantages

There are still a bunch of disadvantages you have to consider.

Jenkins Rebuilds are Painful

In general we rebuild our Jenkins instances quite frequently. With the approach outlined here in place, you have to allow the groovy script execution after the first Seedjob execution, and then go through at least another round of running the job, approving permissions, and running the job again, until everything is finally up and running.

Copy around Jenkinsfile

Whenever you create a new project you have to copy Jenkinsfiles around for each and every stage and modify the variables at the top accordingly.

Keep the Seedjob definitions and Jenkinsfile in Sync

You not only have to copy the Jenkinsfile around, you also have to keep the variables and names in sync with what you define for the Seedjob. Sadly the pipeline environment variables are not available outside of the pipeline when we execute the groovy parts, which is why the workspace path has to be hardcoded.

Kudos

This setup was crafted with a lot of help from Michael and Eric.