
A Pipelines First Approach

06 December, 2018 - 6 min read

Writing applications from a pipelines-first approach can be a confusing place to start. What is there to do before any application code exists? Looking at the state of software development today, with an empathetic eye towards the demands of modern business, I assert that starting any project with a build pipeline first helps facilitate the rapid pace of development the market has come to expect. Here I'll attempt to make my case and give you a little insight into my thought process during project initialization, and why I value this approach to development. Doing so requires understanding why a pipelines-first approach is better suited to the long-term maintenance of a project, how deploying nothing first sets the project up for successful and stress-free deployments far into the future, and what role the build server plays at project initialization.

Understanding why a pipelines-first approach is better suited to maintaining a project is like understanding why rappers write bars to beats, and not the other way around. Everyone gets excited about a new project, and the temptation to jump in and start writing code is strong. But look at it another way. Rappers don't write their bars and then try to lay them over a beat (the good ones don't, anyway). You have to feel the beat and write lyrics that play off its established timing; that's how you establish a cadence. In the same manner, DevOps professionals should know better than to start writing code for an application without first considering where it will run, how it will run, how users will connect to it, what updating the application looks like, and how it is deployed. Not heeding this warning results in a hodgepodge of patchwork that may work, but will likely involve more toil, longer iteration cycles, and a weaker understanding of the overall stack than we should tolerate. Establishing a pipelines-first approach is writing the beat before the bars. So what does it look like in practice?

Write no code, deploy nothing. Okay, that's a little cheesy and isn't quite right, but it conveys an approach. Let's establish a type of application so we can talk in more specific terms. The application we will be writing (theoretically) is a command line application that we will use to manage a lot of our daily work; we'll call it the 'devops-cli'. We have decided that NodeJS and the Commander npm package will be our 'framework' of choice. What the application does is of no importance at this moment, as we can already imply some of the ops-related work. Knowing it's a NodeJS app, we can infer that we can easily containerize it. We then have a choice to make regarding how to run the container. At this point we can start to deploy nothing first. Create your directory, initialize a repository, add a readme and a Dockerfile. At the top of the Dockerfile add a FROM line like FROM node:10 and leave it just like that. Build the container locally and run it. Watch NodeJS start up and promptly exit. Great! You have an application that does nothing. Time to deploy it.
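If you want to follow along, this is roughly everything that exists at this stage (the image tag is arbitrary, I just like naming things after the project):

```
# Dockerfile - the entire 'application' for now
FROM node:10
```

```sh
# Build and run the do-nothing container locally.
docker build -t devops-cli .
# The base image's default command launches the node REPL; without a TTY it exits right away.
docker run --rm devops-cli
```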

Deploying an application from a pipelines-first approach requires a build server, and understanding what it should do at this phase is a little abstract. Most orgs choose a hosted service like Travis CI or run something like Jenkins as a build server. Personally, I like to use K8s Jobs in conjunction with the Tekton CRDs in my personal autoscaling, preemptible cluster. Whatever your preferred method, the time has come to start writing a build spec. The only thing this iteration of the pipeline should do is build the container from the Dockerfile. There's no need to push it to a registry yet; after all, it's just a node:10 container. At this point you can finally npm init and set up the package, but don't get too excited yet. First things first: time to write some unit tests. So you write one that tests a function, let's say sendSlack(msg). Great! Now you need to update your Dockerfile to match. It should now declare a WORKDIR and COPY your code into it. Updating the build spec at this point to push the container to a registry and roll out a deployment would be a great next step. Now, when you commit and push, your build server should kick off a build automatically, build the container, push it to a registry, and trigger a deployment job that pulls down that container and runs it on your orchestration platform of choice. Deploying this container will trigger your unit tests to run, and they will fail because you haven't written any actual application code yet; there is no sendSlack() function. But that's about to change.
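To make that concrete, here's a rough sketch of what the Dockerfile and a Travis-CI-style build spec might look like at this stage. The registry host, deployment name, and script entries are placeholders I made up for illustration, not a prescription:

```
# Dockerfile - now copies the package in; the container just runs the test suite for the moment
FROM node:10
WORKDIR /usr/src/devops-cli
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "test"]
```

```yaml
# .travis.yml - hypothetical build spec: build the image, push it, roll out a deployment
language: node_js
node_js:
  - "10"
services:
  - docker
script:
  - docker build -t registry.example.com/devops-cli:$TRAVIS_COMMIT .
deploy:
  provider: script
  script: >-
    docker push registry.example.com/devops-cli:$TRAVIS_COMMIT &&
    kubectl set image deployment/devops-cli
    devops-cli=registry.example.com/devops-cli:$TRAVIS_COMMIT
  on:
    branch: master
```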

Finally, we get to the good part. So far you have a build pipeline, a place to host your containers at rest, an orchestration platform to run the container, and some unit tests. Next, it's time to wire up that sendSlack() method. Write up the function, commit, and push. You should be able to set which command is called dynamically from within the build pipeline. From this point you can start to build the meat of your application knowing that each commit will lead to a build that is actually deployed straight to a production environment. Adding other environments is, of course, always recommended, and the end goal here is to deploy a new container each time a branch is merged. No one should be committing straight to master, in any case, ever; that's not what I am advocating for.
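In case it helps, here's a minimal sketch of what that first real function might look like, using nothing but Node's built-in https module (available as shown since Node 10.9) and assuming the webhook URL comes in through an environment variable; the variable name is my invention, not part of the spec:

```js
// sendSlack.js - post a message to a Slack incoming webhook.
// SLACK_WEBHOOK_URL is an assumed environment variable, set however you manage secrets.
const https = require('https');

function sendSlack(msg) {
  return new Promise((resolve, reject) => {
    const body = JSON.stringify({ text: msg });
    const req = https.request(
      process.env.SLACK_WEBHOOK_URL,
      { method: 'POST', headers: { 'Content-Type': 'application/json' } },
      (res) => resolve(res.statusCode)
    );
    req.on('error', reject);
    req.write(body);
    req.end();
  });
}

module.exports = { sendSlack };
```

From here Commander can expose it as a devops-cli subcommand, and the pipeline you already built handles getting it out the door.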

I hope that I've been able to persuade you to at least try out a pipelines-first approach to development. I believe that doing so will grant you a better understanding of the SDLC from start to finish, as well as reduce the toil of maintaining your application and its respective build spec over time. Give it a shot; what have you got to lose aside from some spaghetti code and headaches?
