Editor’s Note: This blog post was reprinted with permission from the author.
Working in the technology field for the last 20+ years, I have always been passionate about automation, and I am so happy to finally see wide-scale adoption across all industries. New automation products and services now dominate the marketplace, and IT leaders are echoing phrases like DevOps, Infrastructure as Code (IaC), and continuous integration and delivery in their boardrooms.
In this blog, I will try to make sense of all of this and at the same time add some perspective within the context of DevOps. In future blogs, I will drill down into the automation aspects of DevOps.
DevOps applies time-tested manufacturing principles, originally used to produce physical goods, to developing and releasing software. It is common knowledge that Henry Ford pioneered the modern manufacturing process and the assembly line. In an effort to surpass the American auto industry, Toyota sent Taiichi Ohno to the US to learn about American manufacturing from Ford. After his visit, Ohno developed the Toyota Production System (TPS) and the Ten Precepts of lean manufacturing.
Both men focused on maximizing the flow of resources (inventory) through the system while reducing defects. Incidentally, these are the same goals as deploying software. That is, if we can deploy software efficiently, with minimal defects (bugs), by testing before releasing to production, preferably in an automated way, we will not only minimize risks but also reduce our deployment and management costs. In this way, cost savings will be a result and not the goal.
Traditional model for deploying software
DevOps is quite different from the traditional software development model, where many developers work on various parts of the application and then merge the code at the end to create the finished product. If there is a flaw or a change in requirements, the developers have to start over from the beginning. Below are some familiar pain points of the traditional model for deploying software:
- Complex organizational structure requiring several teams (handoffs) to complete a single development project.
- Complex software delivery pipeline with many manual operations.
- Knowledge drain after developers leave — No updated documentation or expertise available to support the application. After the project is “completed”, the development teams move on to the next big project.
- During application outages, it is often difficult to identify who needs to be involved or where to find the experts.
- Multiyear development projects that do not deliver incremental value.
- Perception of slow IT that serves as justification to go around IT.
- Information security seen as a road block.
- Infighting between teams — us versus them.
- Resources cannot focus on new opportunities because they are tied up in multi-year projects.
- Teams are terrified to make changes.
- Waiting for new environments to be provisioned (development, testing, etc.).
- Overproduction and overprocessing — users provision the largest VMs because they are afraid of delays in provisioning new capacity, solutions exceed their specs, unnecessary features get built, etc.
- Teams work long hours moving code between environments, testing new features and codes, developing tests, fixing defects, etc.
- While code is waiting to be shipped, revenue is lost.
With DevOps, we can apply those same manufacturing principles to deploying and managing software. We can implement the equivalent of Ford’s assembly line, a continuous integration and continuous delivery (CI/CD) pipeline, to build, test and release software as we will see firsthand in future blogs.
DevOps builds upon lean manufacturing principles such as focusing on customer value, paying attention to time, eliminating waste, sharing learning, reducing cycle time, avoiding batching, finding bottlenecks, and identifying and elevating constraints. According to Gene Kim, coauthor of The DevOps Handbook: "IT is like a factory floor. It is about adding value and increasing flow, reducing wastage and reducing friction between developers and operators; thus, making more profits for the firm."
Implementing DevOps is not a trivial task and will not happen overnight. It is a journey that will change your organization's culture and the way your teams function, especially developers and operations. Instead of developers and operators having opposing goals, they will now share a common goal. With DevOps, the aim is to automate as much as possible, constantly measure our progress, and foster an environment of collaboration so we can quickly identify risks and make changes.
A key requirement for implementing DevOps is your culture. DevOps changes IT culture and technology to remove friction between developers and operators and accelerate the delivery of new capabilities and services. DevOps needs a culture that fosters teamwork, transparency, empowerment, trust, learning, and accountability. Teams should be empowered and not afraid to fail or take ownership. They should be able to speak up and use their judgment. Continual learning is also critical for DevOps success. Teams should share knowledge as they learn new skills and use post-mortems as learning experiences.
In my view, the most important element of DevOps is automation: automation of not only the software development and delivery process, but also the underlying infrastructure that supports the application, whether public cloud or containers. Automation is made possible by specialized tools that stitch together an automation fabric for DevOps and support all stages of the software delivery process.
I cannot overstress the importance of transparency in DevOps. Tools such as Trello and Visual Studio Online provide visibility into DevOps projects across the organization. This also helps create a shared vision and break down barriers.
Tools like Zendesk and Jira let you create user stories and track issues, plan sprints, and distribute tasks across your teams.
GitHub captures the "tribal knowledge" that is important to a DevOps team and turns it into documentation. Git also helps with compliance by providing a complete audit trail of code changes. Using GitHub's "Request Review" feature, you can make sure that every required compliance reviewer signs off on any new version. Finally, Git reduces waste because creating documentation follows the same process as creating code, so everyone on the team can update the documentation.
A key factor of collaboration is the ability to share information. Monitoring earlier in the lifecycle can help provide teams with a common data set shared across different departments to optimize application performance and availability.
DevOps teams have visibility into the entire application lifecycle and can address issues before applications are released to production.
With the traditional software development model, there is a high probability of bugs and rework. Defects are found late in the process, and integration is manual and dependent on good documentation.
With continuous integration, developers deliver code updates faster and more frequently — sometimes several times per day — into a shared repository, such as Git. Each developer is responsible for checking in their code on separate branches (instead of a single branch) and compiling regularly. In addition, the build includes automated testing (unit, security, UI, etc.). The entire build and testing process is automated. With tools like Jenkins or CircleCI, every time a developer checks in code, the automated build runs, and the build progress and status are visible to everyone.
CI helps teams test more frequently to discover and address defects (bugs) earlier before they become larger problems. Freeing developers from manual tasks and encouraging behaviors that help reduce the number of errors and defects improves developer productivity. In addition, because of the short feedback loops, teams can make changes more often and react to customer needs faster.
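The CI flow described above can be sketched in a few lines. This is a minimal illustration of the idea, not a real Jenkins or CircleCI configuration: each check-in triggers an ordered list of stages, the run stops at the first failure, and the resulting status is recorded for everyone to see. All stage names and functions here are hypothetical stand-ins.

```python
# Minimal sketch of an automated CI build (illustrative only).
# Each stage is a callable returning True (pass) or False (fail).

def run_build(stages):
    """Run (name, stage) pairs in order; stop at the first failure."""
    status = []
    for name, stage in stages:
        ok = stage()
        status.append((name, "passed" if ok else "failed"))
        if not ok:
            break  # fail fast: remaining stages are skipped
    return status

# Stand-ins for a real compiler, test runner, and security scanner.
report = run_build([
    ("compile", lambda: True),
    ("unit tests", lambda: True),
    ("security scan", lambda: True),
])
```

In a real pipeline, each lambda would invoke the actual build or test tool, and the `report` would be published to a dashboard so the whole team sees the build status.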
With the traditional model, there is a high probability of errors during the deployment process. Installation steps may not be accurate, they must be executed on multiple environments, and many things can go wrong because the process is manual. It is typically a very stressful process, which leads to slow delivery of new functionality to users.
Continuous deployment means software can be deployed to production at any time. Because in most situations we would like to retain control, we will focus on continuous delivery.
With continuous delivery, we take the release produced in the CI phase and feed it into a release pipeline. The release pipeline may contain steps that mirror the old manual ones, such as connecting to a server over SSH, copying files, stopping and starting services, and deploying to specific environments (test, dev, prod, etc.).
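To make the release-pipeline idea concrete, here is a toy sketch that only builds the ordered list of steps the pipeline would replay against each target environment; it does not actually connect to anything. The hostnames, service name, and file paths are invented for the example.

```python
# Illustrative sketch of a release pipeline (no real SSH or deployment).
# The same ordered steps are replayed against each target environment.

def release_steps(env, host):
    """Return the ordered steps for deploying to one environment."""
    return [
        f"ssh deploy@{host}",
        f"stop service myapp on {env}",
        f"copy build/myapp.tar.gz to {host}:/opt/myapp/",
        f"start service myapp on {env}",
    ]

def build_pipeline(environments):
    """Expand the step template across test, dev, prod, etc."""
    return {env: release_steps(env, host) for env, host in environments.items()}

pipeline = build_pipeline({"test": "test01", "prod": "prod01"})
```

The point of the sketch is that the steps are defined once, as code, and applied identically to every environment, which is exactly what removes the "many things can go wrong" risk of the manual process.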
When it comes to provisioning the release environment for the application, we can use Infrastructure as Code (IaC). Using tools like AWS CloudFormation or HashiCorp Terraform, you can programmatically provision the underlying infrastructure on your public cloud platform. Jenkins or CircleCI can orchestrate both application deployment and release environments. Additionally, if you are using Docker containers, the process is even easier: a Dockerfile defines a preconfigured image, and a docker-compose.yml file defines your application environment. While the build pipeline is automated, the release pipeline runs on demand.
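The core idea behind IaC can be illustrated with a toy example: the environment is described declaratively as data, and a provisioning step turns that description into resources. Real tools such as Terraform or CloudFormation work from similar declarative definitions; the resource names and images below are made up, and nothing is actually provisioned.

```python
# Toy illustration of Infrastructure as Code: a declarative environment
# definition (data) plus a provisioning step that "creates" resources.

ENVIRONMENT = {
    "web": {"image": "nginx:1.25", "count": 2},
    "db":  {"image": "postgres:16", "count": 1},
}

def provision(definition):
    """Simulate creating one resource per requested instance."""
    created = []
    for name, spec in definition.items():
        for i in range(spec["count"]):
            created.append(f"{name}-{i} ({spec['image']})")
    return created

resources = provision(ENVIRONMENT)
```

Because the environment lives in code, it can be versioned in Git, reviewed like any other change, and recreated identically on demand, which is what eliminates configuration drift between environments.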
In future blogs, we will see firsthand how to set up a CI/CD pipeline using Jenkins and Docker to deploy an application to AWS.
In DevOps, you must continually try to optimize the software delivery process, and as such, it is critical that you collect performance, process, and people metrics as often as you can: if you cannot measure, you cannot improve. A number of tools are available to help collect and analyze these metrics, such as Logstash, Kibana, Datadog, and New Relic.
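As a hedged sketch of what "measuring to improve" can look like, here are two common delivery metrics computed from a list of deployment records. The record format is invented for this example; in practice, tools like the ones above collect and chart this data for you.

```python
# Illustrative delivery metrics over a (made-up) deployment history.

def change_failure_rate(deployments):
    """Fraction of deployments that caused a failure in production."""
    failed = sum(1 for d in deployments if d["failed"])
    return failed / len(deployments)

def average_lead_time(deployments):
    """Mean hours from code check-in to running in production."""
    return sum(d["lead_time_hours"] for d in deployments) / len(deployments)

history = [
    {"failed": False, "lead_time_hours": 4.0},
    {"failed": True,  "lead_time_hours": 10.0},
    {"failed": False, "lead_time_hours": 6.0},
    {"failed": False, "lead_time_hours": 4.0},
]
```

Tracking metrics like these over time shows whether process changes are actually shortening feedback loops or reducing defects.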
Because operations and developers have a shared responsibility, collaboration between them and other teams is critical. This increased communication and collaboration helps all parts of the organization align more closely with goals and projects. Various tools are available to assist with decision making and planning; some examples are Skype, Lync, Slack, and other real-time chat solutions.
Common objections to DevOps
While many of the benefits of DevOps are apparent, there are still some who prefer the traditional software delivery approach and argue that DevOps practices will compromise security, make compliance even more challenging, or that teams cannot be reskilled.
With respect to security, it is easy to make the case that because DevOps relies so heavily on automation, we can bake security objectives into all stages of the development and operations process (in the application blueprints), so we spend significantly less time remediating security issues later. We can also minimize configuration drift using platform policies or scripts. Check out articles about DevSecOps for more information. Similarly, with compliance, we document all changes in the Git repository and in the code. Finally, IT is an ever-changing landscape, and training is a continual process.
DevOps is becoming more than hype. The most difficult part of adoption is changing our culture; however, if we are successful, the short- and long-term benefits will be significant. Cost savings will be a byproduct and not the target, we will attract the best talent, and we will enable our teams to experiment and fail fast.
As mentioned previously, in the next series of blogs, I will focus on the automation aspects of DevOps to show you how we can develop and deploy software in a consistent way, with minimal human intervention, using a CI/CD pipeline and leveraging IaC to automate the underlying infrastructure with the public cloud and Docker.
Until next time…