At ControlUp, we have noticed that a growing number of our customers are adopting or evaluating Infrastructure as Code (IaC) methods for managing their datacenters. The general idea behind IaC is declaring your IT infrastructure in code or in a template. IT professionals (not developers!) write, test, debug, and execute this code, which includes templates and machine-readable definition files. The code automates operational IT processes, such as defining, provisioning, configuring, changing, updating, or destroying IT infrastructure elements.
The goal is to allow IT administrators to deliver a predictable state of their infrastructure that can easily be deployed, configured, and managed in an automated manner. The target infrastructure is hosted either in on-premises data centers or in the cloud. The automation applies to hypervisors, servers, virtual machines, networks, databases, user accounts, policies, and applications. In essence, Infrastructure as Code is a kind of “container term” for the use of code and templates to declare the desired state of IT infrastructure. By adding operational process methodologies for life cycle management to this picture, it turns into DevOps: blending development and operations.
IT infrastructure automation is not a new concept. It started more than 15 years ago when, in highly regulated market segments such as financial services, a concept called “staging” became the de facto standard. Staging means that multiple independent environments within one organization represent the different stages in the IT infrastructure lifecycle. Development, testing, acceptance, and production (DTAP) were the typical stages in large enterprise environments, where each stage encapsulated separate infrastructure components in an isolated network segment. The rule was that in development and test environments, only manual installation processes were allowed. In acceptance and end-user-facing production environments, all provisioning and change processes had to be fully automated to ensure reproducibility, stability, and compliance. It was all about eliminating the human factor to avoid errors, typically by using scripting frameworks.
Over time, staging was widely adopted by other kinds of enterprises and public administrations, and even by small and medium-sized businesses. These staging environments did not always match the scale of sophisticated and expensive DTAP environments; sometimes a massively shrunken copy of the production environment, running on a handful of virtual machines, provided a good enough test environment. That brings us to today, where the goal is to test changes and upgrades before they are put into production. Many ControlUp proof-of-concept installations are initially made in test lab environments and only on success are they put into production.
A prerequisite for staging is that all deployment and configuration steps can be initiated from the command line by using installation packages with parameters that expose the full range of possible setup options (unattended installation). In highly regulated environments, chained installation sequences must finish within half of the time scheduled for planned maintenance windows. If half of the time has elapsed but the deployment has not completed, a rollback to the previous stable system setup is initiated automatically. It is like cave diving, where it is mandatory to turn around before you have used up half of your air (as an experienced SCUBA diver, I can assure you that there is never an exception to this rule).
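The half-the-window rule above can be sketched in a few lines. This is a minimal illustration, not a real deployment framework: the steps are plain Python callables standing in for unattended installer invocations, and `rollback` is a hypothetical hook supplied by the caller.

```python
import time

def run_chained_install(steps, window_seconds, rollback, clock=time.monotonic):
    """Run a chained install, rolling back once half the maintenance window is gone.

    `steps` is a list of zero-argument callables wrapping unattended installer
    invocations (illustrative only). `rollback` restores the previous stable
    setup. `clock` is injectable so the timing logic can be tested.
    """
    deadline = window_seconds / 2  # the "turn-around" point, as in cave diving
    start = clock()
    for step in steps:
        if clock() - start > deadline:
            rollback()             # half the window is used up: abort and restore
            return False
        step()
    return True
```

The key design choice is that the check happens *before* each step, so a step that would start past the turn-around point never runs and the rollback still fits inside the remaining half of the window.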
In the early phases of staging, scripting all deployment, configuration, and change steps was the way to go. In essence, all commands an administrator would run manually had to be baked into shell script code. Developing complex script frameworks that allowed IT administrators to deploy (or roll back) an entire enterprise infrastructure automatically was painful and time-consuming, as it was based on imperative code that explicitly spelled out each step required for any infrastructure modification. Experienced shell scripters were the masters of the universe in large corporate data centers, and many of them didn’t like to share their scripting secrets. IaC and DevOps changed this dramatically.
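To make the imperative style concrete, here is a toy sketch of that era’s approach: every step and its ordering is spelled out explicitly, and the scripter must keep their own bookkeeping of what to undo on failure. The step names are purely illustrative.

```python
def deploy_imperatively():
    """Toy imperative deployment: explicit ordered steps plus hand-written undo."""
    done = []
    steps = [
        ("provision vm",  lambda: print("creating vm01...")),
        ("install app",   lambda: print("running setup.exe /quiet...")),
        ("open firewall", lambda: print("adding rule for tcp/443...")),
    ]
    try:
        for name, action in steps:
            action()
            done.append(name)
    except Exception:
        # Rollback is the scripter's burden: undo completed steps in reverse order.
        for name in reversed(done):
            print(f"undoing: {name}")
        raise
    return done
```

Every new infrastructure change meant extending both the forward path and the undo path by hand, which is exactly the maintenance burden that declarative IaC removes.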
In a modern DevOps world, IT administrators use tools and methods that go far beyond traditional step-by-step scripting. Instead, they use declarative code that simply states what needs to be done. Definition files and templates describe the desired state of all IT infrastructure elements, and powerful IaC tools that consume these files are responsible for making the infrastructure modifications when needed. Edits in the definition files control the provisioning of the target infrastructure and are the basis for automated configuration changes. It makes no difference whether the goal is to build a new environment from scratch or to apply minor changes to an existing environment. This results in faster software delivery and significantly reduces time and costs. Another DevOps benefit is built-in collaboration and source control, which allows IT administrators to quickly revert to the previous version if something breaks. Now add staging to the picture and it’s easy to understand how this all helps to improve agility and security.
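The declarative model can be sketched as a reconciliation loop: the tool compares the desired state from the definition file with the current state and computes only the actions needed to close the gap. This is a simplified toy model of what real IaC tools do, with resources represented as plain dicts.

```python
def reconcile(current, desired):
    """Compute the actions needed to move `current` state to `desired` state.

    Both arguments map resource name -> configuration dict, standing in for
    what an IaC tool reads from its definition files and the live environment.
    """
    actions = []
    for name, config in desired.items():
        if name not in current:
            actions.append(("create", name, config))       # new resource
        elif current[name] != config:
            actions.append(("update", name, config))       # drifted resource
    for name in current:
        if name not in desired:
            actions.append(("destroy", name, None))        # no longer declared
    return actions
```

Note that a fresh build is just reconciliation against an empty current state, which is why building from scratch and applying minor changes go through the same code path.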
At ControlUp, we are fully committed to DevOps. Our product installation packages have a wide set of setup parameters and some of our central infrastructure components were implemented as containers, allowing a full integration into DevOps workflows. We used DevOps methods to deploy our new cloud backend in the EMEA region, and we have DevOps initiatives to allow an “evergreen” ControlUp agent delivery. We are working on a public API, which will allow our customers to integrate our full product stack into their DevOps workflows. But this only scratches the surface and there is a lot more to come!
Stay tuned for upcoming blog articles with more details on using ControlUp in the world of DevOps and IaC.