While a previous article in this ControlUp DevOps series walked you from staging to DevOps and Infrastructure as Code from a conceptual standpoint, this article takes a more practical approach.
Customers introducing DevOps often merge their previously independent development, deployment, operations, quality assurance, and security teams.
Historically, system administrators and software developers have held different views on how to build and maintain production VDI environments. While developers want to push new software versions to production as quickly as possible, system administrators tend to focus on keeping their production environments up and running without too many changes. This explains why switching to a DevOps model is disruptive for Citrix or VMware admins: it introduces cross-functional continuous integration and continuous deployment processes.
When switching to a DevOps model, the primary goals are improved agility, continuity, and consistency, as well as reduced deployment failures, rollbacks, and time to recover. In this new DevOps world, IT infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration. This includes short iterations to frequently push system components and applications to production, supported by automated processes. In this context, organizations use ControlUp to monitor metrics and logs to see how application and infrastructure performance impacts the digital employee experience.
But it’s not just ControlUp customers who are switching to this new model. Internally, ControlUp has also introduced DevOps methods to manage the global cloud backplane for customer environments. The image below was provided by ControlUp’s Director of DevOps, Dotan Gutmacher, and it shows the DevOps tools he and his team used when building the new European backplane in the Frankfurt cloud datacenter location.
Configuration Management is a familiar tool category for most system administrators. This kind of tool is designed to change operating system, software, and user settings on existing physical endpoints or virtual machines.
Examples include Red Hat Ansible, Progress Chef, Puppet, and Microsoft PowerShell Desired State Configuration (DSC). Typically, they take a procedural approach, which is well known to anyone experienced in scripting. Users create “playbooks” that are evaluated from top to bottom and executed in sequence. Ansible playbooks are expressed as code in YAML format with minimal syntax. Typical use cases include changing the ControlUp configuration or modifying Windows operating system settings so that they are compatible with ControlUp.
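To illustrate the top-to-bottom execution model, here is a minimal playbook sketch. The inventory group, service name, and registry path are illustrative placeholders, not actual ControlUp defaults:

```yaml
# Hypothetical Ansible playbook: tasks run in order, top to bottom,
# against all hosts in the "vdi_workers" inventory group.
- name: Prepare VDI workers for monitoring
  hosts: vdi_workers
  tasks:
    - name: Ensure the monitoring agent service is running
      ansible.windows.win_service:
        name: ExampleAgent          # illustrative service name
        state: started
        start_mode: auto

    - name: Set an illustrative Windows registry value
      ansible.windows.win_regedit:
        path: HKLM:\SOFTWARE\Example\VDI
        name: EnableMetrics
        data: 1
        type: dword
```

Because each task declares a desired state rather than a raw command, re-running the playbook on an already-configured host makes no further changes.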
A typical DevOps cycle goes far beyond simple configuration management. Provisioning server VMs, database servers, load balancers, subnets, and firewalls is another task. Examples of infrastructure provisioning tools are HashiCorp Terraform (Infrastructure as Code), HashiCorp Packer (Image as Code), Amazon Web Services (AWS) CloudFormation, and OpenStack Heat.
These tools make API calls to providers, such as VMware vSphere, AWS, or Azure, to create the required infrastructure, which may be “immutable.” With immutable infrastructure, servers are never modified after deployment: every change means creating a new server from a machine image or a container image. If the servers need to be updated, you replace them with new ones. The infrastructure provisioning automation interprets models described in HashiCorp Configuration Language (HCL), YAML, or JSON. ControlUp software components can be part of such a model.
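A minimal CloudFormation-style sketch shows what such a model looks like in YAML. The parameter name and instance type are placeholders; the point is that updating the image ID and redeploying replaces the server rather than patching it in place:

```yaml
# Hypothetical CloudFormation template for an immutable worker.
# Swapping in a new WorkerImageId (e.g. a fresh image baked with
# Packer) and redeploying creates a replacement instance instead
# of modifying the running one.
AWSTemplateFormatVersion: "2010-09-09"
Description: Immutable worker instance built from a pre-baked image
Parameters:
  WorkerImageId:
    Type: AWS::EC2::Image::Id   # e.g. an AMI produced by Packer
Resources:
  WorkerInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref WorkerImageId
      InstanceType: t3.medium   # illustrative instance size
```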
Even though infrastructure deployment and configuration management can be used independently, a common approach is to combine them. For example, you can use Terraform to build virtual private clouds (VPCs), subnets, internet gateways, load balancers, and VMs, and then use Ansible to configure and deploy services on these instances. What all these tools have in common is that they are based on code, which should be developed and maintained in a version control system.
The preferred source code management framework in this DevOps context is Git, a free and open-source distributed version control system for storing source code and software packages in repositories, or “repos” for short. Version control through Git allows developers to easily manage changes to projects, keep track of various versions of source code, and collaborate on any section of code without creating conflicts between proposed changes. Since Git itself only provides a command-line interface, most DevOps developers prefer a web-based Git repository hosting service, such as Gitea, GitLab, GitHub, or Bitbucket. The web interface makes working with Git much easier, particularly for Windows system administrators.
The final component we want to look at is the Continuous Integration and Continuous Delivery (CI/CD) pipeline. It’s probably the DevOps element that is most confusing for many system administrators, but it stitches together all the components we talked about earlier.
In a nutshell, the CI/CD pipeline is a series of infrastructure deployment and configuration management steps that must be performed to deliver a new version of software or IT infrastructure. Continuous integration is a code development practice where developers regularly merge their code changes into a central Git repository, on which automated builds and tests are run. Continuous delivery means that code changes are automatically built, tested, and prepared for a release to production. Examples of CI/CD tools are Drone, GitHub Actions, Azure DevOps, Red Hat OpenShift Pipelines, AWS CodePipeline, and Jenkins.
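A GitHub Actions workflow sketch shows how these steps chain together. The job names and commands are illustrative assumptions, not a prescribed ControlUp pipeline; the key idea is that a push to the main branch triggers validation, and deployment only proceeds if validation succeeds:

```yaml
# Hypothetical CI/CD workflow: every push to "main" first validates
# the infrastructure code; the deploy job runs only on success.
name: infrastructure-pipeline
on:
  push:
    branches: [main]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Validate the Terraform configuration
        run: terraform init -backend=false && terraform validate
  deploy:
    needs: validate            # gate: runs only if validate passed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Apply infrastructure changes
        run: echo "terraform apply would run here"   # placeholder step
```

In a full staging model, additional jobs for the testing and acceptance phases would sit between validation and the production deployment.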
Running deployment scripts from the command line is not necessary when using a CI/CD pipeline; everything is pushed automatically to providers such as VMware vSphere, AWS, or Microsoft Azure. This can feel scary at the beginning, like losing control of the VDI production environment. Fortunately, it’s possible to embed all of this into a staging model that includes development, testing, and acceptance phases before the final deployment to production. A great benefit of an automation cycle like this is that it can play a significant role in disaster recovery situations. Teams who fully embrace these DevOps practices work faster and deliver better quality to their customers and end-users. The increased use of automation and cross-functional collaboration helps to reduce complexity and errors.
Now that we’ve talked about all necessary elements to build a VDI infrastructure based on a DevOps model, we want to hear what experts in the ControlUp ecosystem have to say!
To wrap it up, there is only one thing to say: using DevOps methods for virtual desktop infrastructure deployment and disaster recovery is the next big thing for IT system administrators. Adding ControlUp brings end-to-end VDI monitoring and digital employee experience management into the picture, giving full operational control back to system administrators.