Aiding compliance in a changing technology environment

14th June 2022 | Blog | Rob Batters

This blog demonstrates how Northdoor helped a SOX-audited client with a large Linux estate to manage technology change in a compliant and robust way. Using tools to automate and audit change delivery, we were able to develop and deploy scripts consistently, testing them in a neutral lab before moving them into the live production environment.


Keeping up with technology demands

Technology environments change. That’s a fact. They do so with incredible regularity as business challenges grow. Keeping up with these demands is one thing: not only must resources be found, but every other system must also be maintained while new projects are delivered.

As if that wasn’t enough, the very word ‘change’ brings with it all kinds of other challenges these days. If you work in a SOX-compliant world, for example, then, as with other compliance programmes, you not only have to deliver change, you have to do it while keeping a complete record of the process.

Faced with a finite budget and a limited number of tools, how is the typical IT team going to do all of this at the same time?

You could hire a team of experts, and there are a good many products out there that can help too, but sometimes all it takes is a little know-how and some open-source DevOps software to address all kinds of challenges without a major investment.

Maintaining robust governance across a large Linux estate

At Northdoor we faced one such challenge recently. A SOX-audited client with a very large Linux estate needed to ensure that the multiple organisations supporting the applications, databases, operating systems and networks were not making unauthorised changes to the Linux configuration, in particular to sensitive configuration files and scripts.

In itself, that doesn’t sound like a big deal, except that the rate of change across a very large estate, combined with the natural tendency of a technician (with all the best will in the world) to get something done by taking shortcuts, had the potential to cause a big headache.

The requirement was therefore to enforce immutability for key files and scripts. All we needed to do was maintain a central repository containing an approved version of each of these files, make sure that every Linux system was using the correct one and, where necessary, revert an incorrect version back to the official one.

To solve the problem, we needed to break it down a little more.

1. Automated change delivery

We needed a tool with which every change could be delivered from a central point and described using a common method. Ansible was already in use for several simple, regular tasks, hosted on a Linux VM in our cloud subscription. Ansible allows us to automate just about any process: with one action we can run a ‘playbook’ to change, install, upgrade or do whatever else we need, on as many endpoints as we define.
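To illustrate the idea, a minimal playbook for this kind of enforcement might look something like the sketch below. The host group, file paths and source location are hypothetical rather than the client’s actual configuration. The copy module is idempotent: it only rewrites a file that differs from the approved copy, and reports a change when it has had to revert one.

# enforce_approved_configs.yml - revert sensitive files to their approved versions
- name: Enforce approved versions of sensitive configuration files
  hosts: linux_estate                # hypothetical inventory group
  become: true
  tasks:
    - name: Ensure sshd_config matches the approved master copy
      ansible.builtin.copy:
        src: approved/sshd_config    # approved copy held on the controller
        dest: /etc/ssh/sshd_config
        owner: root
        group: root
        mode: "0600"
        backup: true                 # keep the non-compliant version for investigation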

2. Source control

There had to be somewhere to hold our master copies of everything. We use Atlassian’s Bitbucket, which is a Git-based source code repository tool. Using Bitbucket means that every script has a clear owner and version history, and we can ensure that the authorised versions are the ones used whenever they are run.
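One way to tie this into the playbook run is to have the controller pull the approved master copies from Bitbucket before enforcement starts, so that only reviewed, tagged versions ever reach the estate. A sketch of such a task, with a hypothetical repository URL and release tag, added ahead of the enforcement tasks in the playbook above:

- name: Fetch the approved master copies from Bitbucket
  ansible.builtin.git:
    repo: https://bitbucket.example.com/scm/ops/approved-configs.git   # hypothetical URL
    dest: /opt/approved-configs
    version: release-1.4             # deploy only a reviewed, tagged release
  delegate_to: localhost             # run once on the Ansible controller
  run_once: true
  become: false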

3. Auditability

Having ensured the integrity of our scripts, we also had to keep an audit trail of all playbook runs. This meant automating the use of Ansible and recording exactly what was done, to which systems, when it was done and by whom. For this purpose we chose Jenkins, an open-source automation server and scheduler that maintains the build and run history for each job.
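As a sketch of how this fits together (the repository URL, credentials ID and playbook name below are hypothetical), a simple declarative Jenkins pipeline can check the playbooks out of Bitbucket and run them, while Jenkins itself records who triggered the job, when it ran and the full console output:

// Jenkinsfile - illustrative only; repository, credentials and playbook names are placeholders
pipeline {
    agent { label 'ansible-controller' }
    stages {
        stage('Checkout approved playbooks') {
            steps {
                git url: 'https://bitbucket.example.com/scm/ops/playbooks.git',
                    credentialsId: 'bitbucket-readonly'
            }
        }
        stage('Run playbook against production') {
            steps {
                sh 'ansible-playbook -i inventories/production enforce_approved_configs.yml'
            }
        }
    }
}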

4. Testing

Clearly, we were never going to be able to test our code on production servers, or even, given the nature of the task, development or test servers in the client’s server farm. To this end we needed our own compact virtual lab environment.

We deployed one Linux VM as a central controller and a number of other VMs running various Linux distributions, which can be spun up to test different environments. For rapid creation of this virtual set-up we use Vagrant, which can even run on a laptop for quick testing.
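A Vagrantfile for this kind of lab might look roughly like the following sketch; the box names and hostnames are placeholders rather than the boxes we actually use:

# Vagrantfile - illustrative lab layout; box names and hostnames are placeholders
Vagrant.configure("2") do |config|
  # Central Ansible controller
  config.vm.define "controller" do |c|
    c.vm.box = "generic/ubuntu2204"
    c.vm.hostname = "ansible-controller"
  end

  # Test endpoints running different Linux distributions
  { "rhel-test" => "generic/rhel8", "debian-test" => "generic/debian11" }.each do |name, box|
    config.vm.define name do |node|
      node.vm.box = box
      node.vm.hostname = name
    end
  end
end

Because the whole lab is described in one file, it can be rebuilt from scratch on a laptop in minutes with a single vagrant up.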

How smart DevOps tools can improve compliance posture

With these tools deployed, we were able to develop scripts and deploy them in a consistent manner, testing them first in a neutral lab before moving to the production environment. The entire production estate is covered, and each run is recorded. The auditors are content that every change is attributable, that any unauthorised changes to the systems are spotted, that such changes can be reverted, and that the very code we are using can be verified.

The rapid pace at which technology environments change is always going to be a challenge when it rubs up against compliance obligations, but by employing a bit of know-how and supplementing it with some smart DevOps tools, you can take the heat off your IT team and improve your compliance posture at the same time.

Interested in finding out more about DevOps?

Request a demo or contact sales on: 0207 448 8500
