
MiCloud Website Changes


Hello everyone,

With the new year approaching, I find it’s that time again: time for a change. For those who have followed my blog for a while, you’ll recall last year’s change in appearance and increase in content.

This coming year will be more of the same. But I want to get your opinions first.

The first and most important change I am planning is switching from a .wordpress.com-hosted website to hosting my own site in Microsoft Azure. As you may have noticed, I don’t really have any articles about Azure. I’ve been putting it off for a while, but I plan to get more experience/exposure with Azure (including its integration with System Center) in the coming year. So why not host my own site on Azure as well?

But this will mean a change in my website’s URL. In fact, I am going to have to come up with a catchy URL to use. Help me decide on the new URL by voting below.

I’m also looking for feedback on what style/type of content everyone is interested in. So please vote on the following poll to help shape the changes to this site in the new year.

Upcoming Articles


Hello All,

I thought I would add a sticky post to the home page to list what articles and series I am preparing to post. Hopefully this keeps you coming back for more. Here is the list of upcoming articles (in no particular order). If you would like a specific post/series to appear sooner rather than later, send me an email. It’s helpful to know which articles the community is most interested in.

MiCloud Upcoming Articles

  • SCOM SNMP Monitoring
  • SCUtils Knowledge Base For SCSM 2012 (multi-part series)
  • Windows Azure Pack for Windows Server (multi-part series)
  • MCSE Private Cloud Re-certification Exam – Skills Measured (multi-part series)
  • How to Configure Integration with TFS in System Center 2012 R2 Operations Manager
  • Service Provider Foundation (multi-part series)

If you have an idea or request for an article/post, or would like an existing article to be expanded upon, please send me an email via the About Me page and I will do my best to accommodate. Please note that when suggestions/requests are received, they are queued as draft posts. This means there may be a delay before a requested post is written and made available, but rest assured, I am working on it.

Don’t forget to rate, comment, subscribe, and share any articles you found helpful or interesting.

Thank you to all my followers.


Lately, I’ve been focusing on DevOps, pipelines, and Terraform. In this article I explore Terraform-Compliance, and reveal what’s good, not so good, and downright confusing about this tool.

Firstly, for reference, Terraform-Compliance is…

a lightweight, security and compliance focused test framework against Terraform to enable negative testing capability for your infrastructure-as-code

Unlike the other tools that I’ve tested and written articles about (namely Checkov, TFSec, and the GitHub Super-Linter), Terraform-Compliance approaches scanning in a different way. Those tools just require you to point at the directory where your Terraform files live; Terraform-Compliance requires a Terraform plan file or state file to run against.

Obviously, to produce a .tfplan/.plan file, or to have a .tfstate file, you need to actually execute your Terraform code. That also means your target environment (e.g. Azure, AWS, GCP) will need to be accessible from your pipeline.

Behavior-Driven Development (BDD)

To level-set: Terraform-Compliance uses a behavior-driven development framework, leveraging Radish to process its tests. This was something new to me, but the Wikipedia article explains it this way:

BDD is largely facilitated through the use of a simple domain-specific language (DSL) using natural-language constructs (e.g., English-like sentences) that can express the behaviour and the expected outcomes.

Without getting into all the details (which are explained on the BDD Reference page), this uses a pattern of…

Given I have <resource_type> defined
When it contains <some_property>
Then its value must be <pattern>
And its value must match the “<pattern>” regex

Here is a simple example for an Azure Storage Account test…

Given I have azurerm_storage_account defined
Then it must have enable_https_traffic_only
And its value must be true

Obviously, that is a lot easier for a human to read and understand when working with testing/validation.

The Good

What I like about Terraform-Compliance is the clear and understandable output you receive when the tests complete. For example, from the screenshot below, it’s clear what it’s checking for (i.e. the ‘enable_https_traffic_only‘ property) and the value it expects (i.e. ‘true‘).

As we can also see in the output, when using Terraform-Compliance in a CI/CD pipeline, it will produce a non-zero exit code if there are any failed tests.

Terraform-Compliance – Completed Scan (Failed Status)

The Bad

Although I’ve worked with several other Terraform scanning tools (as mentioned at the beginning of this article), Terraform-Compliance is different in that it requires additional effort to work with. It’s not a simple “point it at my Terraform files directory” and let it go.

You Have To Plan

Firstly, as mentioned at the outset, Terraform-Compliance requires a Terraform plan file or the Terraform state file to execute against. But in reality, that’s not too big a deal; it just means you have to be able to successfully execute terraform init and terraform plan from your pipeline.

Terraform Plan -Out File
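For reference, here is a minimal sketch of that plan stage (the plan file name is just a placeholder):

# Initialize the working directory (downloads providers/modules)
terraform init

# Produce the plan file that Terraform-Compliance will scan
terraform plan -out=plan.out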

It’s not really a “bad” thing, but it is something you need to take into account, especially if you’re using other tools that just scan all the .tf files without needing a plan.

Limited Examples

If you read the Terraform-Compliance documentation, in particular the Behavior-Driven Development (BDD) Features examples, there aren’t a lot of them. On the Examples page, there are only 23 listed. But if you look at the accompanying terraform-compliance/user-friendly-features GitHub repo, there are 20 examples for AWS and 5 examples for Azure. However, some of the Feature examples contain multiple tests. For example, the Azure Database.feature contains 3 scenarios that it checks for, and the Azure AppService.feature has 6 scenarios.

Still, there is currently a limited set of example Features to work from. But that can easily be resolved by you, the community! If you start using Terraform-Compliance and building your own Features, you can (and should) contribute them back to the greater community.

What is a ‘Feature’?

One other thing I found unclear and confusing was the concept of a Feature: how it works, and how to create one myself. The BDD Reference documentation does walk you through the various elements that make up a BDD Feature (i.e. GIVEN, WHEN, THEN, etc.).

TerraformCompliance – BDD Feature – Example

But what it doesn’t do is show how to actually create one! It wasn’t until I looked around the GitHub repository, and at the specific examples, that I realized the file type was “.feature”! Something as simple as that should be explained up-front when introducing Behavior-Driven Development (BDD). We should also be provided with a simple framework/template structure to work from. Other languages (e.g. Terraform, Azure Resource Manager (ARM) templates, etc.) all have beginner/starter templates to help people learn how to use the language. The Behavior-Driven Development (BDD) Features documentation should be the same.
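To save you the same hunting, here is a minimal starter template, pieced together from the snippets above and the published examples (the feature name, resource type, and property are placeholders you would swap for your own). Save it with a .feature extension:

Feature: Enforce my security policy
  Scenario: Ensure HTTPS-only traffic on storage accounts
    Given I have azurerm_storage_account defined
    Then it must have enable_https_traffic_only
    And its value must be true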

The Ugly

Now we get into some of the less-than-positive experiences. Just to be clear, these are not “show stoppers” that should prevent/deter you from using Terraform-Compliance. These are just some really confusing things I ran into, that made my experience frustrating.

Terraform Versions

The first ugly thing I encountered was when using Terraform-Compliance in a Docker container. To be clear, this is a supported method for this tool. And the Usage documentation explains all of the command-line interface (CLI) parameters you can use.

However, what is not documented is which version of Terraform the tool is using (or how frequently this is updated). This is an issue because you can only find out what version of Terraform you must use by running (and failing) the scan. Case in point (see screenshot): in my pipeline I installed the latest version of Terraform, then ran terraform init and terraform plan to produce the required .plan file. When sending this .plan file as an input to Terraform-Compliance (via the --planfile plan_file parameter), I was presented with the message shown below.

TerraformCompliance – Invalid Plan File Terraform Version

That’s not a great end-user experience, especially since the end-user may have a reason to use an older version of Terraform (e.g. they are not ready to upgrade yet, or more testing is needed before they commit to a newer version), or a reason to use a newer version (e.g. they are using a feature that is only available in the latest release).

In fact, Terraform-Compliance does not support end-user control over what version of Terraform is used! Case in point, at the time of this writing, this is an open issue (Issue#365 Multiple versions of terraform executable in the Docker image).

Workaround

As a workaround (if you’re using the Docker container method), to discover which version of Terraform is present, you will just need to run the container once.

Even though the Dockerfile has an argument for LATEST_TERRAFORM_VERSION, there is no indication of where/how that gets set. It would be useful if there were a way to check which Terraform version is targeted; that would be better than having to run the container and check for failure.
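As a purely hypothetical sketch (it assumes the image keeps its bundled Terraform binary on the PATH, which I have not confirmed), overriding the container’s entrypoint might let you check the version without running a full scan:

# Hypothetical: assumes 'terraform' is on the image's PATH
docker run --rm --entrypoint terraform eerkunt/terraform-compliance version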

Of note, if you install Terraform-Compliance natively (i.e. not using the Docker container method), you will not encounter this issue. This is because, although not apparent from the documentation, Terraform-Compliance uses the Terraform executable just to convert the .tfplan file into a plan.out.json file, nothing else! So, you could also perform that conversion yourself after your terraform plan command, like this:

terraform show -json plan.out > plan.out.json

…and then use that .JSON file as the input for Terraform-Compliance. Since it won’t need to convert anything, it will execute without issue.
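For example (a sketch based on the command shown later in this article, with placeholder paths), you could then feed the JSON file straight in:

docker run --rm --volume /path-that-has-your-plan-json:/target --interactive eerkunt/terraform-compliance --features git:https://github.com/terraform-compliance/user-friendly-features.git --planfile plan.out.json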

Extracting Output

The other “ugly” thing I found was trying to extract the results from Terraform-Compliance. If you’ve read any of my other blog posts (Checkov, TFSec, GitHub Super-Linter), you will know what my goal is. The whole focus of running these types of tools is not only to scan my Terraform code for quality and security issues, but also to incorporate that scanning into an Azure DevOps CI/CD pipeline and present the results as pipeline Test Results.

Using the methods I’ve used with the other referenced tools, I first attempted the following (using StdOut):

docker run --rm --volume $(System.DefaultWorkingDirectory)/Path-That-Has-Your-TFPlan-File:/target --interactive eerkunt/terraform-compliance --features git:https://github.com/terraform-compliance/user-friendly-features.git --planfile plan.out > $(System.DefaultWorkingDirectory)/TFCompliance-Report.xml

But, unfortunately that didn’t work. In the pipeline I saw the following error:

##[warning]Failed to read /home/vsts/work/1/s/TerraformCompliance/TFCompliance-Report.xml. Error : Data at the root level is invalid. Line 1, position 1..

And the resulting XML file looks something like this:

TerraformCompliance – StdOut – XML File

If you’ve read my Publishing Checkov Terraform Quality Checks to Azure DevOps Pipelines article, those square characters (unrendered ANSI escape codes) may look familiar. But the challenge is that we don’t know the exact format the Behavior-Driven Development (BDD) output is produced in. In fact, there is no documentation on --output options at all, even though (if you look into the code) the silent_formatter.py file makes reference to Cucumber (which itself references the JUnit Output Formatter).

Discovery

So, after reaching out to the author (Emre Erkunt), he indicated that Terraform-Compliance supports all parameters that ‘radish-bdd’ uses, one of which is ‘--junit-xml’. He also pointed me to an issue on the GitHub repo that addressed this very need (Issue#271 Export JUnit XML Report format).

So, in short, we can modify the Terraform-Compliance execution code to include “--junit-xml” as follows:

docker run --volume $(System.DefaultWorkingDirectory)/Path-That-Has-Your-TFPlan-File/:/target --interactive eerkunt/terraform-compliance --junit-xml TFCompliance-Report.xml --features git:https://github.com/terraform-compliance/user-friendly-features.git --planfile plan.out


With that supported (but undocumented) output option, I can then capture the XML file and publish the results in my Azure DevOps Pipeline as Test Results.

TerraformCompliance – Azure DevOps Pipeline – Test Results
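For reference, here is a minimal sketch of that publish step as an Azure Pipelines YAML task (the file pattern and run title are placeholders matching the command above):

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/TFCompliance-Report.xml'
    testRunTitle: 'Terraform-Compliance'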

Conclusion

With a clearer understanding of how to use Terraform-Compliance and Behavior-Driven Development (BDD) Features, you’ll find it a useful tool for your Infrastructure-as-Code (IaC) toolbox.

Obviously, there are several key elements currently lacking in the documentation (e.g. how Feature files work and how to author them, output options, Terraform version references/controls, etc.). However, I have been in touch with the author, and he has agreed to add these missing elements to the documentation to improve the learning/use experience of this tool.

And don’t forget, if you start to author your own Feature tests, think about contributing them back to the community for others to use as examples.


Recently, I have written several DevOps-related articles, namely:

  • Publishing TFSec Terraform Quality Checks to Azure DevOps Pipelines
  • Publishing Checkov Terraform Quality Checks to Azure DevOps Pipelines

This article is the third on a similar topic, but specifically focuses on the GitHub Super-Linter.

The GitHub Super-Linter is a simple combination of various linters (41 at the time of this writing), written in bash, to help validate your source code.

Quality Checks for Terraform

The GitHub Super-Linter is actually unique in that it covers quite a lot of languages.

At the time of this writing, it has coverage for (to name a few):

  • Azure Resource Manager (ARM) templates
  • AWS CloudFormation templates
  • Docker
  • JSON
  • Markdown
  • PowerShell
  • Terraform
  • YAML

When you look at the documentation for the Super-Linter, you will see that we can run the linter locally through a Docker container. Here is the command you can use to do so:

docker run -e RUN_LOCAL=true -v /path/to/local/codebase:/tmp/lint github/super-linter

So, to use this in our Azure DevOps pipeline, we can simply do this…

Azure Pipeline code running GitHub Super-Linter Docker container
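In YAML form, that boils down to something like this sketch (the display name is a placeholder, and I’m mounting the pipeline’s working directory as the codebase):

- script: |
    # Run the Super-Linter against the repository contents
    docker run -e RUN_LOCAL=true -v $(System.DefaultWorkingDirectory):/tmp/lint github/super-linter
  displayName: 'Run GitHub Super-Linter'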

When you run this in the Azure pipeline, this is the type of output you would see. Notice that the execution exits with a non-zero exit code if a potential problem is detected. This enables us to use it in a CI/CD pipeline and error out, as we would expect.

Azure Pipeline displaying GitHub Super-Linter Exit Code

If you’ve read the other 2 articles I wrote, you know that we’re trying to extract this output and publish it as test results in the pipeline, so let’s do that.

Publish the Results

In Azure Pipelines, the Publish Test Results task is used to publish test results. But how do we get the results out of the container, and into one of the supported results formats?

Here’s what worked for me…

mkdir super-linter.report
docker pull github/super-linter:latest

docker run -e RUN_LOCAL=true -e OUTPUT_DETAILS=detailed -e OUTPUT_FORMAT=tap -v $(System.DefaultWorkingDirectory):/tmp/lint --name outputDir github/super-linter:latest
docker cp outputDir:/tmp/lint/super-linter.report $(System.DefaultWorkingDirectory)/GHLinterReport

Here, we are telling the GitHub Super-Linter Docker container that we want detailed output, and the output format to be TAP (which is the only supported format). We are also copying the report from the Docker container (the docker cp command) into a directory that the pipeline has access to.

In case you have not had the opportunity to read the other DevOps articles I’ve shared recently, we are using the Azure DevOps Publish Test Results task. This task supports specific result formats (JUnit, NUnit, xUnit, VSTest, and cTest), so it does not support the TAP format. This means we need to do a little more work first.

Time to convert

In order to publish and visualize the test results, we need to take the resulting TAP file(s), and convert them to a usable format.

Knowing that the Publish Test Results task supports the JUnit format, I Googled “convert tap to junit” and came across the NPM Tap-JUnit formatter. I also came across R2DevOps’ Super_Linter example of converting the GitHub Super-Linter report to publish in Azure Pipelines.

Using these as reference examples, this is the script task that worked for me. The primary lines are the following:

tap-junit --pretty --suite TFLint --input super-linter-TERRAFORM.tap --output $(System.DefaultWorkingDirectory)/Converted --name TFLint-Report.xml

tap-junit --pretty --suite TerraScan --input super-linter-TERRAFORM_TERRASCAN.tap --output $(System.DefaultWorkingDirectory)/Converted --name TerraScan-Report.xml

Now in my case, I knew that I was targeting Terraform for the test results. The TAP reports that are produced are labelled “super-linter-TERRAFORM.tap” and “super-linter-TERRAFORM_TERRASCAN.tap” respectively. If you’re curious how to identify that, you can use the PublishBuildArtifacts task to capture the raw file output as a pipeline artifact. Alternatively, you could make the tap-junit call into a loop and more dynamic if needed, as sketched below.
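For example, a hypothetical loop over whatever TAP reports were produced might look like this (paths match the docker cp step above; I only tested the two explicit calls):

# Convert every TAP report found, naming each suite after its source file
for tap in $(System.DefaultWorkingDirectory)/GHLinterReport/super-linter-*.tap; do
  name=$(basename "$tap" .tap)
  tap-junit --pretty --suite "$name" --input "$tap" --output $(System.DefaultWorkingDirectory)/Converted --name "$name-Report.xml"
done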

Note: The SED commands were taken from the R2DevOps example.

Azure Pipeline – GitHub Super-Linter – Convert TAP Script

Publishing the results

Now that we have the report(s) converted to JUnit format, we can use the Publish Test Results task and see the results. Because the GitHub Super-Linter executes multiple linters, there are multiple reports available, each of which you need to convert.

In my case, I created two explicit Publish Test Results tasks. You may wonder why: first, so that I could point to each specific converted XML file; but also, notice how I have included the ‘testRunTitle‘ field in the task.

Azure Pipeline – GitHub Super-Linter – Publish Test Results
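As a sketch, those two tasks might look like this in YAML (file paths matching the conversion step above):

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '$(System.DefaultWorkingDirectory)/Converted/TFLint-Report.xml'
    testRunTitle: 'TFLint'

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '$(System.DefaultWorkingDirectory)/Converted/TerraScan-Report.xml'
    testRunTitle: 'TerraScan'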

When we look at the Test Results in the pipeline, notice how each test result is grouped by the Title. This way, I can easily see what tests passed/failed for TFLint and TerraScan respectively, versus seeing one giant mixed list.

Azure Pipeline – GitHub Super-Linter – Test Results

Conclusion

In this example, we were able to run the GitHub Super-Linter, take the resulting TAP report, convert it to JUnit XML, and publish it as Test Results to the Azure pipeline! There were a few extra steps needed, but this gives us a clear and clean Test Report.

If you have any other ideas or suggestions for similar articles around DevOps, Infrastructure-as-Code (IaC), Terraform, etc. let me know!


Previously, I published a blog article on Publishing TFSec Terraform Quality Checks to Azure DevOps Pipelines. Continuing on the topic of working with DevOps and performing quality checks on our Infrastructure-as-Code (IaC), this article will be similar, but focused on using Checkov.

Checkov is a static code analysis tool for infrastructure-as-code, published and maintained by BridgeCrew. It detects security and compliance misconfigurations in various templating languages including Terraform, Azure Resource Manager (ARM), and CloudFormation, among others.

Quality Checks for Terraform

Similar to the previously mentioned article, I have been focusing lately on DevOps and Infrastructure-as-Code (IaC), and in particular HashiCorp Terraform. One of the tools that I am using to perform quality checks against my Terraform templates is BridgeCrew’s Checkov.

At the time of this writing, Checkov has:

  • 181 x AWS checks
  • 106 x Azure checks
  • 67 x GCP checks
  • 142 x Kubernetes checks

When you look at the documentation for Checkov, you will see that, aside from installing it locally, you can also run Checkov in a Docker container. Here is the command you can use to do so:

docker run --tty --volume /directory-to-terraform-files:/tf bridgecrew/checkov --directory /tf

To run this in an Azure DevOps pipeline, this is what the Job looks like…

Azure Pipeline code running Checkov Docker container
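In YAML form, a minimal sketch of that job step might be (the display name is a placeholder):

- script: |
    # Scan the pipeline's working directory with Checkov
    docker run --tty --volume $(System.DefaultWorkingDirectory):/tf bridgecrew/checkov --directory /tf
  displayName: 'Run Checkov Scan'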

When you run this in the Azure pipeline, this is the type of output you would see. Notice that the execution exits with a non-zero exit code if a potential problem is detected. This enables us to use it in a CI/CD pipeline and error out, as we would expect.

Azure Pipeline displaying Checkov Results Output

Azure Pipeline displaying Checkov Exit Code

That’s all great, but our goal is to use this in a DevOps CI/CD pipeline. And we want to be able to consume the results in the pipeline (not just from the terminal).

Publish the Results

In Azure Pipelines, the Publish Test Results task is used to publish test results. But how do we get the results out of the container, and into one of the supported results formats?

Here’s what worked for me…

docker run --tty --volume $(System.DefaultWorkingDirectory):/tf bridgecrew/checkov --directory /tf --output junitxml > $(System.DefaultWorkingDirectory)/Checkov-Report.xml

Notice a few things in this revised docker run command. First, notice the --output junitxml parameter. The Checkov Results documentation shows that we can request different output types by using this --output parameter; the supported formats are CLI, JSON, and JUnit XML. Knowing that the Azure DevOps Publish Test Results task supports JUnit as a test result format, that is what we are targeting.

Next, you’ll notice that, in order to retrieve the output from the Docker container into the pipeline, we are redirecting StdOut to a file within the working directory of the pipeline (the ‘>‘ part).

After we get the output from the docker container, we can now use the Publish Test Results task, and publish the results to the pipeline.

Azure Pipeline code Publishing Checkov Test Results

A Little More Work Required

While that approach may seem simple and straightforward, you’re about to discover a few gotchas.

First, if Checkov has any issues with resolving anything in your Terraform code (like, say, a module reference), the terminal will show these as Warnings (as depicted below).

Checkov Warnings Output Example

The problem is, these Warnings will also appear in the converted/produced XML file.

Checkov Warnings in the XML File

So… when the Azure DevOps pipeline runs the Publish Test Results task, it throws this error:

##[warning]Failed to read /home/vsts/work/1/s/junit.xml. Error : Data at the root level is invalid. Line 1, position 1..

Here’s what the error looked like in the pipeline output:

Checkov Test Results – Error on First Line

So, let’s assume you’ve addressed these Warnings and re-run the scan. In my case, the above-mentioned error no longer occurred! But wait, there’s more!

This time, there was a new error. One about an invalid hexadecimal character, and on a different line.

##[warning]Failed to read /home/vsts/work/1/s/junit.xml. Error : ' ', hexadecimal value 0x1B, is an invalid character. Line 9, position 1..

Here’s what the error looked like in the pipeline output:

Checkov Test Results – Error on Last Line

When you download and look at the resulting XML file, you will notice that at the end of the file, there are 2 additional lines:

 [0m [0m [0m
 [0m [0m [0m [0m [0m
Checkov Invalid Hexadecimal Character in the XML File

This prevents the Publish Test Results task from publishing the results in the pipeline.

Digging Deeper

If you dig a little deeper into the Checkov code, you will find the Python script that generates the XML file output. Now, I don’t know Python myself, but I did encounter the same hexadecimal-value error with another Terraform scanning tool that I’m using (namely TFSec). But when I used the same method that I am using for Checkov (the same StdOut redirect), the XML file from TFSec rendered correctly and published successfully.

So, that made me think that the issue is with how Checkov is generating the XML file, and may be related to the parser or formatter. For reference, the TFSec product is using the following JUnit Schema.

The Good News

So the good news is, I was able to get in touch with the BridgeCrew Checkov team, and share with them what I was trying to accomplish, my challenges, and my findings. At the time of writing this article, there is still an issue with the XML output. But wait…

The Even Better News

While working through a similar Publish Test Results issue for another tool (the GitHub Super-Linter, which I will post about in a separate article), I was able to come up with a work-around!

Do you remember what the issue was after resolving the ‘Warnings’ in the XML file? It was the last 2 lines in the file that were causing a problem. So, after some research, I realized that all I have to do (at least until the output is officially fixed) is remove those last 2 lines. The rest of the XML file looked fine.

And so, I added a script after capturing the converted output into JUnit XML, and ran sed -i '$d' to remove the 2 error-throwing lines. Now, I am not a Linux person, so let me break it down for anyone else in the same situation.

I Googled “sed command delete line” and came across this article: Unix Sed Command to Delete Lines in File – 15 Examples. In it, Example 2 explains that the command sed '$d' file is used to remove the footer (last) line in a file; the $ indicates the last line of a file. OK, now we’re getting somewhere.

At the end of Example 15 in the article, it stated:

Note: In all the above examples, the sed command prints the contents of the file on the unix or linux terminal by removing the lines. However the sed command does not remove the lines from the source file. To Remove the lines from the source file itself, use the -i option with sed command.

And so, by adding the -i along with the '$d' in the command, I was able to successfully remove the last line in the XML file. But since there were 2 lines causing the issue, I had to run it twice.
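In other words, the cleanup amounts to these two commands (the report path matching the earlier docker run step):

# Each invocation deletes the file's current last line, in place
sed -i '$d' $(System.DefaultWorkingDirectory)/Checkov-Report.xml
sed -i '$d' $(System.DefaultWorkingDirectory)/Checkov-Report.xml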

Checkov – Test Results – Line Removal Script

Pulling It All Together

All that was left was to execute the Publish Test Results task.

Checkov – Publish Test Results Task

And now, since the XML file is formatted correctly, Azure Pipelines can read the results properly, and we get the following results:

Checkov – Test Results Published

Conclusion

Well, even though the general approach is similar to what I had to do for TFSec (as detailed in my Publishing TFSec Terraform Quality Checks to Azure DevOps Pipelines article), as you can see, there were a few additional bumps in the road.

I can also happily state that I am working with the BridgeCrew team, as we collectively look for a more permanent solution. But for now, at least the work-around is an option. With this work-around we can successfully publish our Checkov test results to the Azure Pipeline.

Bonus: Stay tuned for another similar article about the GitHub Super-Linter!

Update (now with less ‘-t’)

While I was writing this blog post and working with the BridgeCrew team, they discovered that the command used to run Checkov in a Docker container (specifically the docker run --tty part) was causing the two additional lines in the XML output.

What this means is that you don’t have to run the sed commands after receiving the converted XML output from Checkov, and the new/revised Docker command is this:

docker run --volume $(System.DefaultWorkingDirectory):/tf bridgecrew/checkov --directory /tf --output junitxml > $(System.DefaultWorkingDirectory)/Checkov-Report.xml

From there, you can publish the results by using the Publish Test Results task, as was previously mentioned!

Kudos to the great work by the BridgeCrew team, their transparency, and their efforts to help and contribute to the community!
