
Trust, but verify: Automating AMI creation and validation with HashiCorp Packer & Chef InSpec

As we began moving apps into AWS, the security team challenged us to “harden” the AMI used for our ec2 instances. We were already planning to use HashiCorp Terraform for infrastructure as code, so HashiCorp Packer was an obvious choice, and while not always elegant it has done a good job.

Packer uses the concepts of builders and provisioners: a builder spins up an ec2 instance, and provisioners run scripts to configure it. Once complete, the instance is shut down and an AMI is “baked” from it. We build multiple AMIs that inherit from each other, so an image with our minimum configuration applied is used as the base for other AMIs that in turn install and provision additional resources; rinse and repeat. To keep things simple, Jenkins polls our Packer repository for changes and kicks off builds as soon as commits are merged to master.

At the beginning of our journey there were many changes to our ‘base’ image, and multiple sets of eyes on those builds as the image was hardened and configured. Over time the configurations stabilized and updates happened infrequently, which meant our AMIs got a bit stale - OS patches piled up, and we realized that all our AMIs needed to be built on a schedule. Jenkins can build a job on a schedule, but with infrequent changes to the Packer configurations there were fewer eyes on the build process - if something failed in a way that didn’t cause Packer to fail, we could produce AMIs with defects that would go unnoticed, or worse, noticed by customers.

Building AMIs frequently automates us out of the OS patch management business, but instances need to be terminated and configurations updated to take advantage of the new AMIs. What if a defect were introduced during a Packer build - something sneaky enough that the build didn’t fail, yet damaging enough that an application couldn’t start or communicate on the updated AMI? Since the processes that build the AMI and restart the applications are all automated, it’s easy to imagine coming in to work and discovering multiple applications in boot loops, failing health checks.

Packer will fail a build and not produce an AMI if a command exits with a non-zero status. But what if Packer is just copying iptables rules into place? It’s easy enough to foul one up and only realize the mistake later, when applications can’t communicate and get stuck in ALB health-check reboot land, causing application outages and wasting money.

Chef InSpec to the rescue: “InSpec is an open-source testing framework for infrastructure with a human-readable language for specifying compliance, security and other policy requirements.” While I imagine InSpec is used primarily to test security configurations, I decided it would be well suited for acceptance testing as well. By integrating InSpec with Packer I can verify that the configurations applied by Packer’s provisioners are actually getting … applied, and guarantee that an AMI built with our process meets a base level of acceptance. For example, if I enable selinux I want to test that when an ec2 instance is spun up, selinux is actually enabled; if docker is installed I want to verify that the applied configuration produces the expected results using the docker binary. This goes beyond checking configuration files - it’s better if a process can start, run, and respond, so that the configurations are truly validated.

Lastly, I didn’t want to install anything on the ec2 instance being tested, and ideally I wanted to install as little as possible on the instance that runs the packer build and InSpec tests.

Ok, enough rambling, let’s get into the details.



Creating an AMI with Packer

First, a directory structure was created. The base directory contains a common variable file and a bash script that kicks off a Packer build; sub-directories are named for the image that will be built and contain the .json file which defines the builders and provisioners used by Packer (a quick way to stub out this layout is sketched after the list):

  • basedir:
    • common-vars.json
    • packer-build.sh
  • basedir/base:
    • base.json
    • config.sh
    • other-config.sh
  • basedir/secondary:
    • secondary.json
    • config.sh
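
A minimal sketch for stubbing out that layout:

    # create the skeleton used throughout this post
    mkdir -p basedir/base basedir/secondary
    touch basedir/common-vars.json basedir/packer-build.sh
    touch basedir/base/{base.json,config.sh,other-config.sh}
    touch basedir/secondary/{secondary.json,config.sh}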



packer-build.sh

This script is run by Jenkins (or locally for testing purposes) and takes a single argument that defines which image should be built, so packer-build.sh base would create an image named base using files in the base directory. We use Amazon Linux as our base image, but this could easily be tailored to suit your needs by updating the amazon_linux_amilookup function.

    #!/bin/bash
    # this set line is important so that jenkins will mark the build as "failed" appropriately
    set -euo pipefail

    # for the cascading images we'll need to reference which AMI to use as a starting point
    function amilookup {
        aws ec2 describe-images --filters Name=tag-key,Values=Name Name=tag-value,Values="$1" --region us-west-2 --query 'Images[*].[ImageId,CreationDate]' --output text | sort -k 2 | tail -1 | cut -f 1
    }

    # we need to find the most recent amazon linux image - this requires some 'aws ec2' magic
    function amazon_linux_amilookup {
        aws ec2 describe-images --owners amazon --filters "Name=root-device-type,Values=ebs" "Name=virtualization-type,Values=hvm" --region us-west-2 --query 'Images[*].[ImageId,CreationDate,Name]' --output text | grep "amzn-ami-hvm.*x86_64-gp2" | sort -k 2 | tail -1 | cut -f 1
    }

    # for use with inspec
    export image_name="$1"

    export AMI_AMAZON_LINUX="$(amazon_linux_amilookup)"
    export AMI_BASE=$(amilookup base)
    export AMI_SECONDARY=$(amilookup secondary)

    cd "$1"

    # making an assumption that if you are ec2-user that we are on
    # an ec2 instance and the packer command should be in your path.
    if [[ $USER = "ec2-user" ]] ; then
      packercmd=packer
    else
      packercmd=~/bin/packer
    fi
    command -v $packercmd >/dev/null 2>&1 || { echo >&2 "I require packer but it's not installed.  Aborting."; exit 1; }

    # finally the magic is happening... notice the pipe to tee, that's important for InSpec
    $packercmd build -machine-readable -var-file ../common-vars.json $1.json | tee build.log


common-vars.json

Here we define some variables that will be referenced by all our images. To make ssh connections easier we use an ssh key that is already populated in AWS.

    {
        "region": "us-west-2",
        "instance_type": "t2.micro",
        "ssh_username": "ec2-user",
        "ssh_keypair_name": "ssh_key_name",
        "ssh_private_key_file": "../inspec_files/ssh_key_name",
        "ssh_pty": "true",
        "vpc_id": "vpc-xxxxxxxxx",
        "subnet_id": "subnet-xxxxxxxxx",
        "security_group_id": "sg-xxxxxxxxxx",
        "iam_instance_profile": "YourInstanceProfile",
        "ami_users": "123456,789123,456789"
    }
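
If the key pair doesn’t exist in AWS yet, it can be imported from the public half of the key. A minimal sketch, assuming the placeholder names above (note that AWS CLI v2 wants fileb:// where v1 accepts file://):

    # import the public key so instances launched by packer will accept it
    aws ec2 import-key-pair \
        --region us-west-2 \
        --key-name ssh_key_name \
        --public-key-material file://inspec_files/ssh_key_name.pub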


base/base.json

This file defines the builder (amazon-ebs), and then the provisioners (only using shell so far).

    {
        "variables": {
          "type": "",
          "region": "",
          "instance_type": "",
          "ssh_username": "",
          "vpc_id": "",
          "subnet_id": "",
          "security_group_id": "",
          "iam_instance_profile": "",
          "base_ami": "{{env `AMI_AMAZON_LINUX`}}",
          "ami_users": ""
        },
        "builders": [{
          "type": "amazon-ebs",
          "region": "{{user `region`}}",
          "source_ami": "{{user `base_ami`}}",
          "instance_type": "{{user `instance_type`}}",
          "ssh_username": "{{user `ssh_username`}}",
          "ssh_keypair_name": "{{user `ssh_keypair_name`}}",
          "ssh_private_key_file": "{{user `ssh_private_key_file`}}",
          "ami_name": "base.{{timestamp}}",
          "vpc_id": "{{user `vpc_id`}}",
          "subnet_id": "{{user `subnet_id`}}",
          "security_group_id": "{{user `security_group_id`}}",
          "iam_instance_profile": "{{user `iam_instance_profile`}}",
          "ami_users": "{{user `ami_users`}}",
          "run_tags": {
            "Name": "packer-builder:dev"
          },
          "tags": {
            "Name": "base",
            "os": "linux"
          }
        }],
        "provisioners": [
          {
            "type": "shell",
            "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E -S bash -x '{{ .Path }}'",
            "script": "config.sh"
          },
          {
            "type": "shell",
            "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E -S bash -x '{{ .Path }}'",
            "script": "other-config.sh"
          }
        ]
    }


config.sh & other-config.sh

These are simple shell scripts that run commands to configure the instance - you would write these to comply with your business requirements, to set up things like docker, selinux, logging configuration, etc.
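
As an illustration only - not our actual hardening - a config.sh for Amazon Linux might look something like this minimal sketch:

    #!/bin/bash
    # sample config.sh: install docker and force selinux into enforcing mode
    # (package and service names assume Amazon Linux)
    yum install -y docker
    chkconfig docker on
    sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config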


Running the packer-build.sh script

Now that you have the directory structure and scripts in place, and assuming you have Packer installed and IAM configured, you can run the packer-build.sh script manually or have it triggered by your build tool of choice. Either way it takes a single argument corresponding to the image you want to build; assuming everything works, you should end up with an AMI built to your spec.
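
For example, a manual run for the base image might look like this (assuming AWS credentials are already configured in your shell):

    cd basedir
    ./packer-build.sh base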


Integrating InSpec tests

As discussed earlier, building an AMI is really only half the battle; acceptance testing is what makes this fully automated process run smoothly. There are a couple of hints in the packer-build.sh script and common-vars.json that help set up our environment for InSpec testing, like exporting the name of the image being built, along with the ssh keypair and username. Additional work needs to be done to discover the IP address of the ec2 instance, since Packer doesn’t expose this data. First you’ll need some additional provisioners added to your image.json files:

    {
      "type": "shell",
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E -S bash -x '{{ .Path }}'",
      "expect_disconnect": "true",
      "script": "../inspec_files/reboot.sh"
    }

We need to restart the instance since changes made in our config.sh scripts may not take effect until boot time - things like services starting automatically or selinux becoming enabled. Packer will happily wait while the instance reboots and pick up where it left off.

Next you’ll need to actually run the InSpec tests. Here we use the shell-local provisioner, which runs scripts on the node performing the Packer build:

    {
      "pause_before": "60s",
      "type": "shell-local",
      "command": "bash ../inspec_files/inspec_tests.sh"
    }

Notice the pause_before - InSpec is much less tolerant of connection timeouts, so we wait patiently for the node to come back online before running the InSpec tests.

The directory structure for InSpec (the inspec_files directory referenced by the scripts above) looks like this:

  • inspec_files
    • base.rb
    • inspec_tests.sh
    • secondary.rb
    • ssh_key_name
    • ssh_key_name.pub
    • reboot.sh


base.rb

This file defines the acceptance tests (InSpec tests) that will run against the instance after it has been rebooted. Note that the name of this file (base.rb) matches the name of the directory (in this case base/) we are building - each build requires a similarly named .rb file containing InSpec tests.

    describe command('getenforce') do
      its('exit_status') {should eq 0}
      its('stdout') { should eq "Enforcing\n" }
    end

This is just a sample test - you’d want to write tests for as many services as you have configured. Refer to the InSpec documentation and tutorials for what you can test (hint: all the things).
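
For instance, if config.sh sets up docker, a companion test could exercise the daemon rather than just its config files - a sketch using InSpec’s built-in service and command resources:

    # verify docker survived the reboot and the daemon actually responds
    describe service('docker') do
      it { should be_enabled }
      it { should be_running }
    end

    describe command('docker info') do
      its('exit_status') { should eq 0 }
    end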


inspec_tests.sh

This file is the real MVP… using a function to wrap a docker command is the key to avoiding installing InSpec on the ec2 instance being tested, as well as on the ec2 instance Packer runs from - in our case the nodes running the Packer build already happen to be running docker, so all that’s required is pulling an image.

Earlier in the packer-build.sh script you may have noticed we pipe the output through tee to build.log. Because Packer doesn’t expose the IP of the ec2 instance, we scrape build.log for the instance ID and then ask AWS for the IP - a little convoluted, but the cleanest method I could come up with. With the inspec function defined, the IP address discovered, and the ssh user and key known, we can simply run InSpec using the test file that matches the name passed to packer-build.sh. Any failure in InSpec will cause Packer to fail, which in turn causes Jenkins to fail the build - if you cover enough configuration with InSpec tests you can provide some semblance of a guarantee that the AMI has been built to spec and will provide all the necessary services when it is spun up!

    #!/bin/bash
    if [[ -z ${image_name} ]] ; then echo "'image_name' not set, exiting" ; exit 1 ; fi

    echo "preparing inspec environment"
    docker pull chef/inspec
    inspec () {
      # note: $1 is intentionally left unquoted so the single command string
      # splits into separate arguments for the container's inspec entrypoint
      docker run --rm -v "$(pwd)/../inspec_files/:/share:Z" chef/inspec $1
    }

    # get ip address of instance, then we can use the private key in this directory
    # and run inspec against the instance
    instanceIP=$(aws --region us-west-2 ec2 describe-instances --output text --instance-id="$(awk -F'[()]' '/Waiting for instance/{print $2}' ../${image_name}/build.log)" --query "Reservations[*].Instances[*].PrivateIpAddress")
    if [[ -n "$instanceIP" ]] ; then
      echo "Running inspec tests:"
      inspec "exec ${image_name}.rb -i packer_builder_rsa --sudo -t ssh://ec2-user@${instanceIP}"
      if [[ $? -ne "0" ]] ; then exit 1 ; fi
    else
      echo "Unable to determine instance IP - exiting!"
      exit 1
    fi
    if [[ -f ../${image_name}/build.log ]] ; then rm ../${image_name}/build.log ; fi
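
When troubleshooting it can help to bypass the log scraping and point the containerized InSpec at a known instance directly - for example, from the basedir (the IP below is a hypothetical placeholder):

    docker run --rm -v "$(pwd)/inspec_files/:/share:Z" chef/inspec \
        exec base.rb -i ssh_key_name --sudo -t ssh://ec2-user@10.0.0.123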


secondary.rb

Tests for a different image (named ‘secondary’ for this example). In our case we have a base image and then images that build on it with additional configuration; I have chosen not to duplicate tests in these files, only to test for the additional services that have been configured.


ssh-key-name & ssh-key-name.pub

Because Packer does not expose the ssh connection information without running in debug mode, we define a static ssh key instead of using the Packer-generated keys. I store these in the inspec_files folder to keep them in a central location that can be mounted into the InSpec docker container without additional mount options.
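
Generating the static key pair is a one-time step - no passphrase, since the tests run unattended:

    # generate the reusable key pair referenced in common-vars.json
    ssh-keygen -t rsa -b 4096 -N '' -f inspec_files/ssh_key_name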


reboot.sh

It, uh, reboots the ec2 instance. Rebooting prior to running the InSpec tests ensures that all services and configurations are applied - if a new kernel has been installed it should now be running, etc.

    #!/bin/bash -e

    echo "rebooting..."
    /sbin/reboot
    sleep 10


Complete packer json example

Here is a complete, sanitized version of our base build json file used by Packer. Note that we remove the ssh key used for the build process at the very end - whether you use a statically defined key or a dynamically generated key, this is an important step that is easy to miss!

    {
      "variables": {
        "type": "",
        "region": "",
        "instance_type": "",
        "ssh_username": "",
        "vpc_id": "",
        "subnet_id": "",
        "security_group_id": "",
        "iam_instance_profile": "",
        "base_ami": "{{env `AMI_AMAZON_LINUX`}}",
        "ami_users": ""
      },
      "builders": [{
        "type": "amazon-ebs",
        "region": "{{user `region`}}",
        "source_ami": "{{user `base_ami`}}",
        "instance_type": "{{user `instance_type`}}",
        "ssh_username": "{{user `ssh_username`}}",
        "ssh_keypair_name": "{{user `ssh_keypair_name`}}",
        "ssh_private_key_file": "{{user `ssh_private_key_file`}}",
        "ami_name": "base.{{timestamp}}",
        "vpc_id": "{{user `vpc_id`}}",
        "subnet_id": "{{user `subnet_id`}}",
        "security_group_id": "{{user `security_group_id`}}",
        "iam_instance_profile": "{{user `iam_instance_profile`}}",
        "ami_users": "{{user `ami_users`}}",
        "run_tags": {
          "Name": "packer-builder:dev"
        },
        "tags": {
          "Name": "base",
          "os": "linux"
        }
      }],
      "provisioners": [
        {
          "type": "shell",
          "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E -S bash -x '{{ .Path }}'",
          "script": "config.sh"
        },
        {
          "type": "shell",
          "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E -S bash -x '{{ .Path }}'",
          "script": "other-config.sh"
        },
        {
          "type": "shell",
          "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E -S bash -x '{{ .Path }}'",
          "expect_disconnect": "true",
          "script": "../inspec_files/reboot.sh"
        },
        {
          "pause_before": "60s",
          "type": "shell-local",
          "command": "bash ../inspec_files/inspec_tests.sh"
        },
        {
          "type": "shell",
          "inline": [ "rm /home/ec2-user/.ssh/authorized_keys" ]
        }
      ]
    }


Wrapping it all up

Creating the structure to test our AMIs was not trivial. There are a fair number of moving pieces and small gotchas that can fail builds outright, and troubleshooting can be complicated when you run this through a build system such as Jenkins. Hopefully the method and scripts provided above can help you along your journey to automate and test with Packer and InSpec!

