Security and compliance of container images is a hot topic. The challenge discussed in this article is how to integrate a container vulnerability scanning solution into a CI/CD process. Ideally, the outcome of such a pipeline is a secure golden image that meets the security and compliance requirements of your company and, at the same time, contains all the software packages that the application teams need.
We took a look at several options on the market and picked Clair for the following reasons:
- Clair was developed by the CoreOS team, now part of Red Hat, so it is backed by a major open source contributor and vendor.
- Clair has been around for a while and there is a rich ecosystem of tools that use the Clair API, which makes it easy to integrate into a CI/CD process.
Figure 1 Clair Container Scanner
The solution is shown in Figure 1. It is worth mentioning that Clair follows a traditional client-server architecture, where the Clair server is an API hosted in a container and the clients are third-party tools that send commands to the API. The solution contains the following key components:
- Clair container – this is a container that hosts the scanning API. Clients need to make API requests to initiate any action (including image scanning). In our implementation, the Clair container has been slightly modified and hosted on AWS Fargate. It is a separate task in an autoscaling group. Details about the implementation are given below.
- PostgreSQL DB – a vulnerability database that is regularly updated with vulnerabilities for both OS and non-OS components of the container.
- Clients – tools that implement simple CLI commands on top of the Clair functionality. The CLI commands trigger requests to the Clair API. While there is a variety of client options, Klar has been chosen for its simplicity and functionality. In this post, we execute Klar CLI commands from a machine in the VPC. However, it is possible to install the tool as part of the CI/CD process – as an AWS CodeBuild project, for example.
- Application Load Balancer – an ALB is used to present a consistent DNS record in front of the Clair container hosted on AWS Fargate. It is also easy to integrate the ALB with AWS Certificate Manager and apply a free SSL certificate so that calls to the service are encrypted.
1. Create an RDS database
Figure 2 Clair DB deployment
Clair requires a functioning database instance. While the example presented on the official page creates a PostgreSQL container, it is generally recommended to run a separate PostgreSQL instance. In this example, we make use of a managed RDS instance.
In our experience, a small t2 instance with 10 GB of storage is sufficient for smooth operation. The database should, however, be monitored.
Figure 3 Clair DB Settings
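For reference, a comparable instance could be provisioned with the AWS CLI. This is an illustrative sketch only: the identifier, credentials, and sizing below are placeholder values, not the exact settings shown in Figure 3.

```shell
# Illustrative sketch: provision a small PostgreSQL RDS instance for Clair.
# The identifier, username, and password are placeholder values.
aws rds create-db-instance \
    --db-instance-identifier clair-db \
    --db-instance-class db.t2.small \
    --engine postgres \
    --allocated-storage 10 \
    --db-name clairdb \
    --master-username clairdb \
    --master-user-password Password123 \
    --no-publicly-accessible
```

In a real deployment you would also specify the subnet group and security groups so that the instance lands in the same VPC as the Fargate tasks.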
2. Prepare the container image
The Clair container image is public. We recommend pulling the latest stable release.
NOTE: If you specify "image:latest", there is no guarantee that the release will be stable. The exact release must be specified explicitly – in our case this is v2.0.7. In the example, we use an EC2 instance (a t2.medium, as some RAM is needed for testing) with IAM permissions to upload an image to ECR.
So, here’s what should be done:
2.1. Install Clair
First, let's install and start Docker:
$ yum install -y docker
$ service docker start
Next, we download the Clair container image. At the time of writing this article, the latest stable release is v2.0.7. Note that if you download the latest development release, the tools might not work.
$ docker pull quay.io/coreos/clair:v2.0.7
We also install pip, which we will need shortly to install Docker Compose:
$ yum install python-pip
In order to build an image that will work on AWS Fargate, we download and modify the Docker Compose file:
$ curl -L https://raw.githubusercontent.com/coreos/clair/master/contrib/compose/docker-compose.yml -o $HOME/docker-compose.yml
$ mkdir $HOME/clair_config
Then we download the configuration file:
$ curl -L https://raw.githubusercontent.com/coreos/clair/master/config.yaml.sample -o $HOME/clair_config/config.yaml
Edit the configuration file:
The important part is to update the file with the connection string of the PostgreSQL DB we created. In our case this would be:
# PostgreSQL connection string
source: postgresql://clairdb:Password123@clair_db_uri/clairdb
# clairdb:Password123 is the username and password of the PostgreSQL database
# clairdb after the '/' is the name of the database schema
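As a quick sanity check, the source string can be assembled from its parts. The endpoint below is a hypothetical RDS hostname standing in for clair_db_uri:

```shell
# Assemble the Clair "source" connection string from its parts.
# DB_HOST is a hypothetical RDS endpoint; substitute your own.
DB_USER=clairdb
DB_PASS=Password123
DB_HOST=clair-db.abc123.eu-west-1.rds.amazonaws.com
DB_NAME=clairdb
SOURCE="postgresql://${DB_USER}:${DB_PASS}@${DB_HOST}/${DB_NAME}"
echo "$SOURCE"
```

The same pattern makes it easy to inject the individual parts as environment variables later instead of hardcoding them.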
NOTE: In this example, for simplicity, we hardcode the password in the configuration file, which is generally bad practice. In a production environment, the sensitive information can instead be retrieved from the SSM Parameter Store and passed to the container in Fargate as an environment variable.
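One way to do this is the "secrets" section of an ECS task definition, which makes Fargate read the value from the Parameter Store at launch. The parameter path and ARN below are hypothetical:

```json
{
  "containerDefinitions": [
    {
      "name": "clair",
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:ssm:eu-west-1:389211405395:parameter/clair/db-password"
        }
      ]
    }
  ]
}
```

Note that the task execution role must be granted permission to read the parameter (and the KMS key, if it is a SecureString).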
Next, we install Docker Compose through the following pip command:
$ pip install docker-compose
Before running Docker Compose, we modify the Compose file we downloaded in the following manner, so that the container picks up our configuration file:
command: [-config, /config/config.yaml]
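After the change, the clair service stanza in the Compose file looks roughly like this (a sketch based on the upstream contrib file; details may differ between Clair releases):

```yaml
clair:
  image: quay.io/coreos/clair:v2.0.7
  ports:
    - "6060:6060"
    - "6061:6061"
  volumes:
    - ./clair_config:/config
  command: [-config, /config/config.yaml]
```

The volume mount is what makes /config/config.yaml visible inside the container; later, for Fargate, we bake the file into the image instead.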
Finally, we launch docker compose:
$ docker-compose -f $HOME/docker-compose.yml up -d
…and check whether the container is running:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
42616dda4906 quay.io/coreos/clair:v2.0.7 "/clair -config /con…" 23 hours ago Up 6 seconds 0.0.0.0:6060-6061->6060-6061/tcp clair_clair
2.2. Install and test Clair Klar
$ yum install go
$ go get github.com/optiopay/klar
$ find -name "*klar*"
./go/bin/klar
$ export PATH=$PATH:/root/go/bin/
$ export CLAIR_ADDR=localhost
$ export CLAIR_OUTPUT=Low
Now, we can test if the Clair API works:
$ klar postgres:9.5.1
NOTE: In this case we pull an image from docker.io. In reality, any container repository can be used such as Amazon ECR.
Found in: shadow [1:4.2-3+deb8u1]
An issue was discovered in shadow 4.5. newgidmap (in shadow-utils) is setuid and allows an unprivileged user to be placed in a user namespace where setgroups(2) is permitted. This allows an attacker to remove themselves from a supplementary group, which may allow access to certain filesystem paths if the administrator has used “group blacklisting” (e.g., chmod g-rwx) to restrict access to paths. This flaw effectively reverts a security feature in the kernel (in particular, the /proc/self/setgroups knob) to prevent this sort of privilege escalation.
Found in: zlib [1:1.2.8.dfsg-2]
inffast.c in zlib 1.2.8 might allow context-dependent attackers to have unspecified impact by leveraging improper pointer arithmetic.
NOTE: The severity of the Common Vulnerabilities and Exposures (CVEs) that Clair reports is configurable. A user can also define passing criteria and whitelists. More information about the configuration options can be found here.
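For instance, when wiring Klar into a pipeline stage, its exit code can drive the pass/fail decision. This is a sketch: the gate function and image name are illustrative, and CLAIR_THRESHOLD is Klar's tolerated vulnerability count before it returns a non-zero exit code.

```shell
# Sketch of a CI gate around Klar. scan_gate fails the stage when the
# scan command exits non-zero, which Klar does once the number of
# reported vulnerabilities exceeds CLAIR_THRESHOLD.
export CLAIR_ADDR=localhost
export CLAIR_OUTPUT=High
export CLAIR_THRESHOLD=10   # tolerate up to 10 high-severity CVEs

scan_gate() {
    if "$@"; then
        echo "scan passed"
    else
        echo "scan failed" >&2
        return 1
    fi
}

# In the pipeline this would be invoked as:
#   scan_gate klar postgres:9.5.1
```

In an AWS CodeBuild project, the non-zero return code is enough to fail the build phase and stop the image from being promoted.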
2.3. Prepare the image for AWS Fargate
First, we create a new image based on the running container:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
42616dda4906 quay.io/coreos/clair:v2.0.7 "/clair -config /con…" 4 minutes ago Up 4 minutes 0.0.0.0:6060-6061->6060-6061/tcp clair_clair
$ docker commit 42616dda4906 fargate_clair_image:latest
Then we can verify whether the image is there:
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
fargate_clair_image latest 67c1aac60aeb 4 seconds ago 353MB
klar latest abb9667bcc83 4 days ago 353MB
quay.io/coreos/clair-git latest be456957ff54 6 days ago 52.6MB
quay.io/coreos/clair v2.0.7 7968a46189ae 2 months ago 353MB
In order to prepare the image to run in a standalone manner on Fargate, we need to create a new image that contains the configuration file. This can be achieved in many ways. For example, with a two-line Dockerfile based on the image we just committed:
$ cd ~/clair_config
$ nano Dockerfile
FROM fargate_clair_image:latest
COPY config.yaml /config/config.yaml
Finally, we can build the image that we are going to use to launch the Fargate container service:
$ docker build -t fargate_clair:latest .
3. Create an ECR repo and upload the prepared working Clair image
Now, we are ready to create the repo:
Figure 4 ECR Repository
And … upload the image we prepared in the previous step:
$ $(aws ecr get-login --no-include-email --region eu-west-1)
$ docker tag fargate_clair:latest 389211405395.dkr.ecr.eu-west-1.amazonaws.com/vulnerability_scan:v2.0.7
$ docker push 389211405395.dkr.ecr.eu-west-1.amazonaws.com/vulnerability_scan:v2.0.7
4. Create a load balancer
Figure 5 Application Load Balancer
5. Create the AWS Fargate setup
We will not go through all the details; the values given below have been tested to work smoothly. Prepare the Fargate task referencing the ECR URL, which in our case is 389211405395.dkr.ecr.eu-west-1.amazonaws.com/vulnerability_scan:v2.0.7:
- CPU Units: 1024 (or 1 vCPU)
- Memory: 2048 MB
- Task Execution Role: ecsTaskExecutionRole, which is an AWS managed role
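These values map onto an ECS task definition roughly as follows (the family and container names are illustrative):

```json
{
  "family": "clair-scanner",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "1024",
  "memory": "2048",
  "executionRoleArn": "arn:aws:iam::389211405395:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "clair",
      "image": "389211405395.dkr.ecr.eu-west-1.amazonaws.com/vulnerability_scan:v2.0.7",
      "portMappings": [
        { "containerPort": 6060, "protocol": "tcp" },
        { "containerPort": 6061, "protocol": "tcp" }
      ]
    }
  ]
}
```

Both ports are mapped because 6060 serves the scanning API while 6061 serves the health endpoint used by the target group.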
6. Create the Fargate cluster and service
- Security group – must allow all traffic out (including non-TCP) and must allow traffic in from the LB listener IP range.
- Autoscaling – normally an auto-healing configuration is enough to create a service that is reliably utilized by a CI/CD process. However, if there is a need for parallel scanning of images, the maximum number of containers should be increased.
- ECS Autoscaling Service metric – CPU.
- Networking – in our specific setup the containers are hosted in a private security zone that is not accessible from the internet. The load balancer has a listener in the DMZ on port 6060 forwarding to port 6060 in the backend. However, the health check in the target group is on another port (by default 6061).
7. Verify that service works
7.1. Check whether the task is running
Normally, the task should be in “RUNNING” status.
Figure 6 Fargate Running Container
Indications that something is wrong include:
- The service is stuck in the "ACTIVATING" state. This usually means that the container is constantly being restarted. In this case, it would be helpful to take a look at the CloudWatch log of the respective task. If there is no log available at all, the task is typically unable to pull the image, which points to a networking or repository-authentication issue.
- The CloudWatch logs of the task should indicate that the container successfully connects to and updates the vulnerability database, which in our case is an RDS instance. If there are any errors there, scanning of images is not expected to succeed. By default, the vulnerability database is updated every 6 hours, but this interval is configurable.
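A quick end-to-end check is to probe Clair's health endpoint through the load balancer. This is a sketch: load_balancer_uri is a placeholder for the ALB DNS name, and the /health path applies to Clair v2.

```shell
# Probe the Clair v2 health endpoint behind the load balancer.
# load_balancer_uri is a placeholder; requires network access to the ALB.
curl -fsS "http://load_balancer_uri:6061/health" && echo "Clair is healthy"
```

A 200 response here confirms both that the target group health check can pass and that the container is up.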
7.2. Verify whether the Klar tool is still able to scan using the Fargate Clair container
First, define the environment variable that identifies the Clair API:
$ export CLAIR_ADDR=http://load_balancer_uri:6060
Second, test Klar:
$ klar postgres:9.5.1
The result should be the same as the result we got in section 2.2.
In conclusion, Clair is a suitable open source solution that can be easily integrated into a CI/CD pipeline. This article demonstrates a simple implementation that helps address the container security problem in an automated manner. However, there are many options and customizations that are outside the scope of this article. Please get in touch if we can help with AWS solution designs or any other aspect of the AWS platform.