Calm Cloud Security - Containers and AWS ECR

Mar 28, 2022

Author Bio

Kyler Middleton is the Cloud Security Chick (well, officially the Cloud IAM Advocate) at IAM Pulse. She's part of our community team, and loves to connect with cloud operations and developers (read: that's you!). If you'd like to meet up with the community team and discuss your solutions, hacks, or how you'd like to help us elevate the practice of IAM, please reach out!


Transcript

Hey all!

I hope you're all doing great. Welcome back to our series on Calm Cloud Security - where we learn the theory and practice of cloud security without any gatekeeping.

As I'm recording this video, it's almost seventy degrees out, after a long, long winter! As soon as I'm finished with this video I'm going to go soak up the sun.

In our last video, we talked about what an ECR does as well as some basic theory around containers. After all that lofty theory, I want to put it into practice, and I bet you do too! In this video we'll build a container image with docker, test it, and push it to an ECR.

Let's start in the AWS console. I've already authenticated and run a terraform apply of our github terraform code, so we have an ECR. You can see the name there is "our-new-ecr", with the resource address over on the right.

Let's click into the ECR and you can see there are no images. If there were any, each one would be listed here with size, date, and other info.

Let's look at the image we'll be building.

First let's look at our Dockerfile. Here it is in Visual Studio Code so we get the pretty colors.
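Here's roughly what that file looks like, laid out so the line numbers called out below land in about the right place - the exact file is in the GitHub repo, so treat this as a sketch:

    FROM ubuntu:20.04

    # Copy the entrypoint script into the working directory (the filesystem root here)
    COPY docker_start.sh .
    RUN chmod +x ./docker_start.sh


    # Run the script at container start; the container exits when the script finishes
    ENTRYPOINT ["./docker_start.sh"]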

On line 1, we're grabbing the latest ubuntu:20.04 image from Docker Hub. Docker will cache this image locally, so next time we build it'll be even faster.

On line 4, we're copying a script called "docker_start.sh" into the image at the "." location, which means the working directory of the container's filesystem. Since we didn't set a working directory, Docker defaults to the filesystem root. We'll look at that script in a second.

On line 5, we're setting that script to be executable, which we need since we'll be running it as our next step.

And on line 9, we set an ENTRYPOINT of that same script, which means when the container launches, this script is run. As soon as the script finishes, the container will shut down and self-destruct. That's totally normal and desirable.

Let's check out our entrypoint script:
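Reconstructed from the walkthrough below, it looks something like this - the message wording is my approximation, and the real script lives in the repo:

    #!/bin/bash
    set -e

    # Fail fast with a friendly message if the caller didn't pass a name
    if [ -z "$MY_NAME" ]; then
      echo "ERROR: please set MY_NAME, e.g. docker run -e MY_NAME=Kyler <image>"
      echo "Exiting."
      exit 1
    fi

    # Say hello, then exit cleanly so the container shuts down
    echo "Hello, $MY_NAME!"
    echo "This container ran successfully and will now shut down."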

On lines 1 and 2, we tell this script to run using bash, and "set -e", which means exit immediately if any command fails.

On line 5, we check to see if the MY_NAME variable is empty. If it is, we immediately print a custom error message and exit with an exit code of "1". In almost all software, an exit code other than 0 indicates something went wrong.

If we get past that check, then on lines 12 and 13 we print out a few lines of response using the MY_NAME variable. Let's not dig into that too much yet - it'll be more fun when we actually build and run it!

Speaking of that, let's do it.

Let's switch to our terminal and build our docker image.

Right off, let's set some variables we'll use in our commands. First, a local docker name. Docker calls this a "repository", but this is all local, and you can use whatever name makes sense to you.

Then let's set a docker tag to use when we publish to our ECR. We'll use a really easy pattern - "latest". Latest is also assigned to images in an ECR by default if they don't have a tag.

Finally, let's actually build this container image. Let's run docker build. The period tells Docker to use the current directory as the build context, and -t tags the image with the name we set above, "hello-name". If you get an error here about failing to connect to Docker, make sure you have the Docker service started!
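Those steps look roughly like this - the variable names are placeholders, so check the repo for the exact ones:

    # A local image name ("repository") and a tag for publishing later
    DOCKER_LOCAL_NAME=hello-name
    DOCKER_TAG=latest

    # Build from the Dockerfile in the current directory (.) and name the image (-t)
    docker build . -t "$DOCKER_LOCAL_NAME"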

Let's run "docker image ls" to look at the local docker images, and sure enough, we see our "hello-name" docker image built only a few seconds ago, awesome.

We built a docker image!

Let's test our docker image locally. The command to run a container is, intuitively, "docker run" and then the local image name.
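For our image, that's simply:

    # Run the image with no environment variables set
    docker run hello-name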

When we do that, we see our custom error message reminding us that we forgot to set the name variable. Remember, from the entrypoint script? Checking that we have all the variables we need before running is a super common pattern, and friendly custom error messages are a best practice on top of that. Let's set the variable and run it again.

We use the "-e" flag to set an environmental variable inside the launched container of MY_NAME=Kyler. Feel free to set this to your name and run it!

We get a different message, this one telling us that our container ran properly without issues, and it even politely said hello. Pretty cool.

Next up, let's push our image to the ECR!

We could hard-code the name of the ECR and the region we built everything in, but we have direct access to Terraform, and it can print these values as outputs. That's a really handy way to harvest info from Terraform, so let's do that instead.

We'll use the "terraform output" command to print some outputs from the configuration. Here's what that will look like.

Next let's run the same commands but trap the response as variables so we can use that info in other commands.
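Capturing them into shell variables looks something like this - "ecr_repo_url" and "aws_region" are placeholder output names here, so use whatever the repo's outputs.tf actually defines:

    # -raw prints the bare value with no quotes, perfect for shell variables
    ECR_REPO=$(terraform output -raw ecr_repo_url)
    AWS_REGION=$(terraform output -raw aws_region)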

The way to prep our image to be pushed to a repository is to map the local image name to a remote ECR name with a tag. The "docker tag" command does the trick, like this.
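Using the placeholder variables from above, that looks like this:

    # Point the local image name at the remote ECR repository, with our tag
    docker tag "$DOCKER_LOCAL_NAME" "$ECR_REPO:$DOCKER_TAG"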

And then run the "docker image ls" command to look at our images again. Now we can see what looks like a totally new image, but we didn't build a new image! If you look under "Image ID", you can see that this is the exact same image, but using a different "repository", or name.

Okay, let's push our image to the ECR! Or not! When we push, we get an auth denied message. Even though we have valid AWS credentials in the environment variables of this terminal, our push fails. That's because the docker tool can't use those AWS credentials to authenticate on its own.

However, clever AWS engineers are prepared for this. First let's grab our region, then we can run "aws ecr get-login-password" to print a password out, and we'll use some neat bash to trap the response and pipe it right into a "docker login" command.
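That one-liner looks roughly like this, carrying over the placeholder variables from the sketches above:

    # Fetch a temporary ECR password and pipe it straight into docker login.
    # "AWS" is the literal username ECR expects; ${ECR_REPO%%/*} strips the
    # repository path so we log in to just the registry hostname.
    aws ecr get-login-password --region "$AWS_REGION" | \
      docker login --username AWS --password-stdin "${ECR_REPO%%/*}"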

If you see "login succeeded" like we do, docker is now able to authenticate to the ECR, woot! Let's try that push again:

Okay, something is happening, it looks like it's pushing! Let's jump ahead to when it finishes, and we see a new error message which is very helpful - it says there is an IAM resource policy on our ECR that denies us from writing to it.

Let's switch to the AWS Console and find that ECR again, and pull up the ECR's "Permissions" page to find the resource policy. We do see a policy at the bottom that blocks my user from putting an image into the ECR, oh no!

Let's fix it in the console. There is a graphical policy editor specific to just ECR, but I'm more comfortable editing the json directly, so let's hit "edit policy JSON" in the top right. Let's find the second stanza that is blocking my user from uploading an image to this ECR and delete it. Then let's hit save in the bottom right.
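The exact policy from the video isn't reproduced here, but the stanza we're deleting would be a Deny statement along these lines inside the policy's Statement array - the Sid, principal ARN, and action list are illustrative:

    {
      "Sid": "DenyImagePush",
      "Effect": "Deny",
      "Principal": { "AWS": "arn:aws:iam::111122223333:user/kyler" },
      "Action": [
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload"
      ]
    }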

Let's switch back to our terminal and run the same command again. This time it'll succeed!

Let's spend a minute looking at our ECR to verify the new image is there.

When we click into the repository this time, we see the image we just pushed is now present. This ECR is internal, so anyone on our team can pull it and run it themselves.

We can also see over on the right that AWS has already scanned this image and there are some vulnerabilities within it. Let's select details and we can see a run-down of all the vulnerabilities AWS has found.

We could take this information and go bug hunting, but that's beyond the scope of this video.

In this video we covered a lot of ground! We went over common Docker and scripting patterns, built a docker image, and tested it. Then we figured out Docker authentication into AWS ECR, and THEN fixed an ECR resource policy issue that blocked us from uploading an image.

We confirmed the image was present in the ECR and very briefly saw that AWS has scanned our image for vulnerabilities and we could start remediating them if we wanted.

Please find all the code we ran in the GitHub repo displayed on-screen.

I hope this has been helpful to you! You can find more of these videos at iampulse.com, and you can find me @KyMidd on Twitter if you have any feedback or questions, and I look forward to next time.

Thanks all!
