While preparing for the AWS Certified DevOps Professional exam recently, I found it a bit tricky, and sometimes even obscure, to set up a CI/CD pipeline that automates an AWS ECS Blue/Green deployment, especially if you choose to use the AWS CLI.
Having failed to find a handy guide with an example covering the whole picture, I thought it would be useful to share my learning experience and the mistakes I made so you can avoid them.
In the following blog, I am going to share a demo of how to set up a three-stage pipeline that automates an ECS Blue/Green deployment with AWS CodePipeline. And yes, I use the AWS CLI :)
Wait, why AWS CLI instead of AWS Management Console?
Well, I can’t emphasise enough how important it is to use the AWS CLI rather than the AWS Management Console here. If you use the Management Console you will miss half of the important things you need to know, because the Console handles all the dependencies for you with a few checkbox clicks and fills in default values for the required options.
AWS CodePipeline workflow (as per this demo)
Before we jump into the demo, let’s do a quick preview of how CodePipeline works in this demo. The pipeline we set up consists of three stages, Source, Build and Deploy, which correspond to the AWS services CodeCommit, CodeBuild and CodeDeploy.
- CodePipeline is notified by a CloudWatch Events rule of a source code change made in CodeCommit, and kicks off the pipeline.
- The source code is fetched from the repo, zipped and placed in an S3 bucket. The zip file is called the output artefact of CodeCommit and will be ingested as the input artefact of CodeBuild.
- CodeBuild picks up the source code and starts building it as per the instructions provided in buildspec.yml.
- CodeBuild creates a docker image, pushes the image to your AWS ECR registry, generates an output artefact including three files: imageDetail.json, appspec.yml and taskdef.json (I’ll explain what those files are for and how to use them later on), and places them in the S3 bucket.
- CodeDeploy takes those three files as its input artefact. It registers a new task definition for the ECS service based on taskdef.json, with the URL of the newly built image in it, and maps the resulting task definition ARN to the “<TASK_DEFINITION>” placeholder in appspec.yml.
- CodeDeploy uses this new task definition to create new Fargate containers and attaches them to the ECS service as the replacement taskSet.
- Once the new tasks pass the health check of the Load Balancer, CodeDeploy starts shifting the prod traffic from the original taskSet to the replacement taskSet through the Load Balancer (target groups).
- Once 100% of the prod traffic is routed to the replacement taskSet, CodeDeploy waits for a configurable period of time and eventually terminates the original taskSet in the ECS service.
- You can stop the deployment and roll the traffic back to the original taskSet before it gets killed by CodeDeploy. CodeDeploy performs the rollback by killing the replacement taskSet.
Input and output artefacts
I have mentioned artefacts in several places above. The artefact is a key concept in CodePipeline. Stages exchange input and output artefacts that are stored as zip files in an S3 bucket (those files are encrypted by default). The output artefact of the previous stage is ingested as the input artefact of the next stage. That’s how CodePipeline coordinates actions across different stages.
Note: If you use CodeDeploy on its own without setting up a pipeline, i.e. as a standalone CodeDeploy project, CodeDeploy does not use artefacts at all. It directly uses the files that you upload to an S3 bucket.
Demo gets started…
Now I will walk you through all the steps to set up the pipeline using the AWS CLI. I am assuming that you have already fulfilled the prerequisites:
- All related IAM roles, policies, and permissions are already in place
- The ECS service must use either an Application Load Balancer (ALB) or a Network Load Balancer (NLB). In this demo, I use an ALB
- The ALB must have a listener that will take the prod traffic
- An optional listener on a different port can be added to the ALB, which CodeDeploy uses to route test traffic
- Two target groups should be created for the ALB, one for the blue taskSet and another for the green taskSet (a minimal CLI sketch for these follows this list)
- Your repo with your source code is created in CodeCommit. The source code should include three files in its root directory: appspec.yml, buildspec.yml and taskdef.json
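If you still need to create the target groups and the prod listener, a minimal AWS CLI sketch could look like the commands below. The names, ports, and the VPC and ALB identifiers are placeholders for this demo, not values produced by the pipeline:
# Two target groups, one for the blue taskSet and one for the green taskSet
# (target-type must be "ip" for Fargate tasks)
aws elbv2 create-target-group --name mysfits-tg-blue --protocol HTTP --port 8080 \
    --target-type ip --vpc-id <YOUR_VPC_ID>
aws elbv2 create-target-group --name mysfits-tg-green --protocol HTTP --port 8080 \
    --target-type ip --vpc-id <YOUR_VPC_ID>
# The prod listener, initially forwarding to the blue target group
aws elbv2 create-listener --load-balancer-arn <YOUR_ALB_ARN> --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<BLUE_TARGET_GROUP_ARN>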
Steps:
1. Create an ECS cluster, a task definition and a service with Fargate
The service must use the deployment controller “CODE_DEPLOY”
aws ecs create-cluster --cluster-name MythicalMysfits-Cluster
aws ecs register-task-definition \
    --cli-input-json file://./aws_cli/task-definition.json
aws ecs create-service \
    --cli-input-json file://./aws_cli/service-definition.json
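For reference, a minimal service-definition.json could look roughly like the sketch below. The cluster, service and container names, target group ARN, subnets and security group are placeholders; the actual file I use is in the demo files linked at the end. The important part is the deploymentController of “CODE_DEPLOY”:
{
  "cluster": "MythicalMysfits-Cluster",
  "serviceName": "MythicalMysfits-Service",
  "taskDefinition": "mythicalmysfits-service",
  "desiredCount": 2,
  "launchType": "FARGATE",
  "deploymentController": { "type": "CODE_DEPLOY" },
  "loadBalancers": [
    {
      "targetGroupArn": "<BLUE_TARGET_GROUP_ARN>",
      "containerName": "service",
      "containerPort": 8080
    }
  ],
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": ["subnet-xxxxxxxx", "subnet-yyyyyyyy"],
      "securityGroups": ["sg-xxxxxxxx"],
      "assignPublicIp": "ENABLED"
    }
  }
}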
2. Create a CodeDeploy application, and its deployment group
aws deploy create-application \
    --application-name MythicalMysfitsService
aws deploy create-deployment-group \
    --cli-input-json file://./cicd/ecs-deployment-group.json
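The ecs-deployment-group.json is where the Blue/Green behaviour is configured. A sketch of what it can look like is below; the service role ARN, listener ARN, target group names and wait time are placeholders, and the real values are in the demo files:
{
  "applicationName": "MythicalMysfitsService",
  "deploymentGroupName": "MythicalMysfitsService-DG",
  "serviceRoleArn": "<CODEDEPLOY_SERVICE_ROLE_ARN>",
  "deploymentConfigName": "CodeDeployDefault.ECSAllAtOnce",
  "deploymentStyle": {
    "deploymentType": "BLUE_GREEN",
    "deploymentOption": "WITH_TRAFFIC_CONTROL"
  },
  "blueGreenDeploymentConfiguration": {
    "deploymentReadyOption": { "actionOnTimeout": "CONTINUE_DEPLOYMENT" },
    "terminateBlueInstancesOnDeploymentSuccess": {
      "action": "TERMINATE",
      "terminationWaitTimeInMinutes": 5
    }
  },
  "ecsServices": [
    { "clusterName": "MythicalMysfits-Cluster", "serviceName": "MythicalMysfits-Service" }
  ],
  "loadBalancerInfo": {
    "targetGroupPairInfoList": [
      {
        "targetGroups": [ { "name": "mysfits-tg-blue" }, { "name": "mysfits-tg-green" } ],
        "prodTrafficRoute": { "listenerArns": [ "<PROD_LISTENER_ARN>" ] }
      }
    ]
  }
}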
3. Create a pipeline in CodePipeline with three stages
The deploy stage uses the action provider “CodeDeployToECS”.
aws codepipeline create-pipeline \
    --cli-input-json file://./cicd/code-pipeline-ecs-bluegreen.json
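Inside code-pipeline-ecs-bluegreen.json, the deploy action is the part that ties the artefact files together. Below is a sketch of that action; the artefact name “BuildArtifact” and the application and deployment group names are placeholders matching this demo:
{
  "name": "Deploy",
  "actionTypeId": {
    "category": "Deploy",
    "owner": "AWS",
    "provider": "CodeDeployToECS",
    "version": "1"
  },
  "configuration": {
    "ApplicationName": "MythicalMysfitsService",
    "DeploymentGroupName": "MythicalMysfitsService-DG",
    "TaskDefinitionTemplateArtifact": "BuildArtifact",
    "TaskDefinitionTemplatePath": "taskdef.json",
    "AppSpecTemplateArtifact": "BuildArtifact",
    "AppSpecTemplatePath": "appspec.yml",
    "Image1ArtifactName": "BuildArtifact",
    "Image1ContainerName": "IMAGE1_NAME"
  },
  "inputArtifacts": [ { "name": "BuildArtifact" } ]
}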
That’s it :) Now you have set up the pipeline and can test it by pushing some code commits. However, as you have probably noticed, the process above depends on the following important files:
- appspec.yml. Create it manually and place it in the root directory. It instructs CodeDeploy how to create the replacement tasks (containers) with Fargate during the deploy stage. It can also contain lifecycle hooks where you can add your own scripts or Lambda functions for different purposes, for example to run test scripts or to deregister tasks from the ALB (target group). Note: For TaskDefinition, do not change the <TASK_DEFINITION> placeholder text. This value automatically gets updated with the new “TaskDefinitionArn” when your pipeline runs.
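As a reference, a minimal appspec.yml for an ECS Blue/Green deployment can look like the snippet below. The container name and port are placeholders for this demo, and the hook is optional:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>   # do not change this placeholder
        LoadBalancerInfo:
          ContainerName: "service"          # placeholder, must match taskdef.json
          ContainerPort: 8080
# Optional lifecycle hooks, e.g. a Lambda function that validates test traffic:
# Hooks:
#   - AfterAllowTestTraffic: "<YOUR_VALIDATION_LAMBDA>"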
- buildspec.yml. Create it manually and place it in the root directory. It contains the instructions for CodeBuild to follow during the build stage, for example to create a docker image and push it to the ECR registry. Note: buildspec.yml must include these three files in its artifacts section: taskdef.json, appspec.yml and imageDetail.json.
artifacts:
  # Declare the three files CodeDeploy needs as the build output artefact
  files:
    - imageDetail.json
    - appspec.yml
    - taskdef.json
- taskdef.json. Create it manually and place it in the root directory. It contains the configuration of the replacement taskSet used during the deploy stage. CodeDeploy uses this configuration to register a new task definition for the ECS service and to launch the new tasks (containers). Note: the AWS documentation says you don’t need to change the “IMAGE1_NAME” placeholder text in the image field of taskdef.json; CodeDeploy is supposed to take the image URL from imageDetail.json and map it to “IMAGE1_NAME”. However, I could not make that work. CodeDeploy always failed with an error along the lines of “Exception, failed to reference the image”, so I had to use the real image URL pointing to my ECR registry instead. If I’m wrong please let me know :)
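For completeness, here is a sketch of what taskdef.json can look like. The family, role ARN, container name and port are placeholders; the image field is shown with the “IMAGE1_NAME” placeholder described in the AWS documentation, which in my case I had to replace with the real ECR image URL as mentioned above:
{
  "family": "mythicalmysfits-service",
  "networkMode": "awsvpc",
  "requiresCompatibilities": [ "FARGATE" ],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "<ECS_TASK_EXECUTION_ROLE_ARN>",
  "containerDefinitions": [
    {
      "name": "service",
      "image": "<IMAGE1_NAME>",
      "essential": true,
      "portMappings": [ { "containerPort": 8080, "protocol": "tcp" } ]
    }
  ]
}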
- imageDetail.json. Created by CodeBuild as part of its output artefact. It contains the URL of the newly built Docker image. CodeDeploy uses this URL to fetch the image from ECR and create the new tasks (containers). It is generated in the post_build phase of buildspec.yml:
post_build:
  commands:
    ...
    - printf '{"ImageURI":"%s"}' $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/mythicalmysfits/service:latest > imageDetail.json
End
I hope this blog helps you understand most of what is involved in setting up CodePipeline for an ECS Blue/Green deployment using the AWS CLI.
For a complete list of the files that I use in this demo, take a look here.
Feel free to ping me if you have any questions, feedback or suggestions.