CI/CD ♾️ in AWS is composed of the following services. Most enterprises use some or all of them together to provide the Continuous Integration and Continuous Delivery experience.

  • ✍️ CodeCommit – version control
  • 🚰 CodePipeline – automating releases from code to deployment
  • 🏗️ CodeBuild – building and testing code
  • 🚀 CodeDeploy – deploying the code to EC2 instances, Elastic Beanstalk, ECS…
  • ✨ CodeStar – manage software development activities in one place
  • 📦 CodeArtifact – store, publish, and share software packages
  • 🔎 CodeGuru – automated code reviews using Machine Learning

AWS CodeCommit

  • Interactions are done using Git (standard)
  • Authentication:
    • SSH Keys – IAM users can configure SSH keys in the IAM console
    • HTTPS – with the AWS CLI credential helper or Git credentials for an IAM user
  • Authorization
    • IAM policies to manage users/roles permissions to repositories
  • Encryption
    • Repositories are automatically encrypted at rest using AWS KMS
    • Encrypted in transit (can only use HTTPS or SSH – both secure)
  • Cross-account Access
    • Do NOT share your SSH keys or your AWS credentials
    • Use an IAM Role in your AWS account and use AWS STS (AssumeRole API)
  • By default, a user who has push permissions to a CodeCommit repository can contribute to any branch
  • Use IAM policies to restrict users to push or merge code to a specific branch. Example: only senior developers can push to the production branch.
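Such a branch restriction can be implemented with a Deny policy using the codecommit:References condition key. A minimal sketch (the account ID, repository name, and branch name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "codecommit:GitPush",
        "codecommit:DeleteBranch",
        "codecommit:PutFile",
        "codecommit:MergePullRequestByFastForward",
        "codecommit:MergePullRequestBySquash",
        "codecommit:MergePullRequestByThreeWay"
      ],
      "Resource": "arn:aws:codecommit:us-east-1:111111111111:MyDemoRepo",
      "Condition": {
        "StringEqualsIfExists": {
          "codecommit:References": ["refs/heads/production"]
        },
        "Null": {
          "codecommit:References": "false"
        }
      }
    }
  ]
}
```

The Null condition keeps the statement from blocking Git operations that do not reference a branch at all.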

Note: Resource Policy is not supported yet.

  • You can monitor CodeCommit events in EventBridge (near real-time). Anytime an event such as pullRequestCreated, pullRequestStatusChanged, referenceCreated, or commentOnCommitCreated occurs, you can react to it via EventBridge like so:

    CodeCommit – EventBridge
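As a hedged sketch, an EventBridge rule for pull request events could use an event pattern like the following (which events you match is up to you):

```json
{
  "source": ["aws.codecommit"],
  "detail-type": ["CodeCommit Pull Request State Change"],
  "detail": {
    "event": ["pullRequestCreated", "pullRequestStatusChanged"]
  }
}
```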

  • You can migrate a project hosted on another Git repository (e.g., GitHub, GitLab…) to a CodeCommit repository

    CodeCommit Migration

CodeCommit vs GitHub

AWS CodeBuild

  • A fully managed continuous integration (CI) service

  • Continuous scaling (no servers to manage or provision – no build queue)

  • Compile source code, run tests, produce software packages…

  • Alternative to other build tools (e.g., Jenkins)

  • Charged per minute for compute resources (time it takes to complete the builds)

  • Leverages Docker under the hood for reproducible builds

  • Use prepackaged Docker images or create your own custom Docker image

  • Security:

    • Integration with KMS for encryption of build artifacts
    • IAM for CodeBuild permissions, and VPC for network security
    • AWS CloudTrail for API calls logging
  • Source – CodeCommit, S3, Bitbucket, GitHub

  • Build instructions: defined in a buildspec.yml file at the project root

  • Output logs can be stored in Amazon S3 & CloudWatch Logs

  • Use CloudWatch Metrics to monitor build statistics

  • Use EventBridge to detect failed builds and trigger notifications

  • Use CloudWatch Alarms to be notified when failure thresholds are exceeded

  • Build Projects can be defined within CodePipeline or CodeBuild

How it Works

CodeBuild – Operation

  • buildspec.yml file must be at the root of your code
  • env – define environment variables
    • variables – plaintext variables
    • parameter-store – variables stored in SSM Parameter Store
    • secrets-manager – variables stored in AWS Secrets Manager
  • phases – specify commands to run:
    • install – install dependencies you may need for your build
    • pre_build – commands to execute before the build
    • build – actual build commands
    • post_build – finishing touches (e.g., zip output)
  • artifacts – what to upload to S3 (encrypted with KMS)
  • cache – files to cache (usually dependencies) to S3 for future build speedup
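Putting the sections above together, a minimal buildspec.yml sketch for a hypothetical Node.js project (names, paths, and versions are illustrative):

```yaml
version: 0.2
env:
  variables:
    NODE_ENV: "production"           # plaintext variable
  parameter-store:
    DB_HOST: "/myapp/db-host"        # resolved from SSM Parameter Store at build time
phases:
  install:
    runtime-versions:
      nodejs: 18
    commands:
      - npm ci
  pre_build:
    commands:
      - npm test
  build:
    commands:
      - npm run build
  post_build:
    commands:
      - zip -r app.zip dist/
artifacts:
  files:
    - app.zip
cache:
  paths:
    - 'node_modules/**/*'
```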
Environment Variables
  • Default Environment Variables
    • Defined and provided by AWS
  • Custom Environment Variables
    • Static – defined at build time (override using the start-build API call)
    • Dynamic – using SSM Parameter Store and Secrets Manager
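As a sketch of the static override, the payload below matches the shape that boto3's codebuild.start_build expects; the project and variable names are hypothetical, and the actual AWS call is left commented out:

```python
# Build the kwargs for CodeBuild's start-build API, overriding a static
# environment variable for one build only (project/variable names are made up).

def start_build_args(project, overrides):
    """Return kwargs for codebuild.start_build with env var overrides."""
    return {
        "projectName": project,
        "environmentVariablesOverride": [
            {"name": k, "value": v, "type": "PLAINTEXT"}
            for k, v in overrides.items()
        ],
    }

args = start_build_args("my-project", {"RELEASE_STAGE": "staging"})
# import boto3
# boto3.client("codebuild").start_build(**args)
```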
Inside VPC
  • By default, your CodeBuild containers are launched outside your VPC
  • They cannot access resources inside a VPC
  • You can specify a VPC configuration:
    • VPC ID
    • Subnet IDs
    • Security Group IDs
  • Then your build can access resources in your VPC (e.g., RDS, ElastiCache, EC2, ALB…)
  • Use cases: integration tests, data query, internal load balancers…
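In the CodeBuild project definition (e.g., the CreateProject API), the VPC configuration is a small block like this sketch (all IDs are placeholders):

```json
{
  "vpcConfig": {
    "vpcId": "vpc-0123abcd",
    "subnets": ["subnet-1234abcd", "subnet-5678abcd"],
    "securityGroupIds": ["sg-12345678"]
  }
}
```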
Validate Pull Requests
  • Validate proposed code changes in PRs before they get merged 🔀
  • Ensure high level of code quality and avoid code conflicts

CodeBuild – Validate Pull Requests

Test Reports
  • Contains details about tests that are run during builds
  • Unit tests, configuration tests, functional tests
  • Create your test cases with any test framework that can create report files in the following format:
    • JUnit XML, NUnit XML, NUnit3 XML
    • Cucumber JSON, TestNG XML, Visual Studio TRX
  • Create a test report and add a Report Group name in buildspec.yml file with information about your tests
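A hedged buildspec.yml fragment for a test report (report group name, directory, and glob are illustrative):

```yaml
reports:
  my-report-group:                  # report group name (created if it doesn't exist)
    files:
      - "**/*"
    base-directory: "build/test-results"
    file-format: "JUNITXML"
```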

AWS CodeDeploy

AWS CodeDeploy is a fully managed service that automates your software deployments to a variety of compute services, including Amazon EC2, AWS Fargate, AWS Lambda, and on-premises servers.

Deployment Types

When using CodeDeploy, there are two types of deployments available to you: in-place and blue/green.

In-place deployment

  • The application on each instance is stopped, the latest application revision is installed, and the new version of the application is started and validated.
  • Only deployments that use the Amazon EC2 or on-premises compute platform can use in-place deployments.

CodeDeploy In-Place

Blue/green deployment

  • A blue/green deployment is used to update your applications while minimizing interruptions caused by the change to a new application version.
  • CodeDeploy provisions your new application version alongside the old version before rerouting your production traffic.
  • This means during deployment, you'll have two versions of your application running at the same time.
  • When using a blue/green deployment, you have several options for shifting traffic to the new green environment: Linear, Canary, AllAtOnce.
  • All Lambda and Amazon ECS deployments are blue/green. An Amazon EC2 or on-premises deployment can be in-place or blue/green.

CodeDeploy Blue/Green

How it works

To automate the deployment to the appropriate compute resources, CodeDeploy needs to know:

  • 👉 which files to copy and which scripts to run ➡️ appspec.yml
  • 👉 how to deploy ➡️ deployment configuration
  • 👉 where to deploy ➡️ deployment group

CodeDeploy Overview

The concept of an application is used by CodeDeploy to ensure it knows what to deploy (code), where to deploy (deployment group), and how to deploy (deployment configuration).

Deployment group
  • A deployment group specifies the environment targeted by the deployment. The information it contains is specific to the target compute platform: AWS Lambda, Amazon ECS, Amazon EC2, or on-premises.
  • A CodeDeploy application can have one or more deployment groups.
  • Security needs to be assigned so the environment can communicate with CodeDeploy.
  • The CodeDeploy agent is needed if you are deploying to Amazon EC2 or an on-premises compute platform. It is installed and configured on the target instances. It accepts and executes requests on behalf of CodeDeploy.
Deployment configuration
  • A deployment configuration is a set of deployment rules and deployment success and failure conditions used by AWS CodeDeploy during a deployment.

  • CodeDeploy can deploy your application on EC2 instances, ECS containers, Lambda functions, and even an on-premises environment. Each deployment platform requires a deployment configuration. CodeDeploy has predefined deployment configurations that are unique to each compute platform:

Amazon EC2/On-premises

  • CodeDeployDefault.AllAtOnce
    • Attempts to deploy an application revision to as many instances as possible at once
  • CodeDeployDefault.HalfAtATime
    • Deploys to up to half of the instances at a time (with fractions rounded down)
  • CodeDeployDefault.OneAtATime
    • Deploys the application revision to only one instance at a time

Amazon ECS

  • CodeDeployDefault.ECSLinear10PercentEvery1Minutes
    • Shifts 10 percent of traffic every minute until all traffic is shifted
  • CodeDeployDefault.ECSLinear10PercentEvery3Minutes
    • Shifts 10 percent of traffic every 3 minutes until all traffic is shifted
  • CodeDeployDefault.ECSCanary10Percent5Minutes
    • Shifts 10 percent of traffic in the first increment, and the remaining 90 percent is deployed 5 minutes later
  • CodeDeployDefault.ECSCanary10Percent15Minutes
    • Shifts 10 percent of traffic in the first increment, and the remaining 90 percent is deployed 15 minutes later
  • CodeDeployDefault.ECSAllAtOnce
    • Shifts all traffic to the updated Amazon ECS container at once

AWS Lambda

  • CodeDeployDefault.LambdaLinear10PercentEvery1Minute
    • Shifts 10 percent of traffic every minute until all traffic is shifted
  • CodeDeployDefault.LambdaLinear10PercentEvery2Minutes
    • Shifts 10 percent of traffic every 2 minutes until all traffic is shifted
  • CodeDeployDefault.LambdaLinear10PercentEvery3Minutes
    • Shifts 10 percent of traffic every 3 minutes until all traffic is shifted
  • CodeDeployDefault.LambdaLinear10PercentEvery10Minutes
    • Shifts 10 percent of traffic every 10 minutes until all traffic is shifted
  • CodeDeployDefault.LambdaCanary10Percent5Minutes
    • Shifts 10 percent of traffic in the first increment, and the remaining 90 percent is deployed 5 minutes later
  • CodeDeployDefault.LambdaCanary10Percent10Minutes
    • Shifts 10 percent of traffic in the first increment, and the remaining 90 percent is deployed 10 minutes later
  • CodeDeployDefault.LambdaCanary10Percent15Minutes
    • Shifts 10 percent of traffic in the first increment, and the remaining 90 percent is deployed 15 minutes later
  • CodeDeployDefault.LambdaCanary10Percent30Minutes
    • Shifts 10 percent of traffic in the first increment, and the remaining 90 percent is deployed 30 minutes later
  • CodeDeployDefault.LambdaAllAtOnce
    • Shifts all traffic to the updated Lambda functions at once
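As a rough sketch of how the Linear and Canary styles above differ, assuming the first increment is shifted at minute 0 (the parameter defaults mimic the 10PercentEvery3Minutes and 10Percent5Minutes variants):

```python
# Percentage of traffic on the new version at a given minute, for the two
# traffic-shifting styles described above (timing assumption: first shift at t=0).

def linear_traffic(minute, step_pct=10, interval_min=3):
    """Linear: shift step_pct more traffic every interval_min minutes."""
    return min(100, step_pct * (minute // interval_min + 1))

def canary_traffic(minute, first_pct=10, wait_min=5):
    """Canary: shift first_pct now, then the remainder after wait_min minutes."""
    return first_pct if minute < wait_min else 100

print(linear_traffic(0))   # 10
print(linear_traffic(27))  # 100
print(canary_traffic(4))   # 10
print(canary_traffic(5))   # 100
```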
  • Identify the correct version 🔖 (revision) of the code.
  • With the code, you provide an application specification (AppSpec) file, appspec.yml, which is used to manage each deployment. During deployment, CodeDeploy looks for your AppSpec file in the root directory of the application's source.
  • The AppSpec file specifies where to copy the code and how to get it running.
  • The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file.
  • Lifecycle hooks are areas where you can specify Lambda functions or local scripts to run tests, healthchecks and verify the deployment of your application was successful.
  • Some tests might be as simple as checking a dependency before an application is installed using the BeforeInstall hook. Some might be as complex as checking your application's output before allowing production traffic to flow through using the BeforeAllowTraffic hook.
  • The structure of the AppSpec file can differ depending on the compute platform you choose:



  • AfterAllowTestTraffic – run an AWS Lambda function after the test ELB listener serves traffic to the replacement ECS task set, e.g., to perform health checks on the application and trigger a rollback if the health checks are not successful

  • BeforeAllowTraffic and AfterAllowTraffic hooks can be used to check the health of the Lambda function
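A minimal sketch of such a validation hook as a Lambda function (the health check itself is hypothetical, and the real put_lifecycle_event_hook_execution_status call back to CodeDeploy is left commented out):

```python
# Sketch of a BeforeAllowTraffic / AfterAllowTraffic validation hook.
# CodeDeploy invokes the function with a DeploymentId and a
# LifecycleEventHookExecutionId; the function must report a status back.

def run_health_check():
    # Hypothetical check of the replacement version (e.g., hit a test endpoint).
    return True

def handler(event, context):
    status = "Succeeded" if run_health_check() else "Failed"
    # Report the result back to CodeDeploy (left commented out in this sketch):
    # import boto3
    # boto3.client("codedeploy").put_lifecycle_event_hook_execution_status(
    #     deploymentId=event["DeploymentId"],
    #     lifecycleEventHookExecutionId=event["LifecycleEventHookExecutionId"],
    #     status=status,
    # )
    return status
```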

Here is an example of an AppSpec file for an in-place deployment to an EC2 instance.

version: 0.0
os: linux
files:
  - source: Config/config.txt
    destination: /webapps/Config
  - source: source
    destination: /webapps/myApp
hooks:
  BeforeInstall:
    - location: Scripts/
    - location: Scripts/
  AfterInstall:
    - location: Scripts/
      timeout: 180
  ApplicationStart:
    - location: Scripts/
      timeout: 3600
  ValidateService:
    - location: Scripts/
      timeout: 3600
      runas: codedeployuser

Here is an example of an AppSpec file written in YAML for deploying an Amazon ECS service.

version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "arn:aws:ecs:us-east-1:<account-id>:task-definition/task-definition-name:1"
        LoadBalancerInfo:
          ContainerName: "SampleApplicationName"
          ContainerPort: 80
        # Optional properties
        PlatformVersion: "LATEST"
        NetworkConfiguration:
          AwsvpcConfiguration:
            Subnets: ["subnet-1234abcd","subnet-5678abcd"]
            SecurityGroups: ["sg-12345678"]
            AssignPublicIp: "ENABLED"
        CapacityProviderStrategy:
          - Base: 1
            CapacityProvider: "FARGATE_SPOT"
            Weight: 2
          - Base: 0
            CapacityProvider: "FARGATE"
            Weight: 1
Hooks:
  - BeforeInstall: "LambdaFunctionToValidateBeforeInstall"
  - AfterInstall: "LambdaFunctionToValidateAfterInstall"
  - AfterAllowTestTraffic: "LambdaFunctionToValidateAfterTestTrafficStarts"
  - BeforeAllowTraffic: "LambdaFunctionToValidateBeforeAllowingProductionTraffic"
  - AfterAllowTraffic: "LambdaFunctionToValidateAfterAllowingProductionTraffic"

Here is an example of an AppSpec file written in YAML for deploying a Lambda function version.

version: 0.0
Resources:
  - myLambdaFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: "myLambdaFunction"
        Alias: "myLambdaFunctionAlias"
        CurrentVersion: "1"
        TargetVersion: "2"
Hooks:
  - BeforeAllowTraffic: "LambdaFunctionToValidateBeforeTrafficShift"
  - AfterAllowTraffic: "LambdaFunctionToValidateAfterTrafficShift"

Health checks are tests performed on resources. These resources might be your application, compute resources like Amazon Elastic Compute Cloud (Amazon EC2) instances, and even your Elastic Load Balancers.

Health checks can be implemented in the deployment of your application in several different ways. One is with CodeDeploy and the help of your application specification (AppSpec) file.

Liveness checks

Liveness checks test the basic connectivity to a service and the presence of a server process. They are often performed by a load balancer or external monitoring agent, and they are unaware of the details about how an application works.

Some examples of liveness checks include:

  • Tests that confirm a server is listening on its expected port and accepting new TCP connections
  • Tests that perform basic HTTP requests and make sure the server responds with an HTTP 200 status code
  • Status checks for Amazon EC2 that test for network reachability necessary for any system to operate.
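A minimal sketch of both liveness checks in Python (the host and port are whatever service you monitor):

```python
# Liveness checks: (1) is anything accepting TCP connections on the port,
# and (2) does a basic HTTP GET come back with 200 OK.
import socket
from http.client import HTTPConnection

def tcp_alive(host, port, timeout=2.0):
    """True if something is listening and accepting TCP connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_alive(host, port, path="/", timeout=2.0):
    """True if a basic HTTP request returns 200 OK."""
    try:
        conn = HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        return conn.getresponse().status == 200
    except OSError:
        return False
```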
Local health checks

Local health checks go further than liveness checks to verify that the application is likely to be able to function.

Some examples of situations local health checks can identify are:

  • Ability to write to or read from disk.
  • Missing support processes: Hosts missing their monitoring daemons might leave operators unaware of the health of their services.
Dependency health checks

Dependency health checks thoroughly inspect the ability of an application to interact with its adjacent systems. These checks ideally catch problems local to the server, such as expired credentials, that are preventing it from interacting with a dependency. They can also have false positives when there are problems with the dependency itself.

  • A process might asynchronously look for updates to metadata or configuration but the update mechanism might be broken on a server. The server can become significantly out of sync with its peers.
  • Inability to communicate with peer servers or dependencies. Software issues, such as deadlocks or bugs in connection pools, can also hinder network communication.
  • Other unusual software bugs that require a process bounce: Deadlocks, memory leaks, or state corruption bugs can make a server spew errors.
Anomaly detection

Anomaly detection checks all servers in a fleet to determine if any server is behaving oddly compared to its peers. By aggregating monitoring data per server, you can continuously compare error rates, latency data, or other attributes to find anomalous servers and automatically remove them from service. Anomaly detection can find divergence in the fleet that a server cannot detect about itself, such as the following:

  • Clock skew: When servers are under high load, their clocks have been known to skew abruptly and drastically. Security measures, such as those used to evaluate signed requests, require that the time on a client’s clock is within five minutes of the actual time. If it is not, requests fail.
  • Old code: A server might disconnect from the network or power off for a long time. When it comes back online, it could be running dangerously outdated code that is incompatible with the rest of the fleet.
  • Any unanticipated failure mode: Sometimes, servers fail in such a way that they return errors they identify as the client's instead of theirs (HTTP 400 instead of 500). Servers might slow down instead of failing, or they might respond faster than their peers, which is a sign they're returning false responses to their callers.
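A toy sketch of the aggregation idea: compare each server's error rate to the fleet median and flag large outliers (server IDs, rates, and thresholds are made up):

```python
# Flag servers whose error rate diverges sharply from the fleet median.
from statistics import median

def anomalous_servers(error_rates, factor=3.0, floor=0.01):
    """Return server IDs whose error rate exceeds `factor` times the fleet
    median (with a small floor so an all-zero fleet isn't divided weirdly)."""
    baseline = max(median(error_rates.values()), floor)
    return sorted(s for s, r in error_rates.items() if r > factor * baseline)

rates = {"i-a": 0.01, "i-b": 0.012, "i-c": 0.30, "i-d": 0.009}
print(anomalous_servers(rates))  # ['i-c']
```

Flagged servers could then be removed from service automatically, e.g., by failing their load balancer health checks.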

While health checks can identify problems, the key to a successful continuous delivery strategy is to also implement remediations when these tests fail. You can build logic into your tests that indicate to CodeDeploy that the deployment was unsuccessful and start the rollback process.

  • Rolling deployment

    • With a rolling deployment, your production fleet is divided into groups so the entire fleet isn't upgraded all at once. Your fleet will run both the new and existing software versions during the deployment process.

    • This method enables a zero-downtime update. If the deployment fails, only the upgraded portion of the fleet will be affected.

    • With rolling deployments, you are updating your live production environment.

You can use a variety of rolling deployment options through CodeDeploy:

  • CodeDeployDefault.OneAtATime – Deploys the application revision to only one instance at a time.

  • CodeDeployDefault.HalfAtATime – Deploys to as many as half of the instances at a time (with fractions rounded down). The overall deployment succeeds if the application revision is deployed to at least half of the instances (with fractions rounded up). Otherwise, the deployment fails.

  • Custom deployment configuration – Deploys a set number or percentage of resources selected by you, at time intervals you specify.
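The half-at-a-time rounding rules can be sketched as:

```python
# HalfAtATime rounding: deploy to half the fleet rounded DOWN per batch, but
# require half rounded UP to succeed for the deployment to pass.
import math

def half_at_a_time(total_instances):
    batch_size = total_instances // 2                     # half, rounded down
    required_successful = math.ceil(total_instances / 2)  # half, rounded up
    return batch_size, required_successful

print(half_at_a_time(9))  # (4, 5)
```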

There are some obvious pros and cons to rolling deployments:

  • Pros:
    • Zero downtime
    • Lower overall risk of bringing down your entire production application
    • No additional resources required, which minimizes deployment costs
  • Cons:
    • Speed: Because resources are deployed in small increments, it could take a long time to deploy all the necessary hosts in a large environment.
    • Complexity: Two different application versions operate at once in the same environment during the deployment. You will need to make sure your application can handle interoperability between these versions.
    • Rollback: If a resource in a rolling deployment fails to deploy correctly, reverting to a previous version can be complicated. It might require you to redeploy the previous application version in a new resource or fix the failed resource manually.

With a blue/green deployment, you provision a new set of containers on which CodeDeploy installs the latest version of your application. CodeDeploy then reroutes load balancer traffic from an existing set of containers running the previous version of your application to the new set of containers running the latest version. After traffic is rerouted to the new containers, the existing containers can be terminated. Blue/green deployments allow you to test the new application version 🔖 before sending production traffic to it.

CodeDeploy – Blue/Green

If there is an issue with the newly deployed application version, you can roll back to the previous version faster than with in-place deployments.

AWS CodePipeline

AWS CodePipeline is a fully managed continuous delivery service that enables you to model, visualize, and automate the steps required to release your software.

  • Visual Workflow to orchestrate your CI/CD
  • 📁 Source – CodeCommit, ECR, S3, Bitbucket, GitHub
  • 🏗️ Build – CodeBuild, Jenkins, CloudBees, TeamCity
  • 🧪 Test – CodeBuild, AWS Device Farm, 3rd-party tools…
  • 🚀 Deploy – CodeDeploy, Elastic Beanstalk, CloudFormation, ECS, S3…
  • 📞 Invoke – Lambda, Step Functions
  • Consists of stages:
    • Each stage can have sequential actions and/or parallel actions
    • Example: Build ➡️ Test ➡️ Deploy ➡️ Load Testing ➡️ …
    • Manual approval can be defined at any stage
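A trimmed sketch of a pipeline definition (as passed to the CreatePipeline API) with two sequential stages; all names and ARNs are placeholders:

```json
{
  "pipeline": {
    "name": "MyPipeline",
    "roleArn": "arn:aws:iam::111111111111:role/MyPipelineRole",
    "artifactStore": {"type": "S3", "location": "my-pipeline-artifacts"},
    "stages": [
      {
        "name": "Source",
        "actions": [{
          "name": "SourceAction",
          "actionTypeId": {"category": "Source", "owner": "AWS",
                           "provider": "CodeCommit", "version": "1"},
          "configuration": {"RepositoryName": "MyRepo", "BranchName": "main"},
          "outputArtifacts": [{"name": "SourceOutput"}]
        }]
      },
      {
        "name": "Build",
        "actions": [{
          "name": "BuildAction",
          "actionTypeId": {"category": "Build", "owner": "AWS",
                           "provider": "CodeBuild", "version": "1"},
          "configuration": {"ProjectName": "MyBuildProject"},
          "inputArtifacts": [{"name": "SourceOutput"}],
          "outputArtifacts": [{"name": "BuildOutput"}]
        }]
      }
    ]
  }
}
```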

CodePipeline – Artifacts