
CI/CD Pipeline Explained: From Code Commit to Production Deploy

Manual deployments mean Friday afternoon anxiety, late-night hotfixes, and the classic "it worked on my machine" postmortem. CI/CD pipelines automate the entire path from code commit to production, catching bugs earlier and shipping faster. Here is how they work, end to end.

The Problem CI/CD Solves

Before CI/CD, a typical software release looked like this: developers worked in isolation on long-lived feature branches, merged everything at the last minute, spent two days resolving conflicts, handed a build artifact to an ops team, and crossed their fingers during a scheduled maintenance window. If something went wrong, rollback meant rebuilding from scratch.

The pain was real and well-documented. Studies by DORA (DevOps Research and Assessment) consistently show that high-performing engineering teams deploy multiple times per day with lower failure rates than teams that deploy infrequently. Counterintuitively, deploying more often tends to mean fewer production failures, because each change is smaller and easier to reason about.

CI/CD is the practice that makes high-frequency, low-risk deployments possible.

CI vs CD: What Is the Difference?

The acronym is often used loosely. Here is the precise meaning of each term:

  • Continuous Integration (CI): Every code push triggers an automated build and test run. The goal is to detect integration problems immediately, before they accumulate. Developers merge to a shared branch frequently - at least daily.
  • Continuous Delivery (CD): Every change that passes CI is automatically packaged and deployed to a staging/pre-production environment. The artifact is always in a deployable state. Releasing to production may still require a human click (a release manager approves).
  • Continuous Deployment (also CD): Every change that passes all automated tests is deployed to production automatically, with no human gate. This is what companies like Amazon and Netflix practice - thousands of deployments per day.

Continuous Delivery requires a human to approve production deployments. Continuous Deployment removes that gate entirely. Both require Continuous Integration as a foundation.

The Anatomy of a CI/CD Pipeline

A pipeline is a sequence of stages. Each stage must pass before the next begins. A typical pipeline for a web service looks like this:

  1. Source: A git push or pull request triggers the pipeline.
  2. Build: Compile code, install dependencies, run linters, build Docker image.
  3. Test: Unit tests, integration tests, code coverage gates.
  4. Security scan: SAST (static analysis), dependency vulnerability scan (Dependabot, Snyk).
  5. Publish: Push artifact (Docker image, npm package, JAR) to a registry.
  6. Deploy to staging: Automated rollout to a staging environment.
  7. Smoke tests / E2E tests: Verify the deployed staging environment works end to end.
  8. Deploy to production: Automated (continuous deployment) or gated (continuous delivery).
  9. Post-deploy verification: Healthcheck, smoke test, alert monitoring.

GitHub Actions: A Complete Example

GitHub Actions is one of the most widely adopted CI/CD platforms for open source and SaaS projects. Here is a complete workflow for a Node.js API:

# .github/workflows/ci-cd.yml
name: CI/CD Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run linter
        run: npm run lint

      - name: Run unit tests
        run: npm test -- --coverage

      - name: Upload coverage
        uses: codecov/codecov-action@v4

  build-and-push:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    permissions:
      contents: read
      packages: write

    steps:
      - uses: actions/checkout@v4

      - name: Login to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}

  deploy-staging:
    needs: build-and-push
    runs-on: ubuntu-latest
    environment: staging

    steps:
      - uses: actions/checkout@v4

      # Assumes a KUBE_CONFIG secret containing a base64-encoded kubeconfig
      - name: Configure kubectl
        run: |
          mkdir -p ~/.kube
          echo "${{ secrets.KUBE_CONFIG }}" | base64 -d > ~/.kube/config

      - name: Deploy to staging
        run: |
          kubectl set image deployment/api \
            api=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }} \
            --namespace staging

      - name: Wait for rollout
        run: kubectl rollout status deployment/api --namespace staging --timeout=5m

      # Smoke tests need the repo and dependencies on the runner
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run smoke tests
        run: npm run test:smoke -- --env staging

  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: production   # requires manual approval in GitHub

    steps:
      # Same base64-encoded kubeconfig secret as the staging job
      - name: Configure kubectl
        run: |
          mkdir -p ~/.kube
          echo "${{ secrets.KUBE_CONFIG }}" | base64 -d > ~/.kube/config

      - name: Deploy to production
        run: |
          kubectl set image deployment/api \
            api=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }} \
            --namespace production

      - name: Wait for rollout
        run: kubectl rollout status deployment/api --namespace production --timeout=5m

Testing Strategy: The Pyramid

Not all tests are equal. The test pyramid is the guiding principle for CI speed and reliability:

  • Unit tests (base, fast): Test individual functions in isolation with mocked dependencies. Should run in seconds. You want hundreds of these.
  • Integration tests (middle): Test interactions between components - your service calling a real database or cache. Slower, but catch a different class of bugs. Run against a containerized test database (GitHub Actions services make this easy).
  • End to end tests (top, slow): Test the full user journey through a deployed environment. Use tools like Playwright or Cypress. Expensive to run and maintain - keep them focused on critical paths only.

A CI pipeline that runs only unit tests is fast but misses integration bugs. One that runs full E2E tests on every PR quickly becomes a bottleneck. A balanced pyramid gives you both speed and confidence.
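As a sketch of the middle layer, a GitHub Actions job can spin up a throwaway Postgres container with the `services` keyword; the job name, database choice, and `test:integration` script here are illustrative assumptions:

```yaml
integration-test:
  runs-on: ubuntu-latest
  services:
    postgres:
      image: postgres:16
      env:
        POSTGRES_PASSWORD: test
      ports: ['5432:5432']
      options: >-
        --health-cmd pg_isready
        --health-interval 10s
        --health-timeout 5s
        --health-retries 5
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: '20'
        cache: 'npm'
    - run: npm ci
    - name: Run integration tests
      env:
        # Points at the service container published on the runner's localhost
        DATABASE_URL: postgres://postgres:test@localhost:5432/postgres
      run: npm run test:integration   # assumed npm script
```

The health-check options matter: without them the tests can start before Postgres is ready to accept connections.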

Deployment Strategies

How you deploy matters as much as how you build. The main strategies:

Rolling Deployment

Replace old instances with new ones gradually. If you have 10 pods, replace 2 at a time until all are updated. Traffic is served from a mix of old and new versions during the rollout. Simple and supported natively by Kubernetes.
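In Kubernetes, rolling behavior is configured on the Deployment itself. A minimal sketch matching the 10-pod example above (image name and labels are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2   # take down at most 2 old pods at a time
      maxSurge: 2         # allow at most 2 extra new pods during the rollout
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: ghcr.io/example/api:latest   # placeholder image
```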

Blue/Green Deployment

Run two identical production environments: "blue" (current) and "green" (new). Deploy the new version to green, run smoke tests, then switch traffic instantly. If green fails, switch back to blue in seconds. Requires double the infrastructure but gives instant rollback.
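One common way to implement the switch on Kubernetes is a Service whose label selector picks the live color; two Deployments (`api-blue`, `api-green`) carry a `version` label, and editing one line moves all traffic. Names and ports here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
    version: green   # flip between "blue" and "green" to cut over or roll back
  ports:
    - port: 80
      targetPort: 8080
```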

Canary Deployment

Route a small percentage of traffic (e.g. 5%) to the new version. Monitor error rates and latency. If metrics look good, gradually increase the percentage. If something goes wrong, route 100% back to the old version. Argo Rollouts and Flagger automate this pattern for Kubernetes.
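An abbreviated Argo Rollouts sketch of that pattern (a Rollout also needs the usual selector and pod template, omitted here for brevity):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: api
spec:
  strategy:
    canary:
      steps:
        - setWeight: 5               # start with 5% of traffic on the new version
        - pause: { duration: 10m }   # watch error rates and latency
        - setWeight: 25
        - pause: { duration: 10m }
        - setWeight: 100             # full rollout if metrics stay healthy
```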

Feature Flags

Deploy code but control activation separately from deployment. A feature can be deployed to 100% of servers but enabled for only 1% of users. This decouples release from deployment and allows instant disabling without a rollback. Tools: LaunchDarkly, Unleash, Flagsmith.
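To make the percentage-rollout idea concrete, here is a minimal hand-rolled flag check; the flag names and config shape are hypothetical, and real systems would use one of the SDKs above instead:

```javascript
// In-memory flag config (hypothetical; normally fetched from a flag service)
const flags = { newCheckout: { enabled: true, rolloutPercent: 1 } };

// Deterministic bucket in [0, 100) so each user gets a stable decision
function hashToPercent(userId) {
  let h = 0;
  for (const c of userId) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h % 100;
}

function isEnabled(flagName, userId) {
  const flag = flags[flagName];
  if (!flag || !flag.enabled) return false;   // unknown or killed flags are off
  return hashToPercent(userId) < flag.rolloutPercent;
}

console.log(isEnabled('newCheckout', 'user-123'));
```

The key property is determinism: a given user always lands in the same bucket, so raising `rolloutPercent` only ever adds users, and setting `enabled: false` disables the feature instantly with no deploy.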


GitLab CI: An Alternative Syntax

GitLab CI uses a .gitlab-ci.yml file. The concepts are identical but the syntax differs:

# .gitlab-ci.yml
stages:
  - test
  - build
  - deploy

variables:
  IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

test:
  stage: test
  image: node:20
  cache:
    paths: [node_modules/]
  script:
    - npm ci
    - npm run lint
    - npm test

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind   # docker-in-docker, required to build images in CI
  before_script:
    # CI_REGISTRY_USER / CI_REGISTRY_PASSWORD are predefined GitLab variables
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $IMAGE .
    - docker push $IMAGE
  only:
    - main

deploy-staging:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]   # override the kubectl entrypoint so scripts run
  script:
    # Assumes cluster access via the GitLab Kubernetes agent or a KUBECONFIG variable
    - kubectl set image deployment/api api=$IMAGE -n staging
  environment:
    name: staging
  only:
    - main

deploy-production:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    - kubectl set image deployment/api api=$IMAGE -n production
  environment:
    name: production
  when: manual   # requires a human click
  only:
    - main

Secrets Management in CI/CD

CI/CD pipelines need credentials: Docker registry passwords, cloud provider keys, database URLs. Never hard-code secrets in your pipeline YAML. The right approach depends on your platform:

  • GitHub Actions: Store secrets in repository or organization settings. Reference with ${{ secrets.MY_SECRET }}. For AWS, use OIDC instead of long-lived access keys.
  • GitLab CI: CI/CD variables in project settings. Mark as "masked" to prevent them appearing in logs.
  • Vault / AWS Secrets Manager: For production, inject secrets at runtime rather than baking them into the build environment.
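The OIDC approach for AWS looks roughly like this in a workflow file; the role ARN is a placeholder, and the IAM role must be configured to trust GitHub's OIDC provider:

```yaml
permissions:
  id-token: write   # lets the job request a short-lived OIDC token
  contents: read

steps:
  - name: Configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: arn:aws:iam::123456789012:role/gha-deploy   # placeholder
      aws-region: eu-west-2
```

The runner exchanges the OIDC token for temporary AWS credentials at job start, so there is no long-lived access key to leak or rotate.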

FAQ

What is the difference between a CI pipeline and a CD pipeline?

A CI pipeline runs on every code change and validates that the codebase is healthy: it builds, lints, and tests. A CD pipeline takes a validated artifact and deploys it to one or more environments. In practice, most teams configure both in a single workflow file where deployment stages only run when tests pass.

Do I need CI/CD for a small project?

Yes, even for solo projects. A basic CI pipeline that runs tests on every push takes 30 minutes to set up and pays dividends immediately. GitHub Actions is free for public repos and has a generous free tier for private repos. The earlier you add CI, the easier it is to maintain as the project grows.

What is a pipeline artifact?

An artifact is the output of a build stage that is passed to subsequent stages. For a Docker-based service, the artifact is the Docker image pushed to a container registry. For a Node.js app, it might be a .tar.gz archive. Artifacts ensure that what you test is exactly what you deploy - the same binary, not a rebuilt one.
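For non-Docker builds, GitHub Actions can pass an artifact between jobs directly; the names and paths below are illustrative:

```yaml
# In the build job:
- name: Upload build output
  uses: actions/upload-artifact@v4
  with:
    name: dist
    path: dist/

# In a later job that declares "needs" on the build job:
- name: Download build output
  uses: actions/download-artifact@v4
  with:
    name: dist
```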

How do I handle database migrations in CI/CD?

Database migrations are the trickiest part of automated deployment. The safest pattern is: run migrations before deploying the new application code (the new code should be backwards-compatible with the old schema during the transition window). Never run destructive migrations (dropping columns, renaming tables) in the same deploy as the code that depends on the new schema. Use a separate migration job, a pre-sync hook (like ArgoCD's PreSync), or a deploy job container that runs migrations as its first step.
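As a sketch, a dedicated migration step can run at the top of the deploy job, before the rollout; the `migrate` script and `DATABASE_URL` secret are assumptions about your project:

```yaml
- name: Run database migrations
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}   # assumed secret
  run: npm run migrate   # assumed script, e.g. wrapping Prisma, Knex, or Flyway

# ...the deploy step that rolls out the new application code follows
```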

What should I do when a production deployment fails?

Automate rollback as part of your pipeline. In Kubernetes, kubectl rollout undo deployment/api rolls back to the previous ReplicaSet. For blue/green, switch traffic back to the blue environment. The key is having a post-deploy healthcheck step in your pipeline that automatically triggers rollback if the new version fails its health probe within a defined timeout.
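A sketch of that pattern as a single pipeline step, which deploys and then rolls back automatically if the rollout never becomes healthy within the timeout:

```yaml
- name: Deploy with automatic rollback
  run: |
    kubectl set image deployment/api \
      api=${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }} \
      --namespace production
    if ! kubectl rollout status deployment/api -n production --timeout=5m; then
      echo "Rollout failed its health probes - rolling back"
      kubectl rollout undo deployment/api -n production
      exit 1
    fi
```

Exiting non-zero after the undo keeps the pipeline marked as failed, so the rollback is visible rather than silent.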

Validate your GitHub Actions or GitLab CI YAML before pushing to avoid wasted pipeline runs with our free YAML Validator.

Written by Usman Khan
DevOps Engineer | MSc Cybersecurity | CEH | AWS Solutions Architect

Usman has 10+ years of experience securing enterprise infrastructure, managing high-traffic servers, and building zero-knowledge security tools. Read more about the author.