How to Connect Azure DevOps to AWS Using AWS Toolkit and Service Connections (Real Implementation Guide)


  • victoriagimenez5

Based on a real project implementation + official references

Azure DevOps does NOT support native OIDC federation to AWS. Even though it can theoretically issue a token:


  • It cannot be used for write operations

  • It does NOT work with SigV4

  • It is NOT an officially supported method by AWS


Therefore:

  • We had to install the AWS Toolkit.

  • We had to create an AWS Service Connection.

  • Only then could Azure Pipelines write to AWS (ECR/Mira, S3, Terraform, etc.).


This is exactly what we implemented in a real project, and it aligns with the official guidance from both vendors (AWS and Microsoft).


1. Official verification: Azure DevOps does NOT support OIDC federation to AWS


Microsoft states this explicitly:

Azure DevOps AWS Service Connection → requires an Access Key or STS via the AWS Toolkit



Key statement:

“To connect to AWS, you must provide an AWS Access Key ID and Secret Access Key.”

No OIDC is mentioned anywhere.


AWS also confirms that Azure DevOps connects via AWS Toolkit, not OIDC: https://docs.aws.amazon.com/vsts/latest/userguide/welcome.html


“Authentication is performed using AWS service connections… configured with Access Keys or used to assume roles.”


2. Install AWS Toolkit in Azure DevOps


(Exactly as we did with our client)

  1. Go to Azure DevOps.

  2. Navigate to: Organization Settings → Extensions → Browse Marketplace.

  3. Search for “AWS Toolkit for Azure DevOps”.

  4. Install it in the organization or the project.



📘 Official documentation: https://docs.aws.amazon.com/vsts/latest/userguide/welcome.html


3. Create an AWS Service Connection (critical step)

Once AWS Toolkit is installed, Azure DevOps enables a new connection type called AWS, which is required for pipelines to authenticate correctly against AWS using AssumeRole and to generate SigV4 signatures for write operations (ECR, S3, Terraform, ECS, etc.).


3.1. Create a new Service Connection

Go to:

Project Settings → Service connections → New service connection


Select:

AWS (provided by AWS Toolkit)



3.2. Complete the form required by the AWS Toolkit

You’ll be asked for:

  • Role ARN that Azure DevOps should assume

  • External ID (recommended; it must match the value in the role’s trust policy)

  • Default AWS Region

  • Connection name

  • Scope: Project-scoped or Organization-scoped


Then click Verify.

If the trust policy is set correctly in AWS, the connection shows:


Verified successfully


This validates that Azure DevOps can invoke sts:AssumeRole.
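If you want to reproduce the Verify step from a workstation, the same AssumeRole call can be issued with the AWS CLI. A minimal sketch, with a placeholder role ARN and External ID (the command is echoed as a dry run so it is safe to paste without credentials):

```shell
# Illustrative values only — substitute your own role ARN and External ID.
ROLE_ARN="arn:aws:iam::111122223333:role/azure-devops-deploy"
EXTERNAL_ID="example-external-id"

# This is essentially the call the Verify button exercises.
# Remove the leading 'echo' to execute it against AWS for real.
echo aws sts assume-role \
  --role-arn "${ROLE_ARN}" \
  --role-session-name "verify-connection" \
  --external-id "${EXTERNAL_ID}"
```

If the trust policy is correct, the real call returns temporary credentials; any mismatch in the External ID produces an AccessDenied error.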



Note about “Use OIDC” option

This field does NOT establish a full OIDC federation flow to AWS.

Azure DevOps does not support OIDC for AWS SigV4 signing.

Therefore, DO NOT use it. The supported method is AssumeRole through the AWS Toolkit.


3.3. Where to view the Service Connection

Once created, it appears under: Project Settings → Service connections


There you can see:

  • Connection type

  • Assigned name

  • Description

  • Usage history

  • Approval configuration




4. Create an IAM Role for Azure DevOps (AssumeRole)

This is what we created on AWS.

Example trust policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<ACCOUNT_ID>:user/<TOOLKIT-USER>"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "<GENERATED_EXTERNAL_ID>"
        }
      }
    }
  ]
}

Note: <TOOLKIT-USER> is an IAM User created by the client specifically for use by the AWS Toolkit inside Azure DevOps. It is NOT generated automatically.
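The trust policy can be materialized and sanity-checked locally before creating the role. A sketch, with hypothetical account ID, user, External ID, and file names (python3 is assumed available for JSON validation):

```shell
# Write the trust policy to a local file (values are illustrative).
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:user/azure-devops-toolkit" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "example-external-id" } }
    }
  ]
}
EOF

# Sanity-check the JSON before sending it to AWS.
python3 -m json.tool trust-policy.json > /dev/null && echo "trust policy JSON is valid"

# With IAM admin credentials, the role would then be created with:
#   aws iam create-role --role-name azure-devops-deploy \
#     --assume-role-policy-document file://trust-policy.json
```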


4.1. Terraform policy example


# Policy for ECR push from Azure DevOps
resource "aws_iam_role_policy" "azure_devops_ecr_policy" {
  name = "ECR-Push-Policy"
  role = aws_iam_role.azure_devops_ecr.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      # ECR push
      {
        Effect = "Allow"
        Action = [
          "ecr:GetAuthorizationToken",
          "ecr:BatchCheckLayerAvailability",
          "ecr:InitiateLayerUpload",
          "ecr:UploadLayerPart",
          "ecr:CompleteLayerUpload",
          "ecr:PutImage"
        ]
        Resource = "*"
      },

      # STS caller identity (for validation in the pipeline)
      {
        Effect = "Allow"
        Action = [
          "sts:GetCallerIdentity"
        ]
        Resource = "*"
      },

      # SSM read-only parameters
      {
        Effect = "Allow"
        Action = [
          "ssm:GetParameter",
          "ssm:GetParameters",
          "ssm:GetParametersByPath"
        ]
        Resource = [
          "arn:aws:ssm:${local.region}:${data.aws_caller_identity.current.id}:parameter/${local.project}/${local.environment}/*",
          "arn:aws:ssm:${local.region}:${data.aws_caller_identity.current.id}:parameter/${local.project}-star/${local.environment}/*"
        ]
      },

      # S3 as needed
      {
        Effect = "Allow"
        Action = [
          "s3:PutObject",
          "s3:GetObject",
          "s3:ListBucket",
          "s3:GetBucketLocation"
        ]
        Resource = [
          "arn:aws:s3:::los-archivos-data-temp",
          "arn:aws:s3:::los-archivos-data-temp/*"
        ]
      }
    ]
  })
}

5. Assign IAM permissions so Azure DevOps can write to AWS


Examples used in the project:

ECR permissions:


{
  "Effect": "Allow",
  "Action": [
    "ecr:GetAuthorizationToken",
    "ecr:BatchCheckLayerAvailability",
    "ecr:PutImage",
    "ecr:InitiateLayerUpload",
    "ecr:UploadLayerPart",
    "ecr:CompleteLayerUpload"
  ],
  "Resource": "*"
}
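Note that `ecr:GetAuthorizationToken` does not support resource-level permissions (it must stay on `"Resource": "*"`), but the push actions themselves can be scoped to a single repository. A tighter variant, with a hypothetical repository ARN:

```json
{
  "Effect": "Allow",
  "Action": [
    "ecr:BatchCheckLayerAvailability",
    "ecr:InitiateLayerUpload",
    "ecr:UploadLayerPart",
    "ecr:CompleteLayerUpload",
    "ecr:PutImage"
  ],
  "Resource": "arn:aws:ecr:eu-central-1:<ACCOUNT_ID>:repository/prod/webapp"
}
```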

Terraform typically needs:

  • IAM

  • S3 backend

  • DynamoDB Lock

  • ECS/ECR updates


6. Use the Service Connection from YAML

Typical example:

- task: AWSCLI@1
  inputs:
    awsCredentials: 'aws-connection-name'
    regionName: 'us-east-1'
    awsCommand: 'sts'
    awsSubCommand: 'get-caller-identity'
  displayName: 'Validate AWS identity'

This is what we used to validate the connection.
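On success, `get-caller-identity` returns an assumed-role identity rather than a user, which is exactly what confirms AssumeRole worked. The response shape looks like this (account, role, and session names are illustrative):

```json
{
  "UserId": "AROAEXAMPLEID:verify-connection",
  "Account": "111122223333",
  "Arn": "arn:aws:sts::111122223333:assumed-role/azure-devops-deploy/verify-connection"
}
```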




7. Synthesized Example: Build & Push to AWS ECR using the Service Connection


Below is the recommended pipeline for building and pushing a Docker image to AWS ECR using an AWS Service Connection: no Access Keys stored in the pipeline, no secrets in the YAML, everything via AssumeRole through the AWS Toolkit.


# Simplified example of build & push to AWS ECR

trigger:
  branches:
    include:
      - master

pool:
  vmImage: 'ubuntu-latest'

variables:
  - name: AWS_REGION
    value: 'eu-central-1'
  - name: ECR_ACCOUNT_ID
    value: '123456789'
  - name: IMAGE_NAME
    value: 'prod/webapp'

steps:
  - checkout: self

  # Fetch token from SSM (optional)
  - task: AWSShellScript@1
    displayName: 'Fetch NuGet Token from SSM'
    inputs:
      awsCredentials: 'AWS-serviceConnection'
      regionName: '$(AWS_REGION)'
      scriptType: 'inline'
      inlineScript: |
        TOKEN=$(aws ssm get-parameter --name "/serenity/production/VSS_NUGET_ACCESSTOKEN" --with-decryption --query 'Parameter.Value' --output text)
        echo "##vso[task.setvariable variable=NUGET_TOKEN;issecret=true]${TOKEN}"

  # Set a short tag based on the commit
  - script: |
      SHORT_SHA=$(echo "$(Build.SourceVersion)" | cut -c1-7)
      echo "##vso[task.setvariable variable=IMAGE_TAG]${SHORT_SHA}"
    displayName: 'Set image tag'

  # Build & push to ECR
  - task: AWSShellScript@1
    displayName: 'Build & Push Docker Image to ECR'
    inputs:
      awsCredentials: 'AWS-serviceConnection'
      regionName: '$(AWS_REGION)'
      scriptType: 'inline'
      inlineScript: |
        set -euxo pipefail
        REGISTRY="${ECR_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
        IMAGE_URI="${REGISTRY}/${IMAGE_NAME}"

        # Log in to ECR using STS + the federated role (no stored credentials)
        aws ecr get-login-password --region "${AWS_REGION}" | docker login --username AWS --password-stdin "${REGISTRY}"

        # Build
        docker build -f Dockerfile.webapp -t "${IMAGE_URI}:${IMAGE_TAG}" .

        # Tags
        docker tag "${IMAGE_URI}:${IMAGE_TAG}" "${IMAGE_URI}:latest"

        # Push
        docker push "${IMAGE_URI}:${IMAGE_TAG}"
        docker push "${IMAGE_URI}:latest"
    env:
      IMAGE_TAG: $(IMAGE_TAG)
      NUGET_TOKEN: $(NUGET_TOKEN)

7.1 Super-reduced version (ideal for a final snippet)


# Build & push to AWS ECR using AWS Toolkit + Service Connection
- task: AWSShellScript@1
  inputs:
    awsCredentials: 'AWS-serviceConnection'
    regionName: 'eu-central-1'
    scriptType: 'inline'
    inlineScript: |
      set -euxo pipefail
      REGISTRY="123456789.dkr.ecr.eu-central-1.amazonaws.com"
      IMAGE="${REGISTRY}/prod/webapp"
      TAG=$(echo "$(Build.SourceVersion)" | cut -c1-7)

      aws ecr get-login-password --region eu-central-1 \
        | docker login --username AWS --password-stdin "${REGISTRY}"

      docker build -t "${IMAGE}:${TAG}" .
      docker push "${IMAGE}:${TAG}"
      docker tag "${IMAGE}:${TAG}" "${IMAGE}:latest"
      docker push "${IMAGE}:latest"
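The tag derivation in the snippet is easy to verify locally, since `$(Build.SourceVersion)` expands to the full commit SHA:

```shell
# Simulate Build.SourceVersion with an example 40-character commit SHA.
SOURCE_VERSION="9fceb02d0ae598e95dc970b74767f19372d61af8"
TAG=$(echo "${SOURCE_VERSION}" | cut -c1-7)
echo "${TAG}"   # prints 9fceb02
```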

Security First


This model aligns with AWS and industry best practices:

  • No static credentials

  • No IAM user credentials exposed in pipeline code

  • All authentication uses STS with temporary credentials

  • Native SigV4 signing

  • Full CloudTrail traceability

  • Zero secrets stored in Azure DevOps

This significantly improves deployment security and simplifies daily operations, fully aligned with the Security pillar of the AWS Well-Architected Framework.



Note on NuGet tokens (.NET in Azure DevOps)


In .NET environments, Azure DevOps requires a private NuGet token to restore packages from NuGet.org or Azure Artifacts. This token is not part of AWS authentication.


Following a “no-secrets-in-pipelines” approach:

  • The token is stored in AWS Systems Manager Parameter Store

  • Azure DevOps retrieves it dynamically through the federated role using SSM GetParameter


This ensures even .NET ecosystem artifacts are consumed without exposing secrets, maintaining a fully credential-less pipeline architecture.
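Storing the token in Parameter Store is a one-time operator step. A hedged sketch, reusing the parameter name from the pipeline above (the command is echoed as a dry run so nothing is sent to AWS when pasted as-is):

```shell
# One-time operator step; requires SSM write permissions when run for real.
# Replace <nuget-token> with the actual value and drop the leading 'echo'.
PARAM_NAME="/serenity/production/VSS_NUGET_ACCESSTOKEN"
echo aws ssm put-parameter \
  --name "${PARAM_NAME}" \
  --type "SecureString" \
  --value "<nuget-token>" \
  --overwrite
```

Using `SecureString` means the value is encrypted with KMS at rest, and the pipeline's `--with-decryption` flag is what decrypts it at read time.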



Silvio Depetri

Cloud Engineer