
Shunde Zhang
on 21 October 2022

Integrating Charmed Kubernetes with AWS via OIDC


Canonical’s Charmed Kubernetes is a suite of open-source Kubernetes software bundled with many add-ons, including CNIs, CSIs, monitoring tools and cloud integrations. As Juju supports AWS, Charmed Kubernetes can be deployed and run on AWS seamlessly; in fact, many of our customers run Charmed Kubernetes on AWS for production workloads. In such deployments, it is natural for a pod to need access to AWS resources. While it is easy to place an IAM access key and secret key in the pod or in a ConfigMap, this also introduces security risks: the keys can be exposed to others simply by reading the pod configuration or the ConfigMap data. Moreover, it is common practice to rotate static keys periodically to limit the impact of key loss, so that even stolen keys are only usable for a short time. Rotating keys, however, is yet another operational task that adds to everyday work.

AWS provides STS (Security Token Service), which allows a service or a machine to acquire short-lived tokens to access AWS resources. This feature is available on AWS’s managed Kubernetes service, EKS, via the use of an OIDC provider in IAM. To set this up, the user enables the OIDC endpoint on EKS, then creates an OIDC provider in IAM pointing to that endpoint. IAM then trusts tokens coming from this OIDC provider and issues STS tokens in return. Detailed configuration can be found in this AWS document. Under the hood, Kubernetes pods rely on the EKS pod identity webhook to acquire an STS token. Fortunately, AWS has open-sourced this webhook, so users can set it up in any Kubernetes cluster, whether running on AWS or on-premises, to integrate with AWS via OIDC.

OIDC in Kubernetes

As of today’s latest version, 1.25, Kubernetes has built-in support for OIDC, documented in KEP 1393. According to the official Kubernetes documentation, OIDC support is provided by a feature called Service Account Issuer Discovery. This feature became stable in Kubernetes 1.21 and, in the current version, is enabled whenever Service Account Token Volume Projection is enabled. It provides two OIDC-compliant endpoints: an OpenID Provider Configuration document at https://api_server:port/.well-known/openid-configuration, and the associated JSON Web Key Set (JWKS) at https://api_server:port/openid/v1/jwks. The JWKS document contains the public keys that a relying party can use to validate Kubernetes service account tokens. Relying parties first query the OpenID Provider Configuration document, then use the jwks_uri field in the response to fetch the JWKS document. The companion feature, Service Account Token Volume Projection, projects a service account token into a pod via a mounted volume. This service account token is then sent to AWS in exchange for an AWS STS token, which can be used to access AWS resources. Because we configure AWS to trust this OIDC provider and its public keys, the service account tokens that validate against those keys are also trusted by AWS, so AWS will issue STS tokens in exchange for them. The token exchange is performed automatically by the EKS pod identity webhook.
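To illustrate, here is a minimal sketch of what a projected service account token volume looks like (the pod and volume names are purely illustrative; in the setup described in this article, the EKS pod identity webhook injects an equivalent volume automatically):

apiVersion: v1
kind: Pod
metadata:
  name: token-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    # The projected token appears as a file under this path
    - mountPath: /var/run/secrets/tokens
      name: sts-token
  volumes:
  - name: sts-token
    projected:
      sources:
      - serviceAccountToken:
          path: sts-token
          # Audience must match what the relying party (AWS STS) expects
          audience: sts.amazonaws.com
          expirationSeconds: 86400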

Kubernetes API server and S3 Configuration for OIDC

This article describes how to set up OIDC in Charmed Kubernetes and integrate it with AWS. We assume the cluster has already been deployed on AWS with Juju. First we need to define the OIDC issuer URL and JWKS URL in the API server. Normally the Kubernetes API server endpoint is not reachable from the public Internet, but these OIDC documents need to be publicly accessible for IAM/STS to retrieve them. As recommended by the EKS pod identity webhook documentation, we take the content of both endpoints from the Kubernetes API server and place it in AWS S3 to make it publicly accessible. Before that, we need to put the S3 public URLs of the OIDC documents in the API server’s service-account-issuer and service-account-jwks-uri flags, so that Kubernetes includes the new URLs in the OIDC documents it serves.

For example, I have created an S3 bucket called charmed-k8s-oidc, with a folder called o7k in this bucket to host the OIDC configuration documents. Both the bucket and the configuration documents need a public-read ACL. The two URLs will look like the following:

https://charmed-k8s-oidc.s3.amazonaws.com/o7k
https://charmed-k8s-oidc.s3.amazonaws.com/o7k/openid/v1/jwks
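For reference, a sketch of creating such a bucket with the AWS CLI (newer AWS accounts block public ACLs by default, so the Block Public Access settings may also need relaxing):

# Create the bucket with a public-read ACL
# (in regions other than us-east-1, also add --create-bucket-configuration LocationConstraint=<region>)
aws s3api create-bucket --bucket charmed-k8s-oidc --acl public-read
# If public ACLs are blocked on the account, relax the block for this bucket
aws s3api put-public-access-block --bucket charmed-k8s-oidc \
    --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false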

Then we can use a juju command to configure them in the API server.

juju config kubernetes-control-plane api-extra-args="service-account-issuer=https://charmed-k8s-oidc.s3.amazonaws.com/o7k service-account-jwks-uri=https://charmed-k8s-oidc.s3.amazonaws.com/o7k/openid/v1/jwks"
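The change takes a moment to roll out; one way to watch it (assuming the standard kubernetes-control-plane application name) is:

# Watch until the control plane units settle back to an active/idle state
juju status kubernetes-control-plane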

Once Juju has applied the change and restarted the Kubernetes control plane services, we need to fetch the content of the OIDC configuration documents and upload it to S3. For security reasons, all requests to the API server need a token; any valid token will do. For example, we can use the token from the default service account.

First we get its secret name with:

kubectl get sa default -o json | jq -Mr '.secrets[].name | select(contains("token"))'

For instance, if the returned secret name is default-token-w6vvd, we can extract its token with:

kubectl get secret default-token-w6vvd -o json | jq -Mr '.data.token' | base64 -d
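For convenience, we can capture that token in a shell variable, which the curl commands below reference as $TOKEN:

# Capture the service account token for use in subsequent requests
TOKEN=$(kubectl get secret default-token-w6vvd -o json | jq -Mr '.data.token' | base64 -d)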

Next we can fetch the content of the OIDC configuration documents using this token.

curl -k -H "Authorization: Bearer $TOKEN" https://API_SERVER:6443/.well-known/openid-configuration
curl -k -H "Authorization: Bearer $TOKEN" https://API_SERVER:6443/openid/v1/jwks

Now we can upload the content to the charmed-k8s-oidc bucket in S3 as the following files:

/o7k/.well-known/openid-configuration
/o7k/openid/v1/jwks
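One way to do this with the AWS CLI (a sketch that reuses the $TOKEN variable from above and the public-read ACL from earlier):

# Save each document locally, then upload it with a public-read ACL
curl -sk -H "Authorization: Bearer $TOKEN" \
    https://API_SERVER:6443/.well-known/openid-configuration > openid-configuration
curl -sk -H "Authorization: Bearer $TOKEN" \
    https://API_SERVER:6443/openid/v1/jwks > jwks
aws s3 cp openid-configuration s3://charmed-k8s-oidc/o7k/.well-known/openid-configuration --acl public-read
aws s3 cp jwks s3://charmed-k8s-oidc/o7k/openid/v1/jwks --acl public-read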

AWS Configuration

After configuring the Kubernetes API server and S3, we can create an identity provider in IAM. The details of creating an identity provider are out of scope for this document and can be found in the AWS documentation. Briefly, when creating the identity provider, its provider URL points to our S3 bucket, https://charmed-k8s-oidc.s3.amazonaws.com/o7k, and the audience is sts.amazonaws.com.
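For reference, the equivalent AWS CLI call looks roughly like this (a sketch; the thumbprint is the SHA-1 fingerprint of the S3 endpoint’s TLS certificate, and obtaining it is covered in the AWS documentation):

# Register the S3-hosted OIDC issuer as an IAM identity provider
aws iam create-open-id-connect-provider \
    --url https://charmed-k8s-oidc.s3.amazonaws.com/o7k \
    --client-id-list sts.amazonaws.com \
    --thumbprint-list <s3-tls-certificate-thumbprint>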

Next we need to create an IAM role that defines what this identity provider can do. The role has a trust policy that allows our new identity provider to assume it. In the trust policy we can also restrict which principals may assume the role, for example a particular service account or all service accounts in a particular namespace.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::xxxxx:oidc-provider/charmed-k8s-oidc.s3.amazonaws.com/o7k"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringLike": {
                    "charmed-k8s-oidc.s3.amazonaws.com/o7k:sub": "system:serviceaccount:default:*"
                }
            }
        }
    ]
}

In this example we allow all service accounts in the default namespace to assume this role. Then we can attach a permissions policy to the role; for example, we can grant it read-only access to S3.
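As a sketch, the same role can be created with the AWS CLI (using the role name K8SS3ReadOnly referenced later in this article), saving the trust policy above as trust-policy.json and attaching the AWS-managed read-only S3 policy:

# Create the role with the trust policy above
aws iam create-role --role-name K8SS3ReadOnly \
    --assume-role-policy-document file://trust-policy.json
# Attach the AWS-managed read-only S3 permissions policy
aws iam attach-role-policy --role-name K8SS3ReadOnly \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess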

EKS Pod Identity Webhook setup

Now we’ll go back to Kubernetes to set up the EKS pod identity webhook. Its latest version relies on cert-manager, so we need to deploy cert-manager first.

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.2/cert-manager.yaml

Then check out the source code from its GitHub project and run the command below to deploy it.

make cluster-up IMAGE=amazon/amazon-eks-pod-identity-webhook:latest

Due to recent changes, we can press Ctrl-C when we see ‘Waiting for CSR…’. There is a pull request to fix this, but it hasn’t been merged at the time of writing.
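After the deployment completes, we can verify that the webhook is in place; a quick check (resource names may vary slightly with the version deployed):

# Confirm the webhook deployment and its mutating webhook registration exist
kubectl get deploy pod-identity-webhook
kubectl get mutatingwebhookconfiguration pod-identity-webhook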

How to use ServiceAccount annotations to access AWS

At this point everything is set up, and we can run some tests.

First, create a service account in the default namespace to assume the IAM role we created earlier; we assume the role’s name is K8SS3ReadOnly.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3sa
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: "arn:aws:iam::xxxxxx:role/K8SS3ReadOnly"
    # optional: Defaults to "sts.amazonaws.com" if not set
    eks.amazonaws.com/audience: "sts.amazonaws.com"
    # optional: When set to "true", adds AWS_STS_REGIONAL_ENDPOINTS env var
    #   to containers
    eks.amazonaws.com/sts-regional-endpoints: "true"
    # optional: Defaults to 86400 for expirationSeconds if not set
    #   Note: This value can be overwritten if specified in the pod 
    #         annotation as shown in the next step.
    eks.amazonaws.com/token-expiration: "86400"

Then we create a pod to use this service account to access AWS services.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: my-pod
  name: my-pod
spec:
  serviceAccountName: s3sa
  initContainers:
  - image: amazon/aws-cli
    name: my-aws-cli
    command: ['aws', 's3', 'ls', 's3://']
  containers:
  - image: nginx
    name: my-pod
    ports:
    - containerPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always

The init container of this pod uses the aws command to list all buckets in S3. Since the pod uses a service account that can assume an IAM role, we don’t need to provide any access key or secret key to the pod. When the pod is running, we can use kubectl logs to check the output of the init container, where we should see a list of S3 buckets.

kubectl logs my-pod -c my-aws-cli
2022-09-27 05:09:44 charmed-k8s-oidc
......
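Under the hood, the webhook has mutated the pod spec to inject the AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE environment variables along with a projected token volume; a quick way to confirm this is:

# Show the environment variables injected into the init container
kubectl get pod my-pod -o jsonpath='{.spec.initContainers[0].env}'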

On-prem Setup

Lastly, if the Kubernetes cluster is not running on AWS, e.g. in an on-prem data centre, it is necessary to specify a default region for the EKS webhook plugin. To do that, edit the pod-identity-webhook deployment and add "--aws-default-region=[region-name, e.g. us-east-1]" as an argument to the /webhook command.

kubectl get deploy pod-identity-webhook -o yaml
......
    spec:
      containers:
      - command:
        - /webhook
        - --in-cluster=false
        - --namespace=default
        - --service-name=pod-identity-webhook
        - --annotation-prefix=eks.amazonaws.com
        - --token-audience=sts.amazonaws.com
        - --aws-default-region=us-east-1
        - --logtostderr
......

This tells the plugin which AWS region to use. If the Kubernetes cluster is running on AWS, the webhook uses the region it is running in by default; if the cluster is not running on AWS, it has no way to determine a region, so we need to specify one in the deployment configuration.
