The Charmed Distribution of Kubernetes® will run seamlessly on AWS. With the addition of the aws-integrator, your cluster will also be able to directly use AWS native features.

AWS integrator

The aws-integrator charm simplifies working with CDK on AWS. Using the credentials provided to Juju, it acts as a proxy between CDK and the underlying cloud, granting permissions to dynamically create, for example, EBS volumes.


If you use the recommended install method with conjure-up, the integrator charm will be installed by default, and trust granted automatically.

If you install CDK using the Juju bundle, you can add the aws-integrator at the same time by using the following overlay file:

applications:
  aws-integrator:
    charm: cs:~containers/aws-integrator
    num_units: 1
relations:
  - ['aws-integrator', 'kubernetes-master']
  - ['aws-integrator', 'kubernetes-worker']

To use this overlay with the CDK bundle, it is specified during deploy like this:

juju deploy charmed-kubernetes --overlay ~/path/aws-overlay.yaml

Then run the command to share credentials with this charm:

juju trust aws-integrator

... and remember to fetch the configuration file!

juju scp kubernetes-master/0:config ~/.kube/config
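If you already have a kubeconfig from another cluster, you may want to preserve it before fetching the new one. A minimal sketch (the backup filename is only an example):

```shell
# Create the target directory and back up any existing kubeconfig
# before it is overwritten (backup path is illustrative).
mkdir -p "$HOME/.kube"
if [ -f "$HOME/.kube/config" ]; then
  cp "$HOME/.kube/config" "$HOME/.kube/config.bak"
fi
```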

For more configuration options and details of the permissions which the integrator uses, please see the charm readme.

Using EBS volumes

Many pods you may wish to deploy will require storage. Although you can use any type of storage supported by Kubernetes (see the storage documentation), you also have the option to use the native AWS storage, Elastic Block Store (EBS).

First we need to create a storage class which can be used by Kubernetes. To start with, we will create one for the 'General Purpose SSD' type of EBS storage:

kubectl create -f - <<EOY
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-gp2
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
EOY

You can confirm this has been added by running:

kubectl get sc

which should return:

NAME      PROVISIONER             AGE
ebs-gp2   kubernetes.io/aws-ebs   39s

You can create additional storage classes for the other types of EBS storage if needed: simply give each one a different name and replace 'type: gp2' with a different type (see the AWS documentation for details of the available volume types).
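For instance, a class for provisioned-IOPS ('io1') volumes could look like the following. Note that the class name and the iopsPerGB figure here are illustrative choices, not required values:

```yaml
# Illustrative storage class for provisioned-IOPS EBS volumes
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-io1
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"
```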

To actually create storage using this new class, you can make a Persistent Volume Claim:

kubectl create -f - <<EOY
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: testclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: ebs-gp2
EOY

This should finish with a confirmation. You can check the current PVCs with:

kubectl get pvc

...which should return something similar to:

NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
testclaim   Bound    pvc-54a94dfa-3128-11e9-9c54-028fdae42a8c   1Gi        RWO            ebs-gp2        9s

Note that although the claim requested only 100Mi, the reported capacity is 1Gi: this is the minimum size of an EBS volume. This PVC can then be used by pods operating in the cluster. As an example, the following deploys a busybox pod:

kubectl create -f - <<EOY
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      imagePullPolicy: IfNotPresent
      name: busybox
      volumeMounts:
        - mountPath: "/pv"
          name: testvolume
  restartPolicy: Always
  volumes:
    - name: testvolume
      persistentVolumeClaim:
        claimName: testclaim
EOY

Note: If you create EBS volumes and subsequently tear down the cluster, check with the AWS console to make sure all the associated resources have also been released.
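Whether the backing EBS volume is deleted along with its claim is governed by the storage class's reclaim policy (the default is Delete). If you would rather keep volumes for manual cleanup, the policy can be set explicitly; the class name below is illustrative:

```yaml
# Illustrative storage class whose volumes survive PVC deletion
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-gp2-retain
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
```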

Using ELB Loadbalancers

With the aws-integrator charm in place, actions which invoke a load balancer in Kubernetes will automatically generate an AWS Elastic Load Balancer. This can be demonstrated with a simple application running in five pods:

kubectl run hello-world --replicas=5 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080

You can verify that the application and replicas have been created with:

kubectl get deployments hello-world

Which should return output similar to:

NAME          READY   UP-TO-DATE   AVAILABLE   AGE
hello-world   5/5     5            5           2m38s

To create a LoadBalancer, the application should now be exposed as a service:

kubectl expose deployment hello-world --type=LoadBalancer --name=hello
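The expose command above is shorthand for creating a Service object; a roughly equivalent declarative manifest (shown for illustration) would be:

```yaml
# Illustrative Service manifest equivalent to the kubectl expose step
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: LoadBalancer
  selector:
    run: load-balancer-example
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
```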

To check that the service is running correctly:

kubectl describe service hello

...which should return output similar to:

Name:                     hello
Namespace:                default
Labels:                   run=load-balancer-example
Annotations:              <none>
Selector:                 run=load-balancer-example
Type:                     LoadBalancer
LoadBalancer Ingress:     <elb-dns-name>.elb.amazonaws.com
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31203/TCP
Endpoints:                <pod-ip>:8080,<pod-ip>:8080 + 2 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

You can see that the LoadBalancer Ingress is now associated with an ELB address in front of the five endpoints of the example deployment. After allowing some time for DNS propagation, you can test the ingress address:

curl http://<elb-address>:8080

...which should return:

Hello Kubernetes!

Note: If you create ELBs and subsequently tear down the cluster, or decommission the service which created them, check with the AWS console to make sure all the associated resources have also been released.

Upgrading the aws-integrator charm

The aws-integrator is not specifically tied to the version of CDK installed and may generally be upgraded at any time with the following command:

juju upgrade-charm aws-integrator

Troubleshooting

If you have any specific problems with the aws-integrator, you can report bugs on Launchpad.

The aws-integrator charm makes use of IAM accounts in AWS to perform actions, so useful information can be obtained from Amazon's CloudTrail, which logs such activity.

For the charm's own record of events, you can use Juju to replay the log history for that specific unit:

juju debug-log --replay --include aws-integrator/0