Wordpress on EKS

This tutorial follows on from the one where we ran Wordpress on Minikube. If you haven’t already taken that tutorial it’d be worthwhile taking it before working through this one.

In this tutorial we’ll deploy 2 Wordpress sites to an AWS EKS cluster. We’ll create 2 Aurora Serverless databases – one for each Wordpress site – a load balancer and Route53 DNS records. The Wordpress sites will be served over HTTPS so we’ll create some SSL certificates too. For simplicity, the Wordpress sites and Kubernetes API server will be publicly accessible. Note that it’s bad practice to expose your Kubernetes API server to the Internet. A future tutorial will address this, but for now we’ll keep things simple.

Because we’ll be creating publicly accessible resources you’ll need to own a domain name and have a hosted zone for it. The assumption is that the hosted zone will be in the same AWS account you’ll be deploying the cluster into. If you don’t own a domain you can use, you can still just read through this tutorial. Another tutorial will create an entirely private cluster which will work even if you don’t own a domain.

Depending on your AWS account, following this tutorial may cost you some money. After tearing down the cluster, check the AWS console to confirm all resources were actually deleted.

We have a docker image that contains everything you need for this tutorial. If you don’t want to install stuff on your local machine, just run our image with docker run -it sugarkube/tutorial:0.10.0 /bin/bash.

Let’s start!


TLDR

If you’re in a hurry and just want to try Sugarkube out without going through a long tutorial, just export your AWS credentials and run the following commands. If you’ve got time or want to learn more about Sugarkube, skip this whole TLDR section.

git clone https://github.com/sugarkube/sample-project.git
cd sample-project
git checkout tutorial-0.10.0
export AWS_DEFAULT_REGION=eu-west-2
sugarkube ws create stacks/account-setup.yaml account-setup workspaces/account-setup/

Edit the value of the domain key in providers/aws/values.yaml. Set it to a domain you own and which you have a hosted zone for in your AWS account. Then continue with these commands:

sugarkube kapp install stacks/account-setup.yaml account-setup workspaces/account-setup/ --one-shot
sugarkube ws create stacks/web.yaml dev-web workspaces/dev-web/
sugarkube kapps install stacks/web.yaml dev-web workspaces/dev-web/ --one-shot --run-actions -v

Explore the EKS cluster with kubectl to see which ingresses are running and open them using HTTPS. When you’re done, tear down everything with:

sugarkube kapps delete stacks/web.yaml dev-web workspaces/dev-web/ --one-shot --run-actions
sugarkube kapps delete stacks/account-setup.yaml account-setup workspaces/account-setup/ --one-shot

Grab the sample project

If you haven’t already got the sample project on your machine from the Wordpress on Minikube tutorial you’ll need to clone it like before. If you’ve already got it you can skip this step:

git clone https://github.com/sugarkube/sample-project.git
cd sample-project
git checkout tutorial-0.10.0

Account setup

Before we can create our cluster we’ll need to perform some one-off tasks to prepare our AWS account.

Since we want to create publicly accessible services we’ll need a public hosted zone. We’re going to create a hosted zone to act as a container for all the clusters we create with Sugarkube to keep them nicely namespaced from everything else. In my case that’ll be k8s.sugarkube.io, but you’ll be able to change that in a minute. We’ll also need an ACM certificate to serve our Wordpress sites over HTTPS.

Wherever possible we’ll use Terraform to create AWS resources. It has its limitations so we can’t always use it, but it makes sense to use it where we can. Terraform can store its state in an S3 bucket to simplify using it in a team. So as part of this one-off account setup step we’ll create an encrypted S3 bucket for Terraform to store its state in, but it’ll only be used for the public hosted zone resources. Clusters will also create their own isolated S3 state buckets for Terraform.
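For reference, a Terraform S3 backend configuration looks roughly like the fragment below. The bucket and key names here are made up for illustration, not the ones the sample project actually generates:

```hcl
terraform {
  backend "s3" {
    bucket  = "my-terraform-state"              # hypothetical bucket name
    key     = "account-setup/terraform.tfstate" # hypothetical state path
    region  = "eu-west-2"
    encrypt = true                              # state is encrypted at rest
  }
}
```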

If you check in the stacks directory there’s a file called account-setup.yaml. It contains a stack that doesn’t use a provisioner and only a single manifest. A cleaned-up copy of the stack definition is below:

  cluster: setup      # used for namespacing resources
  provider: aws
#  account: dev       # must be given on the command line
#  profile: dev       # must be given on the command line
  region: eu-west-2
  provisioner: none
  provider_vars_dirs:
    - ../providers/
  manifests:
    - uri: ../manifests/web/prelaunch.yaml
      overrides:
        public-hosted-zone:
          vars:
            parent_hosted_zone: "{{ .domain }}"
            hosted_zone: "{{ .root_hosted_zone }}"
            create_acm_certs: true
          post_install_actions:
            - none:
          pre_delete_actions:
            - none:

So what we’ve got here is a stack that requires additional parameters to be set on the command line, since both account and profile are required. It also overrides some variables and disables actions. If you check manifests/web/prelaunch.yaml, you’ll see the public-hosted-zone kapp is configured like this:

  - id: public-hosted-zone
    conditions:
      - "{{ eq .stack.provider \"aws\" }}"      # only run when using AWS
    sources:
      - uri: https://github.com/sugarkube/kapps.git//incubator/public-hosted-zone#public-hosted-zone-0.1.0
    vars:
      parent_hosted_zone: "{{ .root_hosted_zone }}"
      hosted_zone: "{{ .cluster_hosted_zone }}"
    post_install_actions:
      - cluster_update:
    pre_delete_actions:
      - cluster_delete:

So as you may expect, ordinarily this kapp would take different variables. It’d also instruct Sugarkube to create a cluster after the kapp has been installed, and delete it before deleting this kapp. However, our stack config overrides this to stop it creating or deleting clusters at all since we’re not working with individual clusters just yet.

This is another example of how stack configs can override kapp configs in manifests. In reality though, it’d be safer to copy the manifest and use the copy only with this stack. We should very rarely need to repeat what we’re doing now, and sharing the manifest means it could evolve over time, which could break this stack config without our knowledge. But we just wanted to show how values can be overridden.


OK so, what are the actual values of those variables? Before we can find out we need to export AWS credentials to the shell, then create a workspace for this stack config.

Our sample clusters are configured to be created in eu-west-2. You’ll probably save time by just using that region regardless of where you are.

export AWS_DEFAULT_REGION=eu-west-2
export AWS_ACCESS_KEY_ID=<your API ID>
export AWS_SECRET_ACCESS_KEY=<your API key>
sugarkube ws create stacks/account-setup.yaml account-setup workspaces/account-setup/

It’ll take a little longer than last time:

Create a workspace

Sugarkube does an initial pass over the DAG to generate any outputs and render any templates it can. Terraform kapps will run terraform init, which can take a while, so this initial pass can be disabled with the --no-template flag. However, without rendering templates we may see errors when using the kapp vars command, so the default of rendering them is a bit slower but safer. Templates can be regenerated on demand with the kapp template command whenever you want.
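For example, regenerating templates for this workspace would look like the command below (assuming kapp template takes the same stack-config, stack-name and workspace arguments as the other kapp subcommands in this tutorial):

```shell
sugarkube kapp template stacks/account-setup.yaml account-setup workspaces/account-setup/
```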

How do we know Sugarkube executes terraform init? Run steps contain the mappings between identifiers like tf-init and the actual commands they execute. Like other kapp config values, they can be declared in multiple places. The tf-init run step for prelaunch:terraform-bucket is in its sugarkube.yaml file, while prelaunch:public-hosted-zone uses the project defaults in sugarkube-conf.yaml. Both do the same thing and call terraform init.

Once that’s done, let’s try the kapp vars command:

sugarkube kapp vars stacks/account-setup.yaml account-setup workspaces/account-setup/

Kapp vars error

Uh-oh, it’s failed again. Terraform is complaining that the bucket configured as the backend for the prelaunch:public-hosted-zone kapp doesn’t exist yet – and indeed it doesn’t. To get any output from kapp vars we’ll need to pass the --no-outputs flag to stop Sugarkube loading outputs at all. Let’s use it to search for the value of that .domain variable we saw above:

sugarkube kapp vars stacks/account-setup.yaml account-setup workspaces/account-setup/ --no-outputs -i prelaunch:public-hosted-zone | grep domain

This quickly prints out:

domain: sugarkube.io
domain: sugarkube.io

It doesn’t matter that it’s printed twice; the variable simply appears twice in the output from kapp vars.

sugarkube.yaml files are golang templates. To refer to variables in a template you need to use a leading .. Sugarkube loads values for template variables from YAML, which doesn’t use a leading .. So although we’re searching for the value of the {{ .domain }} tag, we need to grep for the variable name without a leading ., i.e. domain.
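To make that concrete, here’s a tiny standalone illustration using a throwaway file shaped like the sample project’s values.yaml (the file path here is made up):

```shell
# Create a throwaway file shaped like providers/aws/values.yaml
cat > /tmp/values-example.yaml <<'EOF'
domain: sugarkube.io
root_hosted_zone: k8s.{{ .domain }}
EOF

# Grepping for "domain" (no leading dot) matches both the plain YAML key
# and the "{{ .domain }}" template tag that references it
grep domain /tmp/values-example.yaml
```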

So using this approach, let’s quickly see how the kapp would normally be parameterised compared to the values set in the stack config:

Variable             Default value            Stack value
parent_hosted_zone   k8s.sugarkube.io         sugarkube.io
hosted_zone          setup.k8s.sugarkube.io   k8s.sugarkube.io

OK so it’s going to run passing sugarkube.io as the value of parent_hosted_zone and k8s.sugarkube.io as the value of hosted_zone.

To change the sugarkube.io domain to one you own, just grep for domain in the directories configured under provider_vars_dirs:

grep -r domain providers

The string appears in two places:

providers/aws/values.yaml:domain: sugarkube.io
providers/aws/values.yaml:root_hosted_zone: k8s.{{ .domain }}

Since the actual value is defined in a single place, we just need to change it there. If it appeared in multiple places you’d need to understand how Sugarkube loads config files depending on a stack’s parameters. Notice how the value of root_hosted_zone references the domain variable. This makes Sugarkube’s config system very flexible: you can build hierarchical configuration by overriding values at different levels if you want to go down that route.
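For example, if the domain you own were example.com (a stand-in here), the two keys would resolve like this:

```yaml
# providers/aws/values.yaml, with example.com as a stand-in domain
domain: example.com
root_hosted_zone: k8s.{{ .domain }}    # resolves to k8s.example.com
```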

Edit the value of the domain key in providers/aws/values.yaml. If you rerun kapp vars you should see your domain now:

sugarkube kapp vars stacks/account-setup.yaml account-setup workspaces/account-setup/ --no-outputs -i prelaunch:public-hosted-zone | grep hosted_zone

OK let’s try a dry run installation:

sugarkube kapp install stacks/account-setup.yaml account-setup workspaces/account-setup/ -n

Install dry run

Another error. Sorry, that’s the best we’ve got for that kind of error right now. The important part to note is this bit right at the end:

 <.outputs.this.terraform.name_servers.value>: nil pointer evaluating interface {}.name_servers

An output doesn’t have a value and it’s throwing off Go’s templater. This isn’t really a bug as such (although we do plan to smooth this over in a future release). It’s indicative of a valid error case. If you remove the -n flag and try planning the installation you’ll get a different error from prelaunch:public-hosted-zone:

Error: Failed to get existing workspaces: S3 bucket does not exist.

Similar to what we saw in the Wordpress on Minikube tutorial there are limits to what can be planned. Terraform can’t plan actions if the bucket for its backend doesn’t exist. We have a choice of 2 solutions:

  1. Install only the prelaunch:terraform-bucket by using a selector – we’d pass -i prelaunch:terraform-bucket to kapp install.
  2. Use the --one-shot flag like we did last time. This is generally simpler and easier for installations, so let’s just use that:

    sugarkube kapp install stacks/account-setup.yaml account-setup workspaces/account-setup/ --one-shot

Here’s a slightly sped-up recording:

Install setup

OK that’s great. Everything installed. We should now have a public hosted zone with short TTLs and the parent hosted zone should have NS records in it pointing to the new hosted zone. We should also have a validated wildcard ACM certificate and an S3 bucket for Terraform.

We haven’t mentioned it so far but the prelaunch:terraform-bucket kapp does some pretty clever jiggery-pokery. First it has to use a local state file while creating the S3 bucket, then copy the state to the bucket before reconfiguring Terraform to use that bucket in future. It does the opposite when the kapp is deleted. Fortunately all the complexity for that little dance is contained within that single kapp. Other kapps just expect there to be a remote bucket and use it, which works fine provided the prelaunch:terraform-bucket kapp has been run.
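That dance can be sketched with plain Terraform commands. This is a conceptual sketch, not the kapp’s actual scripts, and the backend-config value is a placeholder:

```shell
terraform init                  # start with a local state file
terraform apply                 # creates the encrypted S3 bucket
# now re-point the backend at the new bucket and copy the local state into it
terraform init -force-copy -backend-config="bucket=<the-new-bucket>"
```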

Creating the cluster

OK, let’s blast through creating a cluster on EKS. You’re hopefully starting to get the workflow. Let’s create a workspace for the dev-web stack in the stacks/web.yaml stack config file:

sugarkube ws create stacks/web.yaml dev-web workspaces/dev-web/

We saw in the last tutorial the dependencies for this cluster:

dev-web dependencies

So let’s create it. As you hopefully know by now, the simplest way to create a cluster is with --one-shot. This cluster also contains actions, so let’s just kick it off with everything it needs (if you get validation errors telling you you’re missing requirements, you’ll need to install those). For variety, let’s enable verbose output:

sugarkube kapps install stacks/web.yaml dev-web workspaces/dev-web/ --one-shot --run-actions -v

After a short while Sugarkube will delegate to eksctl to actually create the EKS cluster, as you can see in the sped-up recording below. The config that will actually be passed to eksctl is in providers/aws/eks.yaml – we’ll go into how Sugarkube searches for and loads provisioner configs in the next tutorial.

Note that although we already installed the prelaunch kapps, Sugarkube will rerun the plan_install and apply_install run units for them on this invocation. Sugarkube doesn’t maintain its own state of what’s been installed. What’s installed/deleted is entirely controlled by selectors and the DAG for the target stack. That’s why it’s important that installing kapps is idempotent: their installers must be rerunnable multiple times without adverse effects. The same is not true of deletion, though – kapps can fail if Sugarkube attempts to delete an already-deleted kapp.

EKS create

OK great. We can see there are 2 namespaces, one for wordpress-site1 and one just called site2. If you check in the AWS console you should see 2 Aurora Serverless databases. And you can hit the ingress URLs https://site1.devweb1.k8s.<your_domain> and https://site2.devweb1.k8s.<your_domain>, making sure to accept the TLS certificates. Note that this time Sugarkube didn’t install fixture data because of how the Wordpress kapp’s sugarkube.yaml file is configured. The run_units entry for the fixtures key contains a condition which evaluates to false for this cluster:

  - "{{ eq .stack.provider \"local\" }}"        # only run when using the local provider

Have a play about in the AWS console to see what’s been created. Notice that the nginx kapp’s service type is LoadBalancer and both Wordpress sites use it. Traffic is routed into the cluster through this load balancer. There’s also another S3 bucket for Terraform state.
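If you want to poke around the cluster itself before tearing it down, a few standard kubectl commands are a good starting point (assuming eksctl has written your kubeconfig):

```shell
kubectl get namespaces                                  # the two Wordpress namespaces
kubectl get ingress --all-namespaces                    # the HTTPS ingresses
kubectl get svc --all-namespaces | grep LoadBalancer    # the shared nginx load balancer
```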

When you’re done, tear down the cluster and all the infrastructure. The fastest way is with the following command, but you don’t have to use --one-shot. You could drop that flag to plan the deletion first, then apply it by passing -y.

sugarkube kapps delete stacks/web.yaml dev-web workspaces/dev-web/ --one-shot --run-actions

Here’s a sped-up recording:

EKS delete

If you want to delete the infrastructure we created to set up the account, run this too:

sugarkube kapps delete stacks/account-setup.yaml account-setup workspaces/account-setup/ --one-shot

Please verify that all AWS resources created by Sugarkube have in fact been deleted from your AWS account to avoid unexpected charges.
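A few spot checks with the AWS CLI can help confirm that. These commands only list resources, so they’re safe to run:

```shell
aws rds describe-db-clusters --region eu-west-2    # should show no Aurora clusters
aws route53 list-hosted-zones                      # the k8s.<your_domain> zone should be gone
aws s3 ls                                          # no leftover Terraform state buckets
```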


This tutorial has shown how to prepare an AWS account for use with Sugarkube and how to inspect variables for kapps. We then brought up an EKS cluster running 2 Wordpress sites backed by Aurora Serverless databases, and finally tore it all down again.

From here check out our other tutorials or learn more about Sugarkube’s concepts.