


The Ayedo Kubernetes Platform (AKP) builds a platform for cloud native applications on Kubernetes.

It's Ansible-based and allows you to deploy the platform on any of the supported Kubernetes providers.

AKP is a module of the Ayedo Cloud Stack (ACS) and uses a common directory to store output and configuration, the so-called $acs_dir, which defaults to ~/.acs.

ACS works with so-called stacks. Each installation of the Ayedo Kubernetes Platform therefore requires a $stack, similar to a Helm release. All of AKP's output is saved to the so-called $stack_dir, which defaults to $acs_dir/$stack.

To get started, head to the Quick Start section.


  • Built-In Ingress controller with Nginx
  • Built-In Loadbalancer for non-cloud environments with MetalLB
  • Built-In TLS with cert-manager
  • Built-In Metrics-Aggregation with Prometheus
  • Built-In Log-Aggregation with Loki
  • Built-In Distributed Tracing with Tempo
  • Built-In Application Insights with Grafana
  • Built-In Service Mesh with Linkerd
  • Built-In OCI Registry with Harbor
  • Built-In Continuous Integration with GitLab
  • Built-In Continuous Delivery with ArgoCD
  • Built-In Functions-as-a-Service with Fission
  • Built-In DNS Automation with external-dns

Supported Kubernetes Providers


To successfully use this playbook, you need the following tools installed on the control machine:

Install requirements

pip3 install -r requirements.txt

Get started

We will work with a new installation that we'll call mycluster. This gives us the following base-config:

  • stack_dir=~/.acs/

INFO: AKP expects your configuration and inventory to be in this directory

NOTE: You need a Kubernetes cluster (ideally FRESH / EMPTY or PRE-CONFIGURED from this repository) to work through this successfully. You can build clusters from scratch with the Ayedo Cloud Stack.

To run this playbook against a specific Kubernetes cluster, all you have to do is point your local tooling (helm, kubectl, Ansible) to the respective cluster.

This can be done by setting the KUBECONFIG environment variable to the config file for your cluster.

NOTE: Typically, when creating a new cluster, you are given the option to download its kubeconfig file to your machine. We assume that you have a kubeconfig in ~/.acs/, generated by the Ayedo Kubernetes Engine.

export KUBECONFIG=~/.acs/

or, in case you already have a context in your kubeconfig:

kubectl config use-context mycluster

Create your configuration files

In your $stack_dir, create your configuration file:

edit ~/.acs/

Setup your cluster configuration:

# ~/.acs/
  enabled: true

INFO: You can see the defaults here.
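A fuller stack configuration might look like the following sketch. The option names come from the configuration table below; the values are purely illustrative, not recommended defaults:

```yaml
# ~/.acs/<stack>/... (exact filename depends on your stack layout)
metallb:
  enabled: true            # needed on bare-metal / minikube clusters
harbor:
  enabled: true
  admin_password: harbor123456   # change this in any real deployment
gitlab:
  enabled: false
cert_manager:
  letsencrypt:
    mail: admin@example.com      # required when using the LetsEncrypt issuers
```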

Use with Docker

docker run -e "STACK_DIR=/root/.acs/" -v ~/.acs/ ansible-playbook install.yml

Use manually

export STACK_DIR=~/.acs/
ansible-playbook install.yml

This deploys the whole platform with default configuration (see defaults.yml).

NOTE: If you're deploying to bare metal, or your Kubernetes cluster has been set up using minikube, you'll need to install MetalLB to have a loadbalancer in the cluster.

The platform will save its outputs to a directory specified in $stack_dir which defaults to $acs_dir/$stack.


Default configuration options can be found in defaults.yml. This file will be loaded by install.yml. Additional configuration options can be set in multiple ways:

  • creating user configuration in ~/.acs/; AKP picks up user configuration from there automatically.

Configuration options

| Configuration Option | Description | Default |
| --- | --- | --- |
| in_ci | Asserts if running in a CI environment | False |
| stack | Name of this release | - |
| stack_dir | Location of the stack | EnvVar STACK_DIR |
| argocd.enabled | Enable/Disable ArgoCD | True |
| cert_manager.enabled | Enable/Disable cert-manager | True |
| cert_manager.version | Version of cert-manager to install | v1.1.0 |
| cert_manager.letsencrypt.mail | The email address used for LetsEncrypt certificates (required when using the LetsEncrypt issuers) | - |
| cert_manager.letsencrypt.staging_issuer.enabled | Enable/Disable a staging issuer for LetsEncrypt | True |
| cert_manager.letsencrypt.prod_issuer.enabled | Enable/Disable a production issuer for LetsEncrypt | True |
| cert_manager.ca_issuer.enabled | Enable/Disable a CA issuer to power encryption of inter-service communication | False |
| cert_manager.ca_issuer.certificate_path | The path to the CA certificate used to power the CA issuer | $stack_dir/cert-manager/ca.crt |
| cert_manager.ca_issuer.key_path | The path to the CA key used to power the CA issuer | $stack_dir/cert-manager/ca.key |
| cert_manager.route53.enabled | Enable/Disable the Route53 integration | EnvVar CERT_MANAGER_ROUTE53_ENABLED |
| cert_manager.route53.access_key_id | Route53 access key ID | - |
| cert_manager.route53.secret_access_key | Route53 secret access key | - |
| external_dns.enabled | Enable ExternalDNS | EnvVar EXTERNAL_DNS_ENABLED |
| external_dns.cloudflare.email | ExternalDNS Cloudflare email | - |
| external_dns.cloudflare.api_key | ExternalDNS Cloudflare API key | - |
| gitlab.enabled | Enable GitLab | False |
| gitlab.domain | The domain for GitLab | - |
| harbor.enabled | Enable Harbor | False |
| harbor.core.domain | The domain for Harbor Core | - |
| harbor.notary.domain | The domain for Harbor Notary | - |
| harbor.admin_password | The Harbor admin password | harbor123456 |
| harbor.config_overrides | Config overrides for the Harbor Helm chart | - |
| linkerd.enabled | Enable/Disable Linkerd | True |
| loki.enabled | Enable/Disable Grafana Loki | True |
| metallb.enabled | Enable/Disable MetalLB | False |
| nginx_ingress.enabled | Enable/Disable nginx-ingress | True |
| postgres.enabled | Enable/Disable Postgres | False |
| postgres.username | The username for the default postgres user created on deployment | postgres |
| postgres.password | The password for the default postgres user created on deployment | postgres |
| prometheus.enabled | Enable/Disable Prometheus & Grafana | True |
| tempo.enabled | Enable/Disable Grafana Tempo | True |

Build image

docker build -t akp-custom .


The install.yml playbook contains all platform-level components for the Ayedo Kubernetes Platform. Except for nginx-ingress, all components are agnostic in that their target system is an arbitrary Kubernetes cluster. nginx-ingress expects some form of loadbalancer to exist (which is a given with managed Kubernetes services in the cloud); on non-cloud Kubernetes clusters, extra configuration (metallb.enabled=true) is necessary to make it work.

All necessary deployment steps have been worked into the Ansible playbook install.yml. Components can be enabled/disabled and configured with configuration options.


cert-manager handles all things SSL and certificates. In this context, it has 2 main purposes:

  1. issue (and re-issue) valid certificates from LetsEncrypt (staging or production) for services registering endpoints with nginx-ingress
  2. regularly re-issue the LinkerD Trust Anchor / Identity certificate
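For the first purpose, the production LetsEncrypt issuer roughly corresponds to a ClusterIssuer like the following sketch. The resource name and solver configuration are assumptions for illustration, not necessarily what AKP deploys:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod          # name is an assumption
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com      # taken from cert_manager.letsencrypt.mail
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx          # solved via the built-in nginx-ingress
```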

If cert_manager_route53_enabled is True (which is not the default), the playbook creates a secret called cert-manager-route53-credentials-secret in the cert-manager namespace - this secret contains the AWS credentials (see configuration on how to specify these credentials) needed for the Route53 integration.
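In stack-configuration terms, enabling the Route53 integration might look like this sketch (option names from the configuration table; credential values are placeholders):

```yaml
cert_manager:
  enabled: true
  route53:
    enabled: true
    access_key_id: "AKIA..."        # placeholder AWS access key ID
    secret_access_key: "..."        # placeholder AWS secret access key
```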


Grafana Loki is used for centralized logging. This component installs Loki alongside Promtail to capture cluster-wide logs.

Grafana (as part of Prometheus) is pre-configured with loki as a datasource so logs can easily be explored or worked into diagrams.

Loki is also integrated with Tempo to automatically link TracingIDs logged to Loki to Tempo.


In Kubernetes, the ingress controls incoming traffic and allows you to manipulate routing (and more). We're using nginx as the ingress for 2 reasons:

  • it's the most widely used, most stable, and simplest implementation of an ingress, and is widely adopted (integration effort is low)
  • it comes with no special configuration / CRD requirements and integrates easily with cert-manager
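The cert-manager integration works through a standard annotation on the Ingress resource. A minimal sketch, assuming an issuer named letsencrypt-prod and a hypothetical service myapp:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # issuer name is an assumption
spec:
  ingressClassName: nginx
  tls:
    - hosts: [myapp.example.com]
      secretName: myapp-tls       # cert-manager stores the issued certificate here
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```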


Postgres can be used as a central database instance for all services. Default username and password can be configured using postgres_username and postgres_password.

Using postgres_additional_objects you can provide a list of databases and user-credentials that should be created on top of the defaults.
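A sketch of what such a list might look like in your stack configuration. The exact key names and list shape are assumptions based on the description of postgres_additional_objects, and all values are placeholders:

```yaml
postgres:
  enabled: true
  username: postgres
  password: postgres
  additional_objects:          # key name is an assumption
    - database: myapp          # extra database to create on deployment
      username: myapp
      password: changeme       # placeholder; use a real secret
```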

PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:


To get the password for "postgres" run:

export POSTGRES_PASSWORD=$(kubectl get secret --namespace postgres postgres-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)

To connect to your database run the following command:

kubectl run postgres-postgresql-client --rm --tty -i --restart='Never' --namespace postgres --image --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host postgres-postgresql -U postgres -d postgres -p 5432

To connect to your database from outside the cluster execute the following commands:

kubectl port-forward --namespace postgres svc/postgres-postgresql 5432:5432 &
PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432


Prometheus collects a vast number of metrics about the cluster and all running containers out of the box. We're using the Prometheus Operator - bundled with Grafana and all necessary collectors in kube-prometheus-stack - instead of the legacy Prometheus stack. That means deployments are not tracked/discovered via annotations but through so-called ServiceMonitors. A generic ServiceMonitor is created that automatically captures metrics for all services with the label akp-app: $APPNAME, where $APPNAME can be an arbitrary value describing your app.

Additional ServiceMonitors can be created as part of a HELM chart (or manifest) and will be taken into account by Prometheus if they have the following label: akp-monitoring: enabled.
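A minimal ServiceMonitor carrying that label might look like the following sketch; the app name and metrics port name are assumptions for illustration:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp
  labels:
    akp-monitoring: enabled   # picked up by Prometheus, per the text above
spec:
  selector:
    matchLabels:
      akp-app: myapp          # matches services labeled for the generic monitor
  endpoints:
    - port: metrics           # port name is an assumption
      interval: 30s
```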

Prometheus can be utilized by Lens and other components of the cluster as well.


The Kubernetes cluster can be integrated with GitLab to allow easy deployments into GitLab-controlled environments.

The playbook will guide you (mostly) through the process of connecting your current cluster to GitLab.

NOTE: this isn't necessary (or useful) with your Docker Desktop Kubernetes cluster

In case you want to do the process manually:


On the machine you're controlling the cluster from perform the following steps:

  • Get the Kubernetes API URL: kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'
  • Add the API URL from the output to the cluster you're configuring in GitLab
  • List secrets: kubectl get secrets
  • Look for a secret named default-token-xxxx
  • Run kubectl get secret <secret name> -o jsonpath="{['data']['ca\.crt']}" | base64 --decode, where <secret name> should be substituted with your default token name, e.g. default-token-6x2kk
  • Add the certificate from the output to the cluster you're configuring in GitLab
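The API-URL extraction in the first step is plain text processing, so it can be sketched on a sample line. The sample output below is illustrative; a real cluster prints its own address:

```shell
# Hypothetical sample line from `kubectl cluster-info` (illustrative):
sample="Kubernetes master is running at https://203.0.113.10:6443"

# Same awk filter as in the step above: print the last field of the
# first line containing "http", i.e. the API URL.
api_url=$(printf '%s\n' "$sample" | awk '/http/ {print $NF}')
echo "$api_url"   # prints https://203.0.113.10:6443
```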


Linkerd is our service mesh of choice. It's simple and easy to maintain.

To enable the service mesh for a Namespace or Deployment, simply set the Linkerd injection annotation to enabled.
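Linkerd's standard proxy-injection annotation is linkerd.io/inject. A minimal sketch for meshing a whole (hypothetical) namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myapp                      # namespace name is an assumption
  annotations:
    linkerd.io/inject: enabled     # all pods created here get the Linkerd proxy
```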

See more here:




MetalLB is a Loadbalancer for "bare metal" scenarios.


ArgoCD is a Continuous Delivery platform based on GitOps principles.


Tempo is Grafana's tool for distributed tracing. It's compatible with Jaeger and OpenTelemetry. Usage differs depending on your programming language of choice.

See more here:

Tempo is automatically integrated with Grafana and Loki.


  • cert-manager sometimes fails to create certificate issuers, outputting something like Post "https://cert-manager-webhook.akp.svc:443/mutate?timeout=10s": x509: certificate signed by unknown authority (Internal Server Error, status 500) - this can be mitigated by simply running the playbook again

Last update: October 18, 2021