Deploy Ghost on Lokomotive - Bare Metal

It's been a bit bumpy. Lokomotive (Git repo) as a K8s distribution was not my first choice.

I tried to update my previous install (CoreOS Container Linux, Typhoon) by shifting to Fedora CoreOS, as Container Linux is EOL following Red Hat's buyout of CoreOS. My experience (YMMV) was that it was buggy, possibly because my hardware is showing its age, so it seemed a bit silly to persist. The next attempt was Typhoon with Flatcar Linux. I should have been able to make that work, but Typhoon is a single-person project and just didn't seem to be moving along; I got no response to a request. It was good the first time around.

So I took the leap off the beaten track to try Lokomotive. I should note here that Flatcar Linux is a fork of Container Linux that has been taken on by Kinvolk, while Fedora CoreOS is a hybrid of Fedora Atomic Host merged with Container Linux.

And both Flatcar and Lokomotive are open source.

The reality is (IMHO) that IBM-Red Hat is a commercially successful company with an excellent business model that devalues the open source concept. Its direction is enterprise software as a source of profit, and unless you're big, well-funded and want paid support, you don't exist. I work for one of their customers. What's happening with FOSS is troubling.

Lokomotive is not for the faint-hearted. Without my previous experience, it's probable that I would have chosen an easier path. Rancher's k3s works great on my Raspberry Pi cluster.

At the moment it seems the Lokomotive developers are racing to the next release, and the documentation leaves a fair bit to be desired. The priority is fixing issues and adding features. Completely understandable.

This piece attempts to capture how to deploy a commonly used web application (Ghost) with the new tools shipped in Lokomotive: Contour and Envoy, with Cert-Manager providing Let's Encrypt certificates for TLS security. Contour, Envoy and Cert-Manager are "components" of Lokomotive, which means deployment is built in; the sparse documentation provides sufficient direction to deploy them without any extra tooling. (BTW - if you're reading this, then clearly it works.) Contour is an open source Kubernetes ingress controller providing the control plane for the Envoy edge and service proxy. They are, if you like, part of the package, hence my interest in putting them to work. I could have gone down a different path and installed Traefik as a router rather than Contour, but Contour looks interesting.

Background and Prerequisites

  • The hardware is described in this blog (4 thin Dell Wyse clients with an SSD jammed in) - x86, cheap and cheerful.
  • Installed Lokomotive - my install is bare-metal, which required Matchbox (a good start on Matchbox is the Typhoon documentation) and dnsmasq configured to do an iPXE boot to install Flatcar Linux using Terraform. Terraform configuration is built in to lokoctl.
  • MetalLB (another Lokomotive component)
  • Persistent storage - I used the standard nfs-client provisioner
  • Lens - described as the Kubernetes IDE, it's very useful as a way of avoiding becoming too familiar with kubectl commands. It provides Helm repos - one of them carrying a Ghost chart.
  • kubectl - the default Kubernetes command-line tool.
  • A network you manage - see below for details on mine - basically a subnet and a router connected to the Internet.

Steps

1. After the initial Lokomotive install, install Lens on your workstation/desktop - it's pretty smart - you just have to point it at the Lokomotive kubeconfig, which will be at

~/lokomotive-infra/mybaremetalcluster/lokomotive-assets/cluster-assets/auth/kubeconfig

when you add the cluster. It gives a good insight into cluster status and provides Helm charts.
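
The same kubeconfig serves kubectl from a terminal - a quick sanity check:

export KUBECONFIG=~/lokomotive-infra/mybaremetalcluster/lokomotive-assets/cluster-assets/auth/kubeconfig
kubectl get nodes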

2. Install MetalLB - I used a Lens Helm chart, as the default Lokomotive component defaults to BGP - my network is not that sophisticated, so I configured Layer 2 (a minimal sketch follows).
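
For reference, here is roughly what a Layer 2 configuration looks like, assuming the ConfigMap-based configuration used by MetalLB around v0.9 - the address range is a placeholder, so pick a free slice of your own subnet:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      # hypothetical pool - must be unused addresses on your LAN
      addresses:
      - 192.168.1.240-192.168.1.250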

3. Install a persistent volume provisioner - I used the nfs-client-provisioner, again via a Helm chart available through the Lens app (a sketch of the values follows). Note that you will want to set podSecurityPolicy.enabled: true, as Lokomotive requires it. Kinvolk haven't included the nfs-client provisioner as a component, but it would be a good idea.
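
A rough sketch of the chart values that matter, with names as in the old stable nfs-client-provisioner chart (check your chart version; the NFS server address and export path are placeholders):

# nfs-client-provisioner values (sketch)
nfs:
  server: 192.168.1.50        # hypothetical NFS server address
  path: /srv/nfs/kubernetes   # hypothetical export path
storageClass:
  name: nfs-client            # the storageClass referenced later by Ghost and mariadb
podSecurityPolicy:
  enabled: true               # Lokomotive enforces PodSecurityPolicies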

4. Install the Contour and Cert-Manager components using lokoctl, declared in your .lokocfg configuration (sketched below).
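
As a sketch, the declarations in a .lokocfg file look something like this - parameter names can vary between Lokomotive versions, so check the component reference; the email is a placeholder used for the Let's Encrypt account:

component "contour" {}

component "cert-manager" {
  email = "myname@mailserver.com"
}

Then apply the configured components with:

lokoctl component apply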

Optional: if you install the prometheus-operator component, you'll get some graphs in Lens, which is a bonus - I haven't yet configured a Grafana dashboard. I may circle back for that.

5. Add the Bitnami Helm repo to Lens - this way you can install a more up-to-date version of Ghost than ships in the Helm stable repo. Open the terminal window at the bottom of the Lens window and enter:

helm repo add bitnami https://charts.bitnami.com/bitnami

6. Once the charts have refreshed, choose the Ghost chart from Bitnami and select Install - this will open a window with the chart values available to edit. Ctrl-a will select all the text in the window, and it can be copied to a text file for editing. I found this useful as it let me save the values, edit them and paste the result back into the window as I went through a number of iterations. If you wish to test MetalLB, the default attribute

service:
  type: LoadBalancer

can be retained; accept the defaults, configure the storage, and Ghost will install and claim a LoadBalancer IP. You can then test access. However, for our purposes we replace LoadBalancer with ClusterIP, as we wish to put Ghost behind the Envoy proxy (the Envoy proxy has a LoadBalancer IP assigned when it's deployed). Below is the values file I have used to achieve the objective set out at the beginning (extraneous comments have been edited out and identifying values altered - most attributes are left as the default):

## Bitnami Ghost image version
## ref: https://hub.docker.com/r/bitnami/ghost/tags/
##
image:
  registry: docker.io
  repository: bitnami/ghost
  tag: 3.22.1-debian-10-r0
  ## Specify a imagePullPolicy
  ##
  pullPolicy: IfNotPresent
## volumePermissions: Change the owner of the persistent volume mountpoint to runAsUser:fsGroup
##
volumePermissions:
  image:
    registry: docker.io
    repository: bitnami/minideb
    tag: buster
    pullPolicy: Always
 
## Ghost protocol, host, port and path to create application URLs
## ref: https://github.com/bitnami/bitnami-docker-ghost#configuration
##
ghostProtocol: https
ghostHost: myghost.mydomain.com
ghostPort: 443
ghostPath: /

## User of the application
## ref: https://github.com/bitnami/bitnami-docker-ghost#configuration
##
ghostUsername: myname@mailserver.com

## Application password
## Defaults to a random 10-character alphanumeric string if not set
## ref: https://github.com/bitnami/bitnami-docker-ghost#configuration
##
ghostPassword: mypassword

## Admin email
## ref: https://github.com/bitnami/bitnami-docker-ghost#configuration
##
ghostEmail: myname@mailserver.com

## Ghost Blog name
##
ghostBlogTitle: The Backyard Hacker
## ref: https://github.com/bitnami/bitnami-docker-ghost#configuration
allowEmptyPassword: false
##
livenessProbe:
  enabled: true
  initialDelaySeconds: 120
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 6
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 30
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 6
  successThreshold: 1

##
mariadb:
  ## Whether to deploy a mariadb server to satisfy the applications database requirements. To use an external database set this to false and configure the externalDatabase parameters
  enabled: true
  ## Disable MariaDB replication
  replication:
    enabled: false

  ## Create a database and a database user
  ## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#creating-a-database-user-on-first-run
  ##
  db:
    name: bitnami_ghost
    user: bn_ghost
    ##
    password: mypassword
  ##
  rootUser:
    password: mypassword
  ##
  master:
    persistence:
      enabled: true
      ## mariadb data Persistent Volume Storage Class
      ##
      storageClass: nfs-client
      accessMode: ReadWriteOnce
      size: 8Gi
## Kubernetes configuration
## For minikube, set this to NodePort, elsewhere use LoadBalancer
##
service:
  type: ClusterIP

  ## HTTP Port
  port: 80
  ##
  loadBalancerIP:

  ## nodePorts:
  ##   http: <to set explicitly, choose port between 30000-32767>
  nodePorts:
    http: ""

  ## Enable client source IP preservation
  ##
  externalTrafficPolicy: Cluster

  ## Service annotations. Evaluated as a template
  ##
  annotations: {}

## Pod Security Context
##
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001

## Enable persistence using Persistent Volume Claims
##
persistence:
  enabled: true
  ##
  storageClass: nfs-client
  accessMode: ReadWriteOnce
  size: 8Gi
  path: /bitnami

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  requests:
    memory: 512Mi
    cpu: 300m

## Configure the ingress resource that allows you to access the
## Ghost installation. Set up the URL
##
ingress:
  ## Set to true to enable ingress record generation
  enabled: false

  ## Set this to true in order to add the corresponding annotations for cert-manager
  certManager: false

  annotations: {}
  ## The list of hostnames to be covered with this ingress record.
  ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
  hosts:
    - name: ghost.local
      path: /

      ## Set this to true in order to enable TLS on the ingress record
      tls: false

      ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
      tlsSecret: ghost.local-tls

##
nodeSelector: {}

## Affinity for pod assignment (evaluated as a template)
affinity: {}
##
podAnnotations: {}

## Add sidecars to the pod
sidecars: []

## Add init containers to the pod
initContainers: []

## Array to add extra volumes
##
extraVolumes: []

## Array to add extra mounts (normally used with extraVolumes)
##
extraVolumeMounts: []

## An array to add extra env vars
extraEnvVars: []

## Name of a ConfigMap containing extra env vars
##
extraEnvVarsConfigMap:

## Name of a Secret containing extra env vars
##
extraEnvVarsSecret:

Of interest in the config are the ghostProtocol, ghostPort and ghostHost values. The example provides values that assume access will be over TLS (https). These values ensure that Ghost returns the correct URL paths to the end user. If you set ghostProtocol to https but leave ghostPort set to 80, the install will fail. Before clicking Install, note the bar at the bottom of the Lens window - it provides a field to enter the name for your deployment - a good idea for tracking your release. Here we'll use myghost. The name determines the Kubernetes service name.

The install will deploy a mariadb container as the database. It will also deploy two persistent volume claims that initialise two persistent volumes, one for Ghost and one for mariadb. If you decide to remove your Helm release while testing various configs, remember to delete the mariadb PV claim so the PV disappears too. It's not removed automatically, in case you wish to reinitialise using the existing db. The Ghost PVC is removed.
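
For example - the claim name here is hypothetical; mariadb StatefulSet claims usually follow the pattern data-<release>-mariadb-0, so list them first:

kubectl get pvc
kubectl delete pvc data-myghost-mariadb-0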

Depending on the performance of your hardware, the deployment will take a few moments to complete as Ghost and mariadb initialise and connect - you can check the pod logs with Lens to see what's happening.
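
If you prefer the terminal, the equivalent checks with kubectl (assuming the release was named myghost, so the deployment is also named myghost):

kubectl get pods -w                  # watch the Ghost and mariadb pods come up
kubectl logs -f deployment/myghost   # follow the Ghost container logs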

7. Once the deployment is complete, the next step is to create the HTTPProxy. Basically this connects Ghost to Envoy (the proxy with a LoadBalancer IP). To do this, use the terminal window in Lens. But first create an HTTPProxy config - here's an example:

apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: myghostproxy
  namespace: default
spec:
  routes:
    - conditions:
        - prefix: /
      services:
        - name: myghost
          port: 80
  virtualhost:
    fqdn: myghost.mydomain.com
    tls:
      secretName: myghost.local-prod-tls

Once you're happy with the HTTPProxy config, deploy it with:

kubectl apply -f /path/httpproxyconfig.yaml
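
Contour reports whether it has accepted the proxy in the resource status - look for valid:

kubectl get httpproxy myghostproxy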

8. Now it's time to create an Ingress that will invoke cert-manager to create a certificate request. Ingress config:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
    ingress.kubernetes.io/force-ssl-redirect: "true"
    kubernetes.io/tls-acme: "true"
  name: myghostprod
  namespace: default
spec:
  rules:
  - host: myghost.mydomain.com
    http:
      paths:
      - backend:
          serviceName: myghost
          servicePort: 80
  tls:
  - hosts:
    - myghost.mydomain.com
    secretName: myghost.local-prod-tls

Deploy with:

kubectl apply -f /path/ghostingress.yaml

Note in the ingress config that the cluster issuer is letsencrypt-staging. This will return a certificate, and once it's deployed you will be able to access Ghost on the Envoy LoadBalancer IP, but the browser will throw a security warning - the staging CA is not trusted - it's there as a method for testing that your system works end to end. When you're happy that the process works, redeploy the ingress with letsencrypt-staging replaced by letsencrypt-production. There is some documentation on the Contour site that provides more detail.
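
The two issuers are assumed to already exist as cert-manager ClusterIssuers. If your setup doesn't provide them, a minimal staging issuer looks roughly like this - the apiVersion depends on your cert-manager release (older versions use cert-manager.io/v1alpha2), and the email is a placeholder:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Let's Encrypt staging endpoint: untrusted certificates, generous rate limits
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: myname@mailserver.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: contour

You can follow issuance with kubectl get certificate - the Certificate created from the ingress annotation typically takes the name of the TLS secret (here myghost.local-prod-tls).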

9. Create a DNS entry for your Ghost host - myghost.mydomain.com - on your external DNS. In my example the address I use is the static IP of my internet router. I then forward ports 443 and 80 from the router to the Envoy LoadBalancer IP - you can check this IP using Lens.
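
From the terminal, the Envoy service's external IP can also be read with kubectl - Contour conventionally lives in the projectcontour namespace, though your install may differ:

kubectl get svc envoy -n projectcontour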

Further configuration of Ghost is provided at https://myghost.mydomain.com/ghost - have fun.

Network Details

Router: standard home router with broadband, wireless and an Ethernet switch. Provides NAT and virtual-server port forwarding.
Static IP: provided by my ISP. A static IP is preferable for stability, although a dynamic DNS entry can be configured with either the router or a script.
Local subnet: within the 192.168.x.x/24 private range.
DNS/DHCP: a Raspberry Pi hosting dnsmasq provides internal DNS and DHCP services.
Multi-port switch: to connect Ethernet hosts and devices.