26.23. krib

The following documentation is for krib content package at version v1.13.1-tip-42-5e060684d6a9a1201a792c7567e894ad45f56554.

26.24. KRIB (Kubernetes Rebar Integrated Bootstrapping)

License: KRIB is APLv2

This document provides information on how to use the Digital Rebar KRIB content add-on. Using this content enables the operator to install Kubernetes either in a Live Boot (immutable infrastructure pattern) mode, or installed to the local hard disk (traditional OS install mode).

KRIB uses the kubeadm cluster deployment methodology coupled with Digital Rebar enhancements to help proctor the cluster Master election and secrets management. With this content pack, you can install Kubernetes in a zero-touch manner.

KRIB also supports production, highly available (HA) deployments with multiple masters. To enable this configuration, we’ve chosen to manage the TLS certificates and etcd installation in the Workflow instead of using the kubeadm process.

This document assumes you have your Digital Rebar Provisioning endpoint fully configured, tested, and working. We assume that you are able to properly provision Machines in your environment as a base-level requirement for use of the KRIB content add-on. See _rs_krib for step by step instructions.

26.24.1. KRIB Video References

The following videos have been produced or presented by RackN related to the Digital Rebar KRIB solution.

26.24.2. Online Requirements

KRIB uses community kubeadm for installation. That process relies on internet connectivity to download containers and other components.

26.24.3. Immutable -vs- Local Install Mode

The two primary deployment patterns that the Digital Rebar KRIB content pack supports are:

  1. Live Boot (immutable infrastructure pattern - references [1] [2])
  2. Local Install (standard install-to-disk pattern)

The Live Boot mode uses an in-memory Linux image based on the Digital Rebar Sledgehammer (CentOS based) image. After each reboot of the Machine, the node is reloaded with the in-memory live boot image. This enforces the concept of immutable infrastructure - every time a node is booted, deployed, or needs updating, simply reload the latest Live Boot image with appropriate fixes, patches, enhancements, etc.

The Local Install mode mimics the traditional “install-to-my-disk” method that most people are familiar with.

26.24.4. KRIB Basics

KRIB is a Content Pack addition to Digital Rebar Provision. It uses the Multi-Machine Cluster Pattern, which provides atomic guarantees. This allows Kubernetes master(s) to be dynamically elected, forcing all other nodes to wait until kubeadm on the elected master generates an installation token for the rest of the nodes. Once the Kubernetes master is bootstrapped, the Digital Rebar system facilitates the security token hand-off to the rest of the cluster so they can join without any operator intervention.

26.24.5. Elected -vs- Specified Master

By default, the KRIB process dynamically elects a Master for the Kubernetes cluster. The master simply wins the race-to-master election process, and the rest of the cluster coalesces around the elected master.

If you wish to designate specific machines as the masters, you can do so by setting a Param in the cluster Profile that identifies the Machines that will become the masters. To do so, set the krib/cluster-masters Param to a JSON structure with the Name, UUID, and IP of each machine that will become a master. You may add this Param to the Profile as follows:

# JSON reference to add to the Profile Params section
"krib/cluster-masters": [{"Name":"<NAME>", "Uuid":"<UUID>", "Address": "<ADDRESS>"}]

# or drpcli command line option
drpcli profiles set my-k8s-cluster param krib/cluster-masters to <JSON>

The Kubernetes Master will be built on the Machine specified by the <UUID> value.

Note

This MUST be in the cluster profile because all machines in the cluster must be able to see this parameter.

26.24.6. Install KRIB

KRIB is a Content Pack and is installed in the same manner as any other content. We need the krib.json content pack to fully support KRIB and to install the helper utility contents for stage changes.

Please review _rs_krib for step by step instructions.

26.24.6.1. CLI Install

KRIB uses the Certs plugin to build TLS certificates; you can install the plugin from the RackN library:

# download cert plugin provider (installs the plugin automatically)
curl -o certs https://s3-us-west-2.amazonaws.com/rebar-catalog/certs/v2.4.0-0-02301d35f9f664d6c81d904c92a9c81d3fd41d2c/amd64/linux/certs
# install plugin provider
drpcli plugin_providers upload certs from certs
# verify it worked - should return true
drpcli plugins show certs | jq .Available

Using the Command Line (drpcli) utility configured to your endpoint, use this process:

# Get code
git clone https://github.com/digitalrebar/provision-content
cd provision-content/krib

# KRIB content install
drpcli contents bundle krib.yaml
drpcli contents upload krib.yaml

If KRIB is already installed, you should replace the upload with drpcli contents update krib krib.yaml

26.24.6.2. UX Install

In the UX, follow this process:

  1. Open your DRP Endpoint: (eg. https://127.0.0.1:8092/ )
  2. Authenticate to your Endpoint
  3. Login with your `RackN Portal Login` account (upper right)
  4. Go to the left panel “Content Packages” menu
  5. Select Kubernetes (KRIB: Kubernetes Rebar Immutable Bootstrapping) from the right side panel (you may need to select Browse for more Content or use the Catalog button)
  6. Select the Transfer button for both content packs to add the content to your local Digital Rebar endpoint

26.24.7. Configuring KRIB

The basic outline for configuring KRIB follows these steps:

  1. create a Profile to hold the Params for the KRIB configuration (you can also clone the krib-example profile)
  2. add a Param of name krib/cluster-profile to the Profile you created
  3. add a Param of name etcd/cluster-profile to the Profile you created
  4. apply the Profile to the Machines you are going to add to the KRIB cluster
  5. change the Workflow on the Machines to krib-live-cluster for memory booting or krib-install-cluster to install to CentOS. You may clone these reference workflows to build custom actions.
  6. installation will start as soon as the Workflow has been set.

There are many configuration options available; review the krib/* and etcd/* parameters to learn more.
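
A quick way to enumerate those options (a sketch, assuming drpcli is configured against your endpoint and jq is installed):

# list all krib/* and etcd/* Param names known to the endpoint
drpcli params list | jq -r '.[].Name' | grep -E '^(krib|etcd)/'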

26.24.7.1. Configure with Terraform

Please review integrations/krib for example Terraform plans.

26.24.7.2. Configure with the CLI

The configuration of the cluster includes several reference Workflows that can be used for installation. The Workflow you choose determines whether the cluster is built via install-to-local-disk or via the immutable pattern (live boot, in-memory boot process). Outside of the Workflow differences, all remaining configuration elements are the same.

You must create a Profile from YAML (or JSON if you prefer) with the required Params. Modify the Name or other fields as appropriate - be sure you rename all subsequent fields appropriately.

echo '
---
Name: "my-k8s-cluster"
Description: "Kubernetes install-to-local-disk"
Params:
  krib/cluster-profile: "my-k8s-cluster"
  etcd/cluster-profile: "my-k8s-cluster"
Meta:
  color: "purple"
  icon: "ship"
  title: "My Installed Kubernetes Cluster"
' > /tmp/krib-config.yaml

drpcli profiles create - < /tmp/krib-config.yaml

Note

The following commands should be applied to all of the Machines you wish to enroll in your KRIB cluster. Each Machine needs to be referenced by its Digital Rebar Machine UUID. This example shows how to collect the UUIDs; you will then need to assign them to the UUIDS variable. We re-use this variable throughout the documentation below within the shell function named my_machines. We also show the underlying drpcli command that the helper function runs for you, for your reference.

Create our helper shell function my_machines:

function my_machines() { for U in $UUIDS; do set -x; drpcli machines $1 $U $2; set +x; done; }

List your Machines to determine which to apply the Profile to:

drpcli machines list | jq -r '.[] | "\(.Name) : \(.Uuid)"'

IF YOU WANT to make ALL Machines in your endpoint use KRIB, do:

export UUIDS=`drpcli machines list | jq -r '.[].Uuid'`

Otherwise - individually add them to the UUIDS variable, like:

export UUIDS="UUID_1 UUID_2 ... UUID_n"

Add the Profile to your machines that will be enrolled in the cluster

my_machines addprofile my-k8s-cluster

# runs example command:
# drpcli machines addprofile <UUID> my-k8s-cluster

Change the Workflow on the Machines to initiate the cluster build. YOU MUST select the correct Workflow, dependent on your install type (Immutable/Live Boot mode or install-to-local-disk mode): for Live Boot mode, use the krib-live-cluster Workflow, and for install-to-local-disk mode, use the krib-install-cluster Workflow.

# for Live Boot/Immutable Kubernetes mode
my_machines workflow krib-live-cluster

# for install-to-local-disk mode:
my_machines workflow krib-install-cluster

# runs example command:
# drpcli machines workflow <UUID> krib-live-cluster
# or
# drpcli machines workflow <UUID> krib-install-cluster

26.24.7.3. Configure with the UX

The below example outlines the process for the UX.

RackN assumes the use of CentOS 7 BootEnv during this process. However, it should theoretically work on most of the BootEnvs. We have not tested it, and your mileage will absolutely vary…

  1. create a Profile for the Kubernetes Cluster (e.g. my-k8s-cluster) or clone the krib-example profile.
  2. add a Param to that Profile: krib/cluster-profile = my-k8s-cluster
  3. add a Param to that Profile: etcd/cluster-profile = my-k8s-cluster
  4. Add the Profile (eg my-k8s-cluster) to all the machines you want in the cluster.
  5. Change the Workflow on all the machines to krib-install-cluster for install-to-local-disk, or to krib-live-cluster for the Live Boot/Immutable Kubernetes mode.

Then wait for them to complete. You can watch the Stage transitions via the Bulk Actions panel (which requires RackN Portal authentication to view).

Note

The reason the Immutable Kubernetes/Live Boot mode does not need a reboot is because they are already running Sledgehammer and will start installing upon the stage change.

26.24.8. Operating KRIB

26.24.8.1. Who is my Master?

If you have not specified who the Kubernetes Master should be, and the master was chosen by election, you will need to determine which Machine is the cluster Master:
# returns the list of Kubernetes cluster masters (Name, Uuid, Address)
drpcli profiles show my-k8s-cluster | jq -r '.Params."krib/cluster-masters"'

26.24.8.2. Use kubectl - on Master

You can log in to the Master node as identified above, and execute kubectl commands as follows:
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes

26.24.8.3. Use kubectl - from anywhere

Once the Kubernetes cluster build has been completed, you may use the kubectl command to both verify and manage the cluster. You will need to download the conf file with the appropriate tokens and information to connect to and authenticate your kubectl connections. Below is an example of doing this:
# get the Admin configuration and tokens
drpcli profiles get my-k8s-cluster param krib/cluster-admin-conf > admin.conf

# set our KUBECONFIG variable and get nodes information
export KUBECONFIG=`pwd`/admin.conf
kubectl get nodes

26.24.8.4. Advanced Stages - Helm and Sonobuoy

KRIB includes stages for advanced Kubernetes operating support.

The reference workflows already install Helm using the krib-helm stage. To leverage this utility simply define the required JSON syntax for your charts as shown in krib_helm.

Sonobuoy can be used to validate that the cluster conforms to community specification. Adding the krib-sonobuoy stage will start a test run. It can be rerun to collect the results or configured to wait for them. Storing test results in the files path requires setting the unsafe/password parameter and is undesirable for production clusters.
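
For example, to make the stage block until the results are ready, you can raise the wait timeout (a sketch; the 75 minute value is illustrative, and the default of -1 means the stage does not wait):

# wait up to 75 minutes for Sonobuoy results
drpcli profiles set my-k8s-cluster param sonobuoy/wait-mins to 75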

26.24.8.5. Ingress/Egress Traffic, Dashboard Access, Istio

The Kubernetes dashboard is enabled within a default KRIB built cluster. However, no Ingress traffic rules are set up. As such, you must access services from external connections by making changes to Kubernetes, or via the Kubernetes Dashboard via Proxy.

These are all issues relating to managing, operating, and running a Kubernetes cluster, and not restrictions that are imposed by Digital Rebar Provision. Please see the appropriate Kubernetes documentation on questions regarding operating, running, and administering Kubernetes (https://kubernetes.io/docs/home/).

For Istio via Helm, please consult krib_helm for a reference install.

26.24.8.6. Kubernetes Dashboard via Proxy

You can get the admin-user security token with the following command:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Now copy the token from the token field printed on-screen so you can paste it into the Enter token field of the dashboard login screen.

Once you have obtained the admin.conf configuration file and security tokens, you may use kubectl in Proxy mode to the Master. Simply open a separate terminal/console session to dedicate to the Proxy connection, and do:
kubectl proxy

Now, in a local web browser (on the same machine you executed the Proxy command) open the following URL:
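For a dashboard deployed into the kube-system namespace (assumed here to match the default KRIB dashboard install), the standard kubectl proxy URL takes this form:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/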

26.24.8.7. MetalLB Load Balancer

If your cluster is running on bare metal you will most likely need a LoadBalancer provider. You can easily add this to your cluster by adding the krib-metallb stage after the krib-config stage in your workflow. Currently only L2 mode is supported. You will need to set the metallb/l2-ip-range param in your profile with the range of IPs you wish to use. This IP range must not be within the configured DHCP scope. See the MetalLB docs for more information (https://metallb.universe.tf/tutorial/layer2/).
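
For example (a sketch; the address range is illustrative and must be adjusted to your network):

# assign an L2 address pool to MetalLB - must be outside the DHCP scope
drpcli profiles set my-k8s-cluster param metallb/l2-ip-range to 192.168.1.240-192.168.1.250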

26.24.8.8. NGINX Ingress

You can add nginx-ingress to your cluster by adding the krib-ingress-nginx stage to your workflow. This stage requires helm and tiller to be installed, so it should come after the krib-helm stage in your workflow.

This stage also requires a cloud provider LoadBalancer service; on bare metal you can add the krib-metallb stage before this stage in your workflow.

This stage includes support for cert-manager if your profile is properly configured. See example-cert-manager profile.

26.24.8.9. Kubernetes Dashboard via NGINX Ingress

If your workflow includes the NGINX Ingress stage, the kubernetes dashboard will be accessible via https://k8s-db.LOADBALANCER_IP.xip.io. The access URL and cert-manager tls can also be configured by setting the appropriate params in your profile. See example-k8s-db-ingress profile.

Please consult Kubernetes Dashboard via Proxy for information on getting the login token.

26.24.8.10. Rook Ceph Manager Dashboard

If you install rook via the krib-helm chart template and have the krib-ingress-nginx stage in your workflow, an ingress will be created so you can access the Ceph Manager Dashboard at https://rook-db.LOADBALANCER_IP.xip.io. The access URL and cert-manager tls can also be configured by setting the appropriate params in your profile. See example-rook-db-ingress profile.

The default username is admin and you can get the generated password with the following command:
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o yaml | grep "password:" | awk '{print $2}' | base64 --decode

26.24.9. Multiple Clusters

It is absolutely possible to build multiple Kubernetes KRIB clusters with this process. The only difference is each cluster should have a unique name and profile assigned to it. A given Machine may only participate in a single Kubernetes cluster at any one time. You can install and operate both Live Boot/Immutable and install-to-disk cluster types in the same DRP Endpoint.
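
A minimal sketch of two independent clusters (profile names are illustrative):

# each cluster gets its own Profile; the Params point back at that Profile
drpcli profiles create '{"Name":"k8s-east","Params":{"krib/cluster-profile":"k8s-east","etcd/cluster-profile":"k8s-east"}}'
drpcli profiles create '{"Name":"k8s-west","Params":{"krib/cluster-profile":"k8s-west","etcd/cluster-profile":"k8s-west"}}'
# then apply each Profile to a disjoint set of Machines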

26.24.11. params

The content package provides the following params.

26.24.11.1. krib/dashboard-enabled

Boolean value that enables Kubernetes dashboard install

26.24.11.2. metallb/limits-cpu

This should be set to match the cpu resource limits for MetalLB

Default: 100m

26.24.11.3. certmanager/acme-challenge-dns01-provider

cert-manager DNS01 Challenge Provider Name. Only route53, cloudflare, akamai, and rfc2136 are currently supported. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#supported-dns01-providers

26.24.11.4. etcd/peer-port

Allows operators to set the port for the cluster peers

26.24.11.5. krib/cluster-api-vip-port

The VIP API port to use for multi-master clusters. Each master will bind to ‘krib/cluster-api-port’ (6443 by default), but HA services for the API will be served by this port number.

Defaults to ‘8443’.

26.24.11.6. krib/cluster-bootstrap-token

Defines the bootstrap token to use. Default is ‘fedcba.fedcba9876543210’.

26.24.11.7. krib/cluster-bootstrap-ttl

How long BootStrap tokens for the cluster should live. Default is ‘24h0m0s’.

Must use a format similar to the default.

26.24.11.8. docker/daemon

Provide a custom /etc/docker/daemon.json. See https://docs.docker.com/engine/reference/commandline/dockerd/ For example:

{"insecure-registries":["ci-repo.englab.juniper.net:5010"]}

26.24.11.9. etcd/server-count

Allows operators to set the number of machines required for the etcd cluster. Machines will be automatically added until the number is met. NOTE: should be an odd number
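
For example (a sketch, assuming the cluster Profile from the configuration section above):

# build a three-node etcd cluster
drpcli profiles set my-k8s-cluster param etcd/server-count to 3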

26.24.11.10. krib/cluster-service-subnet

Allows operators to specify the service subnet CIDR that will be used during the ‘kubeadm init’ process of the cluster creation.

Defaults to “10.96.0.0/12”.

26.24.11.11. kubectl/working-dir

Allows operators to change the kubectl working directory

26.24.11.12. certmanager/email

cert-manager ClusterIssuer configuration See https://cert-manager.readthedocs.io/en/latest/reference/issuers.html#issuers

26.24.11.13. certmanager/fastdns-service-consumer-domain

DNS01 Challenge Provider Configuration data See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#akamai-fastdns

26.24.11.14. krib/packages-to-prep

List of packages to install in preparation for the KRIB install process. Designed for use in processes where pre-prepping packages will accelerate the KRIB install. More specifically, in Sledgehammer Discover for Live Boot cluster installs, the ‘docker’ install requires several minutes to run through selinux context changes.

Simple space separated String list.
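
For example (a sketch; the package names are illustrative and must already exist in your repositories):

# pre-install packages during Sledgehammer prep
drpcli profiles set my-k8s-cluster param krib/packages-to-prep to 'curl jq'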

26.24.11.15. certmanager/rfc2136-nameserver

DNS01 Challenge Provider Configuration data See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#rfc2136

26.24.11.16. certmanager/rfc2136-tsig-key

DNS01 Challenge Provider Configuration data See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#rfc2136

26.24.11.17. certmanager/rfc2136-tsig-key-name

DNS01 Challenge Provider Configuration data See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#rfc2136

26.24.11.18. etcd/version

Allows operators to determine the version of etcd to install. Note: changes should be coordinated with the KRIB Kubernetes version.

26.24.11.19. krib/operate-on-node

This Param specifies a Node in a Kubernetes cluster that should be operated on. Currently supported operations are ‘drain’ and ‘uncordon’.

The drain operation will by default maintain the contracts specified by PodDisruptionBudgets.

Options can be specified to override the default actions by use of the ‘krib/operate-options’ Param. This Param will be passed directly to the ‘kubectl’ command that has been specified by the ‘krib/operate-action’ Param setting (defaults to ‘drain’ operation if nothing specified).

The Node name must be a valid cluster member name, which by default in a KRIB built cluster is the fully qualified value of the Machine object ‘Name’ field.
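
For example, to remotely drain a specific node (a sketch; the UUID and node name are illustrative):

# operate against a named node from elsewhere in the cluster
drpcli machines set <UUID> param krib/operate-action to drain
drpcli machines set <UUID> param krib/operate-on-node to node1.example.com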

26.24.11.20. certmanager/route53-access-key

DNS01 Challenge Provider Configuration data See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#amazon-route53

26.24.11.21. etcd/client-ca-pw

Allows operators to set the CA password for the client certificate. Requires the Cert Plugin.

26.24.11.22. etcd/servers-done

Param is set (output) by the etcd cluster building process

26.24.11.23. krib/cluster-masters

List of the machine(s) assigned as cluster master(s). If not set, the automation will elect leaders and populate the list automatically.

26.24.11.24. metallb/limits-memory

This should be set to match the memory resource limits for MetalLB

Default: 100Mi

26.24.11.25. certmanager/crds

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the cert-manager CRDs.

26.24.11.26. certmanager/fastdns-client-token

DNS01 Challenge Provider Configuration data See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#akamai-fastdns

26.24.11.27. helm/charts

26.24.12. Install Helm Charts

Array of charts to install via Helm. The list will be followed in order.

Work is idempotent: No action is taken if charts are already installed.

Fields: chart and name are required.

Options exist to inject additional control flags into helm install instructions:

  • name: name of the chart (required)
  • chart: reference of the chart (required) - may rely on repo, path or other helm install [chart] standard
  • namespace: kubernetes namespace to use for chart (defaults to none)
  • params: map of parameters to include in the helm install (optional). Keys and values are converted to --[key] [value] in the install instruction.
  • sleep: time to wait after install (defaults to 10)
  • wait: wait for name (and namespace if provided) to be running before next action
  • prekubectl (optional) array of kubectl [request] commands to run before the helm install
  • postkubectl (optional) array of kubectl [request] commands to run after the helm install
  • targz (optional) provides a location for a tar.gz file containing charts to install. Path is relative.
  • templates (optional) map of DRP templates keyed to the desired names (must be uploaded!) to render before doing other work.
  • repos (optional) adds the requested repos to helm using helm repo add before installing the chart. Syntax is [repo name]: [repo path].
  • templatesbefore (optional) expands the provided template files inline before the helm install happens.
  • templatesafter (optional) expands the provided template files inline after the helm install happens

example:

[
  {
    "chart": "stable/mysql",
    "name": "mysql"
  }, {
    "chart": "istio-1.0.1/install/kubernetes/helm/istio",
    "name": "istio",
    "targz": "https://github.com/istio/istio/releases/download/1.0.1/istio-1.0.1-linux.tar.gz",
    "namespace": "istio-system",
    "params": {
      "set": "sidecarInjectorWebhook.enabled=true"
    },
    "sleep": 10,
    "wait": true,
    "kubectlbefore": ["get nodes"],
    "kubectlafter": ["get nodes"]
  }, {
    "chart": "rook-stable/rook-ceph",
    "kubectlafter": [
      "apply -f cluster.yaml"
    ],
    "name": "rook-ceph",
    "namespace": "rook-ceph-system",
    "repos": {
      "rook-stable": "https://charts.rook.io/stable"
    },
    "templatesafter": [{
      "name": "helm-rook.after.sh.tmpl"
      "nodes": "leader",
    }],
    "templatesbefore": [{
      "name": "helm-rook.before.sh.tmpl",
      "nodes": "all",
      "runIfInstalled": true
    }],
    "templates": {
      "cluster": "helm-rook.cfg.tmpl"
    },
    "wait": true
 }
]

26.24.12.1. helm/version

Allows operators to determine the version of Helm to install. Note: changes should be coordinated with the KRIB Kubernetes version.

26.24.12.2. krib/operate-options

This Param can be used to pass additional flag options to the ‘kubectl’ operation that is specified by the ‘krib/operate-action’ Param. By default, the ‘drain’ operation will be called if no action is defined on the Machine.

This Param provides some customization to how the operate operation functions.

For ‘kubectl drain’ documentation, see the following URL:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain
For ‘kubectl uncordon’ doc, see the URL:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#uncordon

NOTE: the following flags are set as default options in the Template for drain operations:

--ignore-daemonsets --delete-local-data

For ‘drain’ operations, if you override these defaults, you MOST LIKELY need to re-specify them for the drain operation to be successful. You have been warned.

No defaults provided for ‘uncordon’ operations (you shouldn’t need any).
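
For example, to keep the defaults while also forcing eviction of unmanaged pods (a sketch; --force is a standard ‘kubectl drain’ flag, and the UUID is illustrative):

# note: the default flags are re-specified, per the warning above
drpcli machines set <UUID> param krib/operate-options to '--ignore-daemonsets --delete-local-data --force'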

26.24.12.3. krib/cluster-pod-subnet

Allows operators to specify the podSubnet that will be used by CoreDNS during the ‘kubeadm init’ process of the cluster creation.

26.24.12.4. krib/kubeadm-cfg

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the ‘kubeadm.cfg’ used during the ‘kubeadm init’ process.

The default behavior is to use the Parameterized ‘kubeadm.cfg’ from the template named ‘krib-kubeadm.cfg.tmpl’. This config file is used in the ‘krib-config.sh.tmpl’ which is the main template script that drives the ‘kubeadm’ cluster init and configuration.

26.24.12.5. certmanager/cloudflare-api-key

DNS01 Challenge Provider Configuration data See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#cloudflare

26.24.12.6. etcd/servers

Param is set (output) by the etcd cluster building process

26.24.12.7. krib/apiserver-extra-args

Array of apiServerExtraArgs that you want added to the kubeadm configuration.

26.24.12.8. krib/cluster-cni-version

Allows operators to specify the version of the Kubernetes CNI utilities to install.

26.24.12.9. krib/cluster-name

Allows operators to set the Kubernetes cluster name

26.24.12.10. etcd/cluster-profile

Part of the Digital Rebar Cluster pattern, this parameter is used to identify the machines used in the etcd cluster. This parameter is REQUIRED for KRIB and etcd cluster construction.

26.24.12.11. ingress/rook-dashboard-hostname

Hostname to use for the Rook Ceph Manager Dashboard. You will need to manually configure your DNS to point to the ingress IP address (ingress/ip-address).

If no hostname is provided, a rook-db.$INGRESSIP.xip.io hostname will be assigned.

26.24.12.12. krib/apiserver-extra-SANs

List of additional SANs (IP addresses or FQDNs) used in the certificate used for the API Server

26.24.12.13. certmanager/fastdns-client-secret

DNS01 Challenge Provider Configuration data See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#akamai-fastdns

26.24.12.14. certmanager/rfc2136-tsig-alg

DNS01 Challenge Provider Configuration data See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#rfc2136

26.24.12.15. certmanager/route53-region

DNS01 Challenge Provider Configuration data See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#amazon-route53

26.24.12.16. etcd/client-ca-name

Allows operators to set the CA name for the client certificate. Requires the Cert Plugin.

26.24.12.17. etcd/client-port

Allows operators to set the port used by etcd clients

26.24.12.18. krib/cluster-cri-socket

This Param defines which Socket to use for the Container Runtime Interface. By default KRIB content uses Docker as the CRI, however our goal is to support multiple container CRI formats.

26.24.12.19. krib/cluster-dns

Allows operators to specify the DNS address for the cluster name resolution services.

Set by default to “10.96.0.10”.

WARNING: This IP Address must be in the same range as the addresses specified by “krib/cluster-service-subnet”.

26.24.12.20. metallb/l2-ip-range

This should be set to match the IP range you have allocated to L2 MetalLB, e.g. 192.168.1.240-192.168.1.250.

26.24.12.21. sonobuoy/wait-mins

Default is -1 so that stages do not wait for completion.

Typical runs may take 60 minutes.

If the value is less than 0, the task does not wait and assumes you will run it again to retrieve the results. The task is idempotent, so you can re-start a run after you have started one to check on the results.

26.24.12.22. ingress/longhorn-dashboard-hostname

Hostname to use for the Rancher Longhorn Dashboard. You will need to manually configure your DNS to point to the ingress IP address (ingress/ip-address).

If no hostname is provided, a longhorn-db.$INGRESSIP.xip.io hostname will be assigned.

26.24.12.23. krib/cluster-crictl-version

Allows operators to specify the version of the Kubernetes CRICTL utility to install.

26.24.12.24. krib/networking-provider

This Param can be used to specify either ‘flannel’, ‘calico’, or ‘weave’ network providers for the Kubernetes cluster. This is completed using the provider specific YAML definition file.

The only supported providers are flannel, calico, and weave.

The default is ‘flannel’.
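
For example (a sketch, using the cluster Profile from the configuration section):

# switch the cluster from the flannel default to calico
drpcli profiles set my-k8s-cluster param krib/networking-provider to calico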

26.24.12.25. krib/cluster-master-vip

For High Availability (HA) configurations, a floating IP is required by the load balancer. This should be an available IP in the same subnet as the master nodes and not in the DHCP range. If using MetalLB, the IP should not be in the configured metallb/l2-ip-range.
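
For example (a sketch; the address is illustrative and must be an unused IP in the masters’ subnet):

# floating VIP for the HA API load balancer
drpcli profiles set my-k8s-cluster param krib/cluster-master-vip to 192.168.1.235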

26.24.12.26. krib/labels

Used for ad hoc node specification; labels should be set.

NOTES:

  • Use krib/label-env to set the env label!
  • Use inventory/data to set physical characteristics

26.24.12.27. provider/calico-config

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the Calico network provider. If Calico is not installed, this Param will have no effect on the cluster.

26.24.12.28. certmanager/fastdns-access-token

DNS01 Challenge Provider Configuration data See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#akamai-fastdns

26.24.12.29. etcd/name

Allows operators to set a name for the etcd cluster

26.24.12.30. krib/cluster-image-repository

Allows operators to specify the location to pull images from for Kubernetes. Defaults to ‘k8s.gcr.io’.

26.24.12.31. krib/cluster-master-certs

Requires Cert Plugin

26.24.12.32. provider/flannel-config

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the Flannel network provider. If Flannel is not installed, this Param will have no effect on the cluster.

26.24.12.33. certmanager/cloudflare-email

DNS01 Challenge Provider Configuration data See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#cloudflare

26.24.12.34. krib/cluster-domain

Defines the cluster domain for kubelets to operate in by default. Default is ‘cluster.local’.

26.24.12.35. sonobuoy/binary

Downloads a tgz with the compiled sonobuoy executable. The full path is included so that operators can choose the correct version and architecture.

26.24.12.36. docker/version

Docker version to use for Kubernetes.

26.24.12.37. etcd/peer-ca-name

Allows operators to set the CA name for the peer certificate. Requires the Cert Plugin.

26.24.12.38. etcd/peer-ca-pw

Allows operators to set the CA password for the peer certificate. If missing, it will be generated. Requires the Cert Plugin.

26.24.12.39. krib/cluster-masters-untainted

For development clusters, allows nodes to run on the same machines as the Kubernetes masters. NOTE: if you have only master nodes, the helm/tiller install will fail if this is set to false. RECOMMENDED: set to false for production clusters and have non-master nodes in the cluster.

26.24.12.40. rook/data-dir-host-path

This should be set to match the desired location for Ceph storage.

Default: /mnt/hdd/rook

In future versions, this should be calculated or inferred based on the system inventory.

26.24.12.41. etcd/server-ca-name

Allows operators to set the CA name for the server certificate. Requires the Cert Plugin.

26.24.12.42. etcd/server-ca-pw

Allows operators to set the CA password for the server certificate. Requires the Cert Plugin.

26.24.12.43. krib/cluster-kubeadm-cfg

Once the cluster’s initial master has completed startup, the KRIB config task will record the bootstrap configuration used by ‘kubeadm init’. This provides a reference going forward to the cluster configuration used when it was created.

26.24.12.44. krib/cluster-service-dns-domain

Allows operators to specify the Service DNS Domain that will be used by CoreDNS during the ‘kubeadm init’ process of the cluster creation.

By default we do not override the setting from kubeadm default behavior.

26.24.12.45. krib/ip

For configurations where kubelet does not correctly detect the IP over which nodes should communicate.

If unset, will default to the Machine Address.

26.24.12.46. docker/working-dir

Allows operators to change the Docker working directory

26.24.12.47. etcd/ip

For configurations where the etcd cluster should communicate over a network other than the one that the machine was booted from.

If unset, will default to the Machine Address.

26.24.12.48. ingress/ip-address

IP Address assigned to ingress service via LoadBalancer

26.24.12.49. krib/cluster-api-port

The API bindPort number for the cluster masters. Defaults to ‘6443’.

26.24.12.50. krib/longhorn-config

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the Rancher Longhorn install.

26.24.12.51. krib/label-env

Used for node specification; labels should be set.

26.24.12.52. metallb/monitoring-port

This should be set to match the port you want to use for MetalLB Prometheus monitoring

Default: 7472

26.24.12.53. ingress/k8s-dashboard-hostname

Hostname to use for the kubernetes dashboard. You will need to manually configure your DNS to point to the ingress IP address (ingress/ip-address).

If no hostname is provided, a k8s-db.$INGRESSIP.xip.io hostname will be assigned.

26.24.12.54. krib/cluster-is-production

By default the KRIB cluster mode will be set to dev/test/lab (whatever you wanna call it). If you set this Param to true, then the cluster will be tagged as in Production use.

If the cluster is in Production mode, then the state of the various Params for new clusters will be preserved, preventing the cluster from being overwritten.

If NOT in Production mode, the following Params will be wiped clean before building the cluster. This is essentially a destructive pattern.

  • krib/cluster-admin-conf - the admin.conf file Param will be wiped
  • krib/cluster-join - the Join token will be destroyed

This allows for “fast reuse” patterns with building KRIB clusters, while also allowing a cluster to be marked Production and require manual intervention to wipe the Params to rebuild the cluster.
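
For example, to protect a completed cluster from the destructive rebuild pattern (a sketch, using the example cluster Profile):

# mark the cluster as Production so its Params are preserved
drpcli profiles set my-k8s-cluster param krib/cluster-is-production to true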

26.24.12.55. krib/cluster-join-command

Param is set (output) by the cluster building process

26.24.12.56. krib/cluster-masters-on-etcds

For development clusters, allows running etcd on the same machines as the Kubernetes masters RECOMMENDED: set to false for production clusters

26.24.12.57. krib/dashboard-config

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the Kubernetes Dashboard.

26.24.12.58. krib/operate-action

This Param can be used to ‘drain’, ‘delete’, ‘cordon’, or ‘uncordon’ a node in a KRIB built Kubernetes cluster. If this parameter is not defined on the Machine, the default action will be to ‘drain’ the node.

Each action can be passed custom arguments via use of the ‘krib/operate-options’ Param.

26.24.12.59. krib/cluster-admin-conf

Param is set (output) by the cluster building process

26.24.12.60. krib/cluster-kubernetes-version

Allows operators to specify the version of Kubernetes containers to pull from the ‘krib/cluster-image-repository’.

26.24.12.61. krib/cluster-master-count

Allows operators to set the number of machines required for the Kubernetes cluster. Machines will be automatically added until the number is met. NOTE: should be an odd number

26.24.12.62. krib/cluster-profile

Part of the Digital Rebar Cluster pattern, this parameter is used to identify the machines used in the Kubernetes cluster. This parameter is REQUIRED for KRIB and etcd cluster construction.

26.24.12.63. krib/metallb-config

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for MetalLB install.

26.24.13. stages

The content package provides the following stages.

26.24.13.1. krib-operate-uncordon

This stage runs an Uncordon operation on a given KRIB built Kubernetes node, returning a Node that has previously been drained back to service in the Kubernetes cluster. It uses the ‘krib-operate-uncordon’ Profile.

In addition - you may set the following Params on a Machine object to override the default behaviors of this stage:

  • krib/operate-action - action to take (drain or uncordon)
  • krib/operate-on-node - a Kubernetes node name to operate on
  • krib/operate-options - command line arguments to pass to the ‘kubectl’ command for the action

If the ‘krib/operate-on-node’ Param is empty, the node that is currently running the Stage will be operated on. Otherwise, specifying an alternate Node allows remotely uncordoning that node.

Default options are ‘’ (empty) for the uncordon operation.

26.24.13.2. krib-pkg-prep

Simple helper stage to install prereq packages prior to doing the kubernetes package installation. This just helps us get a Live Boot set of hosts (eg Sledgehammer Discovered) prepped a little faster with packages in some use cases.

26.24.13.3. krib-set-time

Helper stage to set time on the machine - DEV

26.24.13.4. krib-sonobuoy

Installs and runs Sonobuoy after a cluster has been constructed. This stage is idempotent and can be run multiple times. The purpose is to ensure that the KRIB cluster is conformant with standards.

Credentials are required so that the results of the run can be pushed back to DRP files.

Roadmap items:

  • eliminate need for DRPCLI credentials
  • make “am I running” detection smarter

26.24.13.5. krib-operate

This stage runs an Operation (drain|uncordon) on a given KRIB built Kubernetes node. You must specify the action you want taken via the ‘krib/operate-action’ Param. If nothing is specified, the default action will be to ‘drain’ the node.

In addition - you may set the following Params to alter the behavior of this stage:

  • krib/operate-action - action to take (drain or uncordon)
  • krib/operate-on-node - a Kubernetes node name to operate on
  • krib/operate-options - command line arguments to pass to the ‘kubectl’ command for the action

DRAIN NOTES: this Stage does a few things that MAY BE VERY BAD !!

  1. service pods are ignored for the drain operation
  2. --delete-local-data is used to evict pods using local storage

Default options are ‘--ignore-daemonsets --delete-local-data’ for the drain operation. If you override these values (by setting ‘krib/operate-options’) you MAY NEED to re-specify these values; otherwise, the Node will NOT be drained properly.

These options may mean your data might be nuked.

UNCORDON NODES: typically does not require additional options

26.24.13.6. krib-contrail

Installs and runs Contrail kubectl install

CURRENTLY CENTOS ONLY see: https://github.com/Juniper/contrail-kubernetes-docs/blob/master/install/kubernetes/standalone-kubernetes-centos.md

26.24.13.7. krib-kubevirt

Installs KubeVirt.io using the chosen release from the cluster leader. This stage is idempotent and can be run multiple times. This allows operators to create workflows with multiple instances of this stage.

Due to yaml and container downloads, this stage requires internet access.

26.24.13.8. krib-helm-init

This stage is idempotent and can be run multiple times. This allows operators to create workflows with multiple instances of this stage. Due to helm downloads, this stage requires internet access.

This stage also creates a tiller service account. For advanced security, this configuration may not be desirable.

26.24.13.9. krib-metallb

Installs and runs MetalLB kubectl install

see: https://metallb.netlify.com/tutorial/layer2/

26.24.13.10. krib-operate-drain

This stage runs a Drain operation on a given KRIB built Kubernetes node. It uses the ‘krib-operate-drain’ Profile.

In addition - you may set the following Params on a Machine object to override the default behaviors of this stage:

  • krib/operate-action - action to take (drain or uncordon)
  • krib/operate-on-node - a Kubernetes node name to operate on
  • krib/operate-options - command line arguments to pass to the ‘kubectl’ command for the action

If the ‘krib/operate-on-node’ Param is empty, the node that is currently running the Stage will be operated on. Otherwise, specifying an alternate Node allows remotely draining that node.

DRAIN NOTES: this Stage does a few things that MAY BE VERY BAD !!

  1. service pods are ignored for the drain operation
  2. --delete-local-data is used to evict pods using local storage

Default options are ‘--ignore-daemonsets --delete-local-data’ for the drain operation. If you override these values (by setting ‘krib/operate-options’) you MAY NEED to re-specify these values; otherwise, the Node will NOT be drained properly.

These options may mean your data might be nuked.

26.24.13.11. krib-helm

Installs and runs Helm Charts after a cluster has been constructed. This stage is idempotent and can be run multiple times. This allows operators to create workflows with multiple instances of this stage. The charts to run are determined by the helm/charts parameter.

Due to helm downloads, this stage requires internet access.

This stage also creates a tiller service account. For advanced security, this configuration may not be desirable.

26.24.13.12. krib-helm-charts

Installs and runs Helm Charts after a cluster has been constructed. This stage is idempotent and can be run multiple times. This allows operators to create workflows with multiple instances of this stage. The charts to run are determined by the helm/charts parameter.

Due to helm downloads, this stage requires internet access.

26.24.13.13. krib-ingress-nginx

Install/config ingress-nginx and optional cert-manager. Requires a cloud LoadBalancer or MetalLB to provide the Service ingress IP. Must run after the krib-helm stage.

26.24.13.14. krib-longhorn

Installs and runs Rancher Longhorn kubectl install

see: https://github.com/rancher/longhorn

26.24.13.15. krib-operate-cordon

This stage runs a Cordon operation on a given KRIB built Kubernetes node. It uses the ‘krib-operate-cordon’ Profile.

In addition - you may set the following Params on a Machine object to override the default behaviors of this stage:

  • krib/operate-action - action to take (cordon or uncordon)
  • krib/operate-on-node - a Kubernetes node name to operate on
  • krib/operate-options - command line arguments to pass to the ‘kubectl’ command for the action

If the ‘krib/operate-on-node’ Param is empty, the node that is currently running the Stage will be operated on. Otherwise, specifying an alternate Node allows remotely cordoning that node.

26.24.13.16. krib-operate-delete

This stage runs a Delete node operation on a given KRIB built Kubernetes node. It uses the ‘krib-operate-delete’ Profile.

In addition - you may set the following Params on a Machine object to override the default behaviors of this stage:

  • krib/operate-action - action to take
  • krib/operate-on-node - a Kubernetes node name to operate on
  • krib/operate-options - command line arguments to pass to the ‘kubectl’ command for the action

If the ‘krib/operate-on-node’ Param is empty, the node that is currently running the Stage will be operated on. Otherwise, specifying an alternate Node allows remotely deleting that node.

WARNING: THIS OPERATE DESTROYS A KUBERNETES NODE!

Presumably, you want to ‘krib-operate-drain’ the node first to remove it from the cluster and drain its workload to other cluster workers prior to deleting the node.

26.24.14. tasks

The content package provides the following tasks.

26.24.14.1. krib-sonobuoy

Installs Sonobuoy and runs it against the cluster on the leader. This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set.

NOTE: Sonobuoy may take over an HOUR to complete. The task will be in process during this time.

The task only runs on the leader, so the leader must be included in the workflow. All other machines will be skipped, so it is acceptable to run the task on all machines in the cluster.

26.24.14.2. docker-install

Installs Docker using O/S packages

26.24.14.3. etcd-config

Sets Param: etcd/servers. If installing Kubernetes via Kubeadm, make sure you install a supported version! This uses the Digital Rebar Cluster pattern, so etcd/cluster-profile must be set.

26.24.14.4. krib-dev-reset

Clears created Params: krib/*, etcd/*

26.24.14.5. krib-helm

Installs Helm and runs helm init (which installs Tiller) on the leader. Installs Charts defined in helm/charts. This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set.

The install checks to see if tiller is running and may skip initialization.

26.24.14.6. krib-helm-charts

Installs Charts defined in helm/charts. This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set. The install checks to see if tiller is running and may skip initialization.

26.24.14.7. kubernetes-install

Downloads Kubernetes installation components from repos. This task relies on the O/S packages being updated and accessible. NOTE: access to update repos is required!

26.24.14.8. krib-config

Sets Params: krib/cluster-join, krib/cluster-admin-conf. Configures Kubernetes using Kubeadm. This uses the Digital Rebar Cluster pattern, so krib/cluster-profile must be set.

26.24.14.9. krib-helm-init

Installs Helm and runs helm init (which installs Tiller) on the leader. This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set.

The install checks to see if tiller is running and may skip initialization.

The task only runs on the leader, so the leader must be included in the workflow. All other machines will be skipped, so it is acceptable to run the task on all machines in the cluster.

26.24.14.10. krib-kubevirt

Installs KubeVirt on the leader. This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set.

The install checks to see if KubeVirt is running and may skip initialization.

Recommended: you may want to add intel_iommu=on to the kernel-console param.

The Config is provided from the kubevirt-configmap.cfg.tmpl template instead of being downloaded from github. Version updates should be reflected in the template. This approach allows for parameterization of the configuration map.

The kubectl tasks only run on the leader, so the leader must be included in the workflow. All other machines will run virt-host-validate, so it is important to run the task on all machines in the cluster.

At this time, virtctl is NOT installed on the cluster.

26.24.14.11. krib-pkg-prep

Installs prerequisite OS packages prior to starting the KRIB install process. In some use cases this may be a faster pattern than performing the steps in the standard templates.

For example, for Sledgehammer Discovered nodes, add the ‘krib-pkg-prep’ stage. As the machine is finishing prep, you can move on to setting up other things before kicking off the KRIB workflow.

Uses packages listed in the ‘default’ Schema section of the Param ‘krib/packages-to-prep’. You can override this list by setting the Param in a Profile or directly on the Machines to apply this to.

Packages MUST exist in the repositories on the Machines already.

26.24.14.12. krib-contrail

Installs Contrail via kubectl from the contrail.cfg template. Runs on the master only. The template relies on the cluster VIP as the master IP address.

26.24.14.13. krib-ingress-nginx

Sets Param: ingress/ip-address. Install/config ingress-nginx and optional cert-manager. This uses the Digital Rebar Cluster pattern, so krib/cluster-profile must be set.