23.29. krib - Kubernetes (KRIB)

The following documentation is for Kubernetes (KRIB) (krib) content package at version v4.6.0-beta01.97+g1e33864277dc9bbb839f71f1eff994a0c4f05c23.

License: KRIB is APLv2

This document provides information on how to use the Digital Rebar KRIB content add-on. Use of this content will enable the operator to install Kubernetes in either a Live Boot (immutable infrastructure pattern) mode, or via installed to local hard disk OS mode.

KRIB uses the kubeadm cluster deployment methodology coupled with Digital Rebar enhancements to help proctor the cluster Master election and secrets management. With this content pack, you can install Kubernetes in a zero-touch manner.

KRIB also supports production, highly available (HA) deployments with multiple masters. To enable this configuration, we’ve chosen to manage the TLS certificates and etcd installation in the Workflow instead of using the kubeadm process.

This document assumes you have your Digital Rebar Provisioning endpoint fully configured, tested, and working. We assume that you are able to properly provision Machines in your environment as a base level requirement for use of the KRIB content add-on. See Installing KRIB (Kubernetes Rebar Integrated Bootstrapping) for step by step instructions.

23.29.1. KRIB Video References

The following videos have been produced or presented by RackN related to the Digital Rebar KRIB solution.

23.29.2. Online Requirements

KRIB uses community kubeadm for installation. That process relies on internet connectivity to download containers and other components.

23.29.3. Immutable -vs- Local Install Mode

The two primary deployment patterns that the Digital Rebar KRIB content pack supports are:

  1. Live Boot (immutable infrastructure pattern - references [1] [2])
  2. Local Install (standard install-to-disk pattern)

The Live Boot mode uses an in-memory Linux image based on the Digital Rebar Sledgehammer (CentOS based) image. After each reboot of the Machine, the node is reloaded with the in-memory live boot image. This enforces the concept of immutable infrastructure - every time a node is booted, deployed, or needs updating, simply reload the latest Live Boot image with appropriate fixes, patches, enhancements, etc.

The Local Install mode mimics the traditional “install-to-my-disk” method that most people are familiar with.

23.29.4. KRIB Basics

KRIB is a Content Pack addition to Digital Rebar Provision. It uses the Multi-Machine Cluster Pattern v4.6+ which provides atomic guarantees. This allows Kubernetes master(s) to be dynamically elected, forcing all other nodes to wait until kubeadm on the elected master generates an installation token for the rest of the nodes. Once the Kubernetes master is bootstrapped, the Digital Rebar system facilitates the security token hand-off to the rest of the cluster so the nodes can join without any operator intervention.

23.29.5. Elected -vs- Specified Master

By default, the KRIB process will dynamically elect a Master for the Kubernetes cluster. This master simply wins the race-to-master election process and the rest of the cluster will coalesce around the elected master.

If you wish to designate specific machines as masters, you can do so by setting a Param in the cluster Profile identifying the Machine(s) that will become masters. To do so, set the krib/cluster-masters Param to a JSON structure with the Name, UUID, and IP of each machine to become a master. You may add this Param to the Profile as follows:

# JSON reference to add to the Profile Params section
"krib/cluster-masters": [{"Name":"<NAME>", "Uuid":"<UUID>", "Address": "<ADDRESS>"}]

# or drpcli command line option
drpcli profiles set my-k8s-cluster param krib/cluster-masters to <JSON>

The Kubernetes Master will be built on the Machine specified by the <UUID> value.

Note

This MUST be in the cluster profile because all machines in the cluster must be able to see this parameter.

23.29.6. Install KRIB

KRIB is a Content Pack and is installed using the same method as any other Content. We need the krib.json content pack to fully support KRIB and install the helper utility contents for stage changes.

Please review Installing KRIB (Kubernetes Rebar Integrated Bootstrapping) for step by step instructions.

23.29.6.1. CLI Install

KRIB uses the Certs plugin to build TLS; you can install that from the RackN library:

# install required content and create the certs plugin
drpcli plugin_providers upload certs from catalog:certs-tip

# verify it worked - should return true
drpcli plugins show certs | jq .Available

Using the Command Line (drpcli) utility configured to your endpoint, use this process:

# Get code
drpcli contents upload catalog:krib-tip

23.29.6.2. UX Install

In the UX, follow this process:

  1. Open your DRP Endpoint: (eg. https://127.0.0.1:8092/ )
  2. Authenticate to your Endpoint
  3. Login with your `RackN Portal Login` account (upper right)
  4. Go to the left panel “Content Packages” menu
  5. Select Kubernetes (KRIB: Kubernetes Rebar Immutable Bootstrapping) from the right side panel (you may need to select Browse for more Content or use the Catalog button)
  6. Select the Transfer button for both content packs to add the content to your local Digital Rebar endpoint

23.29.7. Configuring KRIB

The basic outline for configuring KRIB follows the below steps:

  1. create a Profile to hold the Params for the KRIB configuration (you can also clone the krib-example profile)
  2. add a Param of name krib/cluster-profile to the Profile you created
  3. add a Param of name etcd/cluster-profile to the Profile you created
  4. apply the Profile to the Machines you are going to add to the KRIB cluster
  5. change the Workflow on the Machines to krib-live-cluster for memory booting or krib-install-cluster to install to CentOS. You may clone these reference workflows to build custom actions.
  6. installation will start as soon as the Workflow has been set.

There are many configuration options available, review the krib/* and etcd/* parameters to learn more.
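
For example, you can list the available KRIB and etcd configuration Params from the command line. This is a small sketch that assumes drpcli and jq are available on your workstation; adjust the filter as needed.

# list the krib/* and etcd/* Params known to this endpoint
drpcli params list | jq -r '.[].Name' | grep -E '^(krib|etcd)/'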

23.29.7.1. Configure with the CLI

The configuration of the Cluster includes several reference Workflows that can be used for installation. The Workflow you choose determines whether the cluster is built via install-to-local-disk or via an immutable pattern (live boot, in-memory boot process). Outside of the Workflow differences, all remaining configuration elements are the same.

Create a Profile from YAML (or JSON if you prefer) with the required Params information. Modify the Name or other fields as appropriate - be sure you rename all subsequent references to that name appropriately.

echo '
---
Name: "my-k8s-cluster"
Description: "Kubernetes install-to-local-disk"
Params:
  krib/cluster-profile: "my-k8s-cluster"
  etcd/cluster-profile: "my-k8s-cluster"
Meta:
  color: "purple"
  icon: "ship"
  title: "My Installed Kubernetes Cluster"
  render: "krib"
  reset-keeps": "krib/cluster-profile,etcd/cluster-profile"
' > /tmp/krib-config.yaml

drpcli profiles create - < /tmp/krib-config.yaml

Note

The following commands should be applied to all of the Machines you wish to enroll in your KRIB cluster. Each Machine needs to be referenced by the Digital Rebar Machine UUID. This example shows how to collect the UUIDs, then you will need to assign them to the UUIDS variable. We re-use this variable throughout the below documentation within the shell function named my_machines. We also show the correct drpcli command that should be run for you by the helper function, for your reference.

Create our helper shell function my_machines

function my_machines() { for U in $UUIDS; do set -x; drpcli machines $1 $U $2; set +x; done; }

List your Machines to determine which to apply the Profile to

drpcli machines list | jq -r '.[] | "\(.Name) : \(.Uuid)"'

IF YOU WANT to make ALL Machines in your endpoint use KRIB, do:

export UUIDS=`drpcli machines list | jq -r '.[].Uuid'`

Otherwise - individually add them to the UUIDS variable, like:

export UUIDS="UUID_1 UUID_2 ... UUID_n"

Add the Profile to your machines that will be enrolled in the cluster

my_machines addprofile my-k8s-cluster

# runs example command:
# drpcli machines addprofile <UUID> my-k8s-cluster

Change the Workflow on the Machines to initiate the installation. YOU MUST select the correct Workflow, dependent on your install type (Immutable/Live Boot mode or install-to-local-disk mode). For Live Boot mode, select the krib-live-cluster Workflow, and for install-to-local-disk mode select the krib-install-cluster Workflow.

# for Live Boot/Immutable Kubernetes mode
my_machines workflow krib-live-cluster

# for install-to-local-disk mode:
my_machines workflow krib-install-cluster

# runs example command:
# drpcli machines workflow <UUID> krib-live-cluster
# or
# drpcli machines workflow <UUID> krib-install-cluster

23.29.7.2. Configure with the UX

The below example outlines the process for the UX.

RackN assumes the use of CentOS 7 BootEnv during this process. However, it should theoretically work on most of the BootEnvs. We have not tested it, and your mileage will absolutely vary…

  1. create a Profile for the Kubernetes Cluster (e.g. my-k8s-cluster) or clone the krib-example profile.
  2. add a Param to that Profile: krib/cluster-profile = my-k8s-cluster
  3. add a Param to that Profile: etcd/cluster-profile = my-k8s-cluster
  4. Add the Profile (eg my-k8s-cluster) to all the machines you want in the cluster.
  5. Change workflow on all the machines to krib-install-cluster for install-to-local-disk, or to krib-live-cluster for the Live Boot/Immutable Kubernetes mode

Then wait for them to complete. You can watch the Stage transitions via the Bulk Actions panel (which requires RackN Portal authentication to view).

Note

The reason the Immutable Kubernetes/Live Boot mode does not need a reboot is that the machines are already running Sledgehammer and will start installing upon the stage change.

23.29.8. Operating KRIB

23.29.8.1. Who is my Master?

If you have not specified which Machine should be the Kubernetes Master and the master was chosen by election, you will need to determine which Machine is the cluster Master.

# returns the Kubernetes cluster master list (Name, UUID, and Address)
drpcli profiles show my-k8s-cluster | jq -r '.Params."krib/cluster-masters"'
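
If you only need a single field, such as the first master's IP address, you can extend the same jq query. This is a small sketch that assumes the Param has already been populated by the election.

# returns only the first master's IP address
drpcli profiles show my-k8s-cluster | jq -r '.Params."krib/cluster-masters" | .[0].Address'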

23.29.8.2. Use kubectl - on Master

You can log in to the Master node as identified above, and execute kubectl commands as follows:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes

23.29.8.3. Use kubectl - from anywhere

Once the Kubernetes cluster build has been completed, you may use the kubectl command to both verify and manage the cluster. You will need to download the conf file with the appropriate tokens and information to connect to and authenticate your kubectl connections. Below is an example of doing this:

# get the Admin configuration and tokens
drpcli profiles get my-k8s-cluster param krib/cluster-admin-conf > admin.conf

# set our KUBECONFIG variable and get nodes information
export KUBECONFIG=`pwd`/admin.conf
kubectl get nodes

23.29.8.4. Advanced Stages - Helm and Sonobuoy

KRIB includes stages for advanced Kubernetes operating support.

The reference workflows already install Helm using the krib-helm stage. To leverage this utility simply define the required JSON syntax for your charts as shown in the krib-helm stage in the krib - Kubernetes (KRIB) documentation.
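
For example, a minimal chart list can be added to the cluster Profile with drpcli. This is a sketch reusing the stable/mysql chart from the krib/helm-charts reference later in this document; substitute your own charts.

# add a single-chart krib/helm-charts definition to the cluster profile
drpcli profiles set my-k8s-cluster param krib/helm-charts to '[{"chart": "stable/mysql", "name": "mysql"}]'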

Sonobuoy can be used to validate that the cluster conforms to community specification. Adding the krib-sonobuoy stage will start a test run. It can be rerun to collect the results or configured to wait for them. Storing test results in the files path requires setting the unsafe/password parameter and is undesirable for production clusters.
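
For example, if you want the krib-sonobuoy stage to block until results are available, you could set a positive wait time in the cluster Profile. This is a sketch; sonobuoy/wait-mins defaults to -1 (do not wait), and typical runs take about 60 minutes.

# wait up to 75 minutes for conformance results
drpcli profiles set my-k8s-cluster param sonobuoy/wait-mins to 75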

23.29.8.5. Ingress/Egress Traffic, Dashboard Access, Istio

The Kubernetes dashboard is enabled within a default KRIB built cluster. However no Ingress traffic rules are set up. As such, you must access services from external connections by making changes to Kubernetes, or via the Kubernetes Dashboard via Proxy.

These are all issues relating to managing, operating, and running a Kubernetes cluster, and not restrictions that are imposed by Digital Rebar Provision. Please see the appropriate Kubernetes documentation on questions regarding operating, running, and administering Kubernetes (https://kubernetes.io/docs/home/).

For Istio via Helm, please consult the krib-helm stage in the krib - Kubernetes (KRIB) documentation for a reference install.

23.29.8.6. Kubernetes Dashboard via Proxy

You can get the admin-user security token with the following command:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Now copy the token from the token value printed on-screen so you can paste it into the Enter token field of the dashboard login screen.

Once you have obtained the admin.conf configuration file and security tokens, you may use kubectl in Proxy mode to the Master. Simply open a separate terminal/console session to dedicate to the Proxy connection, and do:

kubectl proxy

Now, in a local web browser (on the same machine you executed the Proxy command) open the following URL:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

23.29.8.7. MetalLB Load Balancer

If your cluster is running on bare metal you will most likely need a LoadBalancer provider. You can easily add this to your cluster by adding the krib-metallb stage after the krib-config stage in your workflow. Currently only L2 mode is supported. You will need to set the metallb/l2-ip-range param in your profile with the range of IP’s you wish to use. This ip range must not be within the configured DHCP scope. See the MetalLB docs for more information (https://metallb.universe.tf/tutorial/layer2/).
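
For example, the address pool can be added to the cluster Profile before the krib-metallb stage runs. This is a sketch using the range format shown in the metallb/l2-ip-range Param documentation; pick addresses outside your DHCP scope.

# reserve an L2 address pool for MetalLB
drpcli profiles set my-k8s-cluster param metallb/l2-ip-range to "192.168.1.240-192.168.1.250"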

23.29.8.8. NGINX Ingress

You can add nginx-ingress to your cluster by adding the krib-ingress-nginx stage to your workflow. This stage requires helm and tiller to be installed so should come after the krib-helm stage in your workflow.

This stage also requires a cloud provider LoadBalancer service or on bare metal you can add the krib-metallb stage before this stage in your workflow.

This stage includes support for cert-manager if your profile is properly configured. See example-cert-manager profile.

23.29.8.9. Kubernetes Dashboard via NGINX Ingress

If your workflow includes the NGINX Ingress stage, the kubernetes dashboard will be accessible via https://k8s-db.LOADBALANCER_IP.xip.io. The access URL and cert-manager TLS can also be configured by setting the appropriate params in your profile. See the example-k8s-db-ingress profile.

Please consult Kubernetes Dashboard via Proxy for information on getting the login token

23.29.8.10. Rook Ceph Manager Dashboard

If you install rook via the krib-helm chart template and have the krib-ingress-nginx stage in your workflow, an ingress will be created so you can access the Ceph Manager Dashboard at https://rook-db.LOADBALANCER_IP.xip.io. The access URL and cert-manager TLS can also be configured by setting the appropriate params in your profile. See the example-rook-db-ingress profile.

The default username is admin and you can get the generated password with the following command:

kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o yaml | grep "password:" | awk '{print $2}' | base64 --decode

23.29.9. Multiple Clusters

It is absolutely possible to build multiple Kubernetes KRIB clusters with this process. The only difference is that each cluster should have a unique name and profile assigned to it. A given Machine may only participate in a single Kubernetes cluster type at any one time. You can install and operate both Live Boot/Immutable and install-to-disk cluster types in the same DRP Endpoint.

23.29.11. Object Specific Documentation

23.29.11.1. params

The content package provides the following params.

23.29.11.1.1. etcd/name

Allows operators to set a name for the etcd cluster

23.29.11.1.2. krib/cert-manager-container-image-cainjector

Allows operators to optionally override the container image used in the cert-manager deployment. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.3. vault/name

Allows operators to set a name for the vault cluster

23.29.11.1.4. consul/encryption-key

Enables gossip encryption between Consul nodes

23.29.11.1.5. consul/server-ca-pw

Allows operators to set the CA password for the consul server certificate. Requires Cert Plugin.

23.29.11.1.6. krib/cluster-is-production

By default the KRIB cluster mode will be set to dev/test/lab (whatever you wanna call it). If you set this Param to true then the cluster will be tagged as in Production use.

If the cluster is in Production mode, then the state of the various Params for new clusters will be preserved, preventing the cluster from being overwritten.

If NOT in Production mode, the following Params will be wiped clean before building the cluster. This is essentially a destructive pattern.

krib/cluster-admin-conf - the admin.conf file Param will be wiped
krib/cluster-join       - the Join token will be destroyed

This allows for “fast reuse” patterns with building KRIB clusters, while also allowing a cluster to be marked Production and require manual intervention to wipe the Params to rebuild the cluster.
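
For example, to mark an existing cluster as Production so its admin.conf and join token Params are preserved, set the Param on the cluster Profile (a sketch against the profile created earlier in this document):

# protect the cluster Params from the "fast reuse" wipe
drpcli profiles set my-k8s-cluster param krib/cluster-is-production to true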

23.29.11.1.7. krib/cluster-master-count

Allows operators to set the number of machines required for the Kubernetes cluster. Machines will be automatically added until the number is met. NOTE: should be an odd number

23.29.11.1.8. krib/k3s

Informs tasks to use k3s instead of k8s. No need to include etcd stages when k3s is true.

23.29.11.1.9. krib/log-target-gelf

An IP outside the cluster configured to receive GELF (Graylog) logs on UDP 2201

23.29.11.1.10. krib/selective-mastership

When this param is set to true, then in order for a machine to self-elect as a master, the krib/i-am-master param must be set for that machine. This option is useful to prevent dedicated workers from assuming mastership.

Defaults to ‘false’.

23.29.11.1.11. containerd/version

Allows operators to determine the version of containerd to install

The string should NOT include v as a prefix. Used to download from the https://storage.googleapis.com/cri-containerd-release/cri-containerd-${VERSION}.linux-amd64.tar.gz path.

23.29.11.1.12. etcd/server-ca-pw

Allows operators to set the CA password for the server certificate. Requires Cert Plugin.

23.29.11.1.13. krib/operate-on-node

This Param specifies a Node in a Kubernetes cluster that should be operated on. Currently supported operations are ‘drain’ and ‘uncordon’.

The drain operation will by default maintain the contracts specified by PodDisruptionBudgets.

Options can be specified to override the default actions by use of the ‘krib/operate-options’ Param. This Param will be passed directly to the ‘kubectl’ command that has been specified by the ‘krib/operate-action’ Param setting (defaults to ‘drain’ operation if nothing specified).

The Node name must be a valid cluster member name, which by default in a KRIB built cluster is the fully qualified value of the Machine object ‘Name’ value.

23.29.11.1.14. vault/awskms-access-key

Allows operators to specify the AWS access key to be used in the Vault “awskms” seal

23.29.11.1.15. krib/cluster-kubeadm-cfg

Once the initial cluster master has completed startup, the KRIB config task will record the bootstrap configuration used by ‘kubeadm init’. This provides a reference going forward to the cluster configuration used when it was created.

23.29.11.1.16. krib/cluster-service-subnet

Allows operators to specify the service subnet CIDR that will be used during the ‘kubeadm init’ process of the cluster creation.

Defaults to “10.96.0.0/12”.

23.29.11.1.17. krib/log-target-syslog

An IP outside the cluster configured to receive remote syslog logs on UDP 514

23.29.11.1.18. krib/packages-to-prep

List of packages to install for preparation of KRIB install process. Designed to be used in some processes where pre-prep of the packages will accelerate the KRIB install process. More specifically, in Sledgehammer Discover for Live Boot cluster install, the ‘docker’ install requires several minutes to run through selinux context changes.

Simple space separated String list.
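
For example, to pre-install docker on Live Boot nodes before the KRIB tasks run, set the list on the cluster Profile. This is a sketch; the package name is illustrative and depends on the repositories available to your BootEnv.

# space separated list of packages to pre-install
drpcli profiles set my-k8s-cluster param krib/packages-to-prep to "docker"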

23.29.11.1.19. metallb/l2-ip-range

This should be set to match the IP range you have allocated to L2 MetalLB ex: 192.168.1.240-192.168.1.250

23.29.11.1.20. metallb/monitoring-port

This should be set to match the port you want to use for MetalLB Prometheus monitoring

Default: 7472

23.29.11.1.21. etcd/ip

For configurations where the etcd cluster should communicate over a network other than the one that the machine was booted from.

If unset, will default to the Machine Address.

23.29.11.1.22. krib/calico-container-image-node

Allows operators to optionally override the container image used in the Calico deployment. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.23. consul/version

Allows operators to determine the version of consul to install

23.29.11.1.24. etcd/cluster-profile

Part of the Digital Rebar Cluster pattern, this parameter is used to identify the machines used in the etcd cluster. This parameter is REQUIRED for KRIB and etcd cluster construction.

23.29.11.1.25. etcd/controller-ip

An optional IP outside the cluster designated for a “controller” (can be the DRP host). Can be used in combination with etcd/controller-client-cert and etcd/controller-client-key to remotely back up an etcd cluster.

If unset, will default to the DRP ProvisionerAddress.

23.29.11.1.26. krib/cert-manager-container-image-webhook

Allows operators to optionally override the container image used in the cert-manager deployment. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.27. krib/cluster-join-command

Param is set (output) by the cluster building process

23.29.11.1.28. krib/nginx-udp-services

Array of optional UDP services you want to expose using the Nginx Ingress Controller. An example might be:

9000: "default/example-go:8080"

The services defined here will be inserted in a configmap named “udp-services” in the “ingress-nginx” namespace. The ConfigMap can be updated later if you want to change/update services

See https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ for details

23.29.11.1.29. certmanager/cloudflare-api-key

DNS01 Challenge Provider Configuration data. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#cloudflare

23.29.11.1.30. certmanager/rfc2136-tsig-key

DNS01 Challenge Provider Configuration data. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#rfc2136

23.29.11.1.31. krib/repo

Allows operators to pre-prepare a URL (i.e., a local repository) of the installation packages necessary for KRIB. If this value is set, then tasks like containerd-install and etcd-config will source their installation files from this repository, rather than attempting to download them from the internet (which may take longer, given the number of machines to be installed plus the capacity of the internet service)

23.29.11.1.32. krib/ingress-nginx-publish-ip

If running an nginx ingress behind a NATing firewall, it may be required to explicitly specify the public IP assigned to ingresses, for example to make something like external-dns work. If this value is set, then either the nginx ingress, or (if enabled) the _external_ nginx ingress, will have the “--publish-status-address” argument set to this value.

23.29.11.1.33. krib/metallb-version

Set this string to the version of MetalLB to install. Defaults to “master”, but cautious users may want to set this to an established MetalLB release version

23.29.11.1.34. krib/operate-action

This Param can be used to:

'drain', 'delete', 'cordon', or 'uncordon'

a node in a KRIB built Kubernetes cluster. If this parameter is not defined on the Machine, the default action will be to ‘drain’ the node.

Each action can be passed custom arguments via use of the ‘krib/operate-options’ Param.

23.29.11.1.35. rook/ceph-public-network

This should be set to match the kubernetes “control plane” network on the nodes

23.29.11.1.36. sonobuoy/binary

Downloads a tgz with the compiled sonobuoy executable. The full path is included so that operators can choose the correct version and architecture.

23.29.11.1.37. vault/version

Allows operators to determine the version of vault to install

23.29.11.1.38. etcd/peer-ca-pw

Allows operators to set the CA password for the peer certificate. If missing, it will be generated. Requires Cert Plugin.

23.29.11.1.39. krib/calico-container-image-pod2daemon-flexvol

Allows operators to optionally override the container image used in the Calico deployment. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.40. etcd/servers

Param is set (output) by the etcd cluster building process

23.29.11.1.41. ingress/longhorn-dashboard-hostname

Hostname to use for the Rancher Longhorn Dashboard. You will need to manually configure your DNS to point to the ingress IP address (ingress/ip-address).

If no hostname is provided a longhorn-db.$INGRESSIP.xip.io hostname will be assigned

23.29.11.1.42. krib/nginx-ingress-controller-container-image

Allows operators to optionally override the container image used in the nginx-ingress-controller deployment. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.43. krib/sign-kubelet-server-certs

When this param is set to true, then the kubelets will be configured to request their certs from the cluster CA, using CSRs. The CSR approver won’t natively sign server certs, so a custom operator, “https://github.com/kontena/kubelet-rubber-stamp”, will be deployed to sign these.

Defaults to ‘false’.

23.29.11.1.44. vault/root-token

The root token generated by initializing vault. Store this somewhere secure, and delete from DRP, for confidence

23.29.11.1.45. consul/agents-done

Param is set (output) by the consul cluster building process

23.29.11.1.46. consul/cluster-profile

Part of the Digital Rebar Cluster pattern, this parameter is used to identify the machines used in the consul cluster. This parameter is REQUIRED for KRIB and consul cluster construction.

23.29.11.1.47. rook/ceph-version

The version of rook-ceph to deploy

23.29.11.1.48. vault/seal

Vault can optionally be configured to automatically unseal, using a cloud-based KMS. Currently the only configured option is “awskms”, which requires you to set the following additional parameters:

  • vault/awskms-region
  • vault/awskms-access-key
  • vault/awskms-secret-key
  • vault/awskms-kms-key-id
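
A sketch of wiring this up with drpcli follows; the values are placeholders you must supply from your own AWS account.

# enable AWS KMS auto-unseal on the cluster profile
drpcli profiles set my-k8s-cluster param vault/seal to "awskms"
drpcli profiles set my-k8s-cluster param vault/awskms-region to "<AWS_REGION>"
drpcli profiles set my-k8s-cluster param vault/awskms-access-key to "<ACCESS_KEY_ID>"
drpcli profiles set my-k8s-cluster param vault/awskms-secret-key to "<SECRET_ACCESS_KEY>"
drpcli profiles set my-k8s-cluster param vault/awskms-kms-key-id to "<KMS_KEY_ID>"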

23.29.11.1.49. vault/unseal-key

The key generated by initializing vault in KMS mode. Use this to unseal vault if KMS becomes unavailable. Store this somewhere secure, and delete from DRP, for confidence

23.29.11.1.50. containerd/loglevel

Allows operators to determine the log level used for the containerd runtime (defaults to “info”)

23.29.11.1.51. krib/cluster-bootstrap-ttl

How long BootStrap tokens for the cluster should live. Default is ‘24h0m0s’.

Must use a format similar to the default.

23.29.11.1.52. krib/cluster-dns

Allows operators to specify the DNS address for the cluster name resolution services.

Set by default to “10.96.0.10”.

WARNING: This IP Address must be in the same range as the
“krib/cluster-service-cidr” specified addresses.

23.29.11.1.53. krib/cluster-kubernetes-version

Allows operators to specify the version of Kubernetes containers to pull from the ‘krib/cluster-image-repository’.

23.29.11.1.54. krib/helm-charts

Array of charts to install via Helm. The list will be followed in order.

Work is idempotent: No action is taken if charts are already installed.

Fields: chart and name are required.

Options exist to inject additional control flags into helm install instructions:

  • name: name of the chart (required)
  • chart: reference of the chart (required) - may rely on repo, path or other helm install [chart] standard
  • namespace: kubernetes namespace to use for chart (defaults to none)
  • params: map of parameters to include in the helm install (optional). Keys and values are converted to --[key] [value] in the install instruction.
  • sleep: time to wait after install (defaults to 10)
  • wait: wait for name (and namespace if provided) to be running before next action
  • prekubectl (optional) array of kubectl [request] commands to run before the helm install
  • postkubectl (optional) array of kubectl [request] commands to run after the helm install
  • targz (optional) provides a location for a tar.gz file containing charts to install. Path is relative.
  • templates (optional) map of DRP templates keyed to the desired names (must be uploaded!) to render before doing other work.
  • repos (optional) adds the requested repos to helm using helm repo add before installing helm. syntax is [repo name]: [repo path].
  • templatesbefore (optional) expands the provided template files inline before the helm install happens.
  • templatesafter (optional) expands the provided template files inline after the helm install happens

example:

[
  {
    "chart": "stable/mysql",
    "name": "mysql"
  }, {
    "chart": "istio-1.0.1/install/kubernetes/helm/istio",
    "name": "istio",
    "targz": "https://github.com/istio/istio/releases/download/1.0.1/istio-1.0.1-linux.tar.gz",
    "namespace": "istio-system",
    "params": {
      "set": "sidecarInjectorWebhook.enabled=true"
    },
    "sleep": 10,
    "wait": true,
    "kubectlbefore": ["get nodes"],
    "kubectlafter": ["get nodes"]
  }, {
    "chart": "rook-stable/rook-ceph",
    "kubectlafter": [
      "apply -f cluster.yaml"
    ],
    "name": "rook-ceph",
    "namespace": "rook-ceph-system",
    "repos": {
      "rook-stable": "https://charts.rook.io/stable"
    },
    "templatesafter": [{
      "name": "helm-rook.after.sh.tmpl"
      "nodes": "leader",
    }],
    "templatesbefore": [{
      "name": "helm-rook.before.sh.tmpl",
      "nodes": "all",
      "runIfInstalled": true
    }],
    "templates": {
      "cluster": "helm-rook.cfg.tmpl"
    },
    "wait": true
 }
]

23.29.11.1.55. krib/ingress-nginx-external-loadbalancer-ip

If set, this param will set the LoadBalancer IP for the nginx external ingress service. Used in situations where you want to specifically choose the IP assigned to the ingress, rather than having it be applied by the cloud provider (or metallb, in the bare-metal case)

23.29.11.1.56. krib/kubelet-rubber-stamp-container-image

Allows operators to optionally override the container image used in the kubelt rubber stamp deployment. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.57. vault/servers

Param is set (output) by the vault cluster building process

23.29.11.1.58. docker/apply-http-proxy

Apply HTTP/HTTPS proxy settings to the Docker daemon

23.29.11.1.59. krib/apiserver-extra-args

Array of apiServerExtraArgs that you want added to the kubeadm configuration.

23.29.11.1.60. krib/ingress-nginx-loadbalancer-ip

If set, this param will set the LoadBalancer IP for the nginx ingress service. Used in situations where you want to specifically choose the IP assigned to the ingress, rather than having it be applied by the cloud provider (or metallb, in the bare-metal case)

23.29.11.1.61. krib/operate-options

This Param can be used to pass additional flag options to the ‘kubectl’ operation that is specified by the ‘krib/operate-action’ Param. By default, the ‘drain’ operation will be called if no action is defined on the Machine.

This Param provides some customization to how the operate operation functions.

For ‘kubectl drain’ documentation, see the following URL:

https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#drain

For ‘kubectl uncordon’ documentation, see the URL:

https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#uncordon

NOTE: the following flags are set as default options in the Template for drain operations:

--ignore-daemonsets --delete-local-data

For ‘drain’ operations, if you override these defaults, you MOST LIKELY need to specify them for the drain operation to be successful. You have been warned.

No defaults provided for ‘uncordon’ operations (you shouldn’t need any).
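
For example, to drain a specific node remotely from the machine running the krib-drain stage, the operate Params could be set on that Machine first. This is a sketch assuming drpcli's machines set ... param ... to ... form; <UUID> and <NODE_NAME> are placeholders.

# drain <NODE_NAME> using the default --ignore-daemonsets --delete-local-data flags
drpcli machines set <UUID> param krib/operate-action to drain
drpcli machines set <UUID> param krib/operate-on-node to "<NODE_NAME>"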

23.29.11.1.62. certmanager/acme-challenge-dns01-provider

cert-manager DNS01 Challenge Provider Name. Only route53, cloudflare, akamai, and rfc2136 are currently supported. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#supported-dns01-providers

23.29.11.1.63. krib/ingress-nginx-config

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for nginx service install.

23.29.11.1.64. krib/cluster-profile

Part of the Digital Rebar Cluster pattern, this parameter is used to identify the machines used in the Kubernetes cluster. This parameter is REQUIRED for KRIB and etcd cluster construction.

23.29.11.1.65. krib/cluster-service-dns-domain

Allows operators to specify the Service DNS Domain that will be used by CoreDNS during the ‘kubeadm init’ process of the cluster creation.

By default we do not override the setting from kubeadm default behavior.

23.29.11.1.66. krib/fluent-bit-container-image

Allows operators to optionally override the container image used in the fluent-bit / logging daemonset. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.67. rook/data-dir-host-path

This should be set to match the desired location for Ceph storage.

Default: /mnt/hdd/rook

In future versions, this should be calculated or inferred based on the system inventory

23.29.11.1.68. certmanager/fastdns-service-consumer-domain

DNS01 Challenge Provider Configuration data. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#akamai-fastdns

23.29.11.1.69. krib/cert-manager-container-image-controller

Allows operators to optionally override the container image used in the cert-manager deployment. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.70. certmanager/route53-secret-access-key

DNS01 Challenge Provider Configuration data. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#amazon-route53

23.29.11.1.71. docker/version

Docker Version to use for Kubernetes

23.29.11.1.72. krib/calico-container-image-cni

Allows operators to optionally override the container image used in the Calico deployment. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.73. krib/cluster-bootstrap-token

Defines the bootstrap token to use. Default is ‘fedcba.fedcba9876543210’.

23.29.11.1.74. krib/cluster-cni-version

Allows operators to specify the version of the Kubernetes CNI utilities to install.

23.29.11.1.75. krib/cluster-crictl-version

Allows operators to specify the version of the Kubernetes CRICTL utility to install.

23.29.11.1.76. certmanager/route53-access-key-id

DNS01 Challenge Provider Configuration data. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#amazon-route53

23.29.11.1.77. certmanager/route53-hosted-zone-id

DNS01 Challenge Provider Configuration data. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#amazon-route53

23.29.11.1.78. krib/label-env

Used for node specification; sets the env label on the node.

23.29.11.1.79. vault/cluster-profile

Part of the Digital Rebar Cluster pattern, this parameter is used to identify the machines used in the vault cluster. This parameter is REQUIRED for KRIB and vault cluster construction.

23.29.11.1.80. krib/helm-version

Allows operators to determine the version of Helm to install. Note: Changes should be coordinated with the KRIB Kubernetes version.

23.29.11.1.81. krib/ip

For configurations where kubelet does not correctly detect the IP over which nodes should communicate.

If unset, will default to the Machine Address.

23.29.11.1.82. vault/awskms-region

Allows operators to specify an AWS region to be used in the Vault “awskms” seal

23.29.11.1.83. certmanager/crds

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the cert-manager CRDs.

23.29.11.1.84. certmanager/dns-domain

cert-manager DNS domain - the suffix appended to hostnames on certificates signed with cert-manager. Used to auto-generate the ingress for rook ceph, for example

23.29.11.1.85. krib/cluster-master-certs

Requires Cert Plugin

23.29.11.1.86. krib/container-runtime

The container runtime to be used for the KRIB cluster. This can be either docker (the default) or containerd.

23.29.11.1.87. consul/controller-ip

An optional IP outside the cluster designated for a “controller” (can be the DRP host). Can be used in combination with consul/controller-client-cert and consul/controller-client-key to remotely back up a consul cluster.

If unset, will default to the DRP ProvisionerAddress.

23.29.11.1.88. consul/name

Allows operators to set a name for the consul cluster

23.29.11.1.89. krib/cluster-masters

List of the machine(s) assigned as cluster master(s). If not set, the automation will elect leaders and populate the list automatically.

23.29.11.1.90. krib/externaldns-container-image

Allows operators to optionally override the container image used in the ExternalDNS deployment. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.91. provider/calico-config

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the Calico network provider. If Calico is not installed, this Param will have no effect on the cluster.

23.29.11.1.92. rook/ceph-cluster-network

This should be set to match a physical network on rook nodes to be used exclusively for cluster traffic

23.29.11.1.93. certmanager/rfc2136-nameserver

DNS01 Challenge Provider Configuration data. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#rfc2136

23.29.11.1.94. consul/servers

Param is set (output) by the consul cluster building process

23.29.11.1.95. krib/labels

Used for adhoc node specification, labels should be set.

NOTES:
  • Use krib/label-env to set the env label!
  • Use inventory/data to set physical characteristics

23.29.11.1.96. etcd/version

Allows operators to determine the version of etcd to install. Note: Changes should be coordinated with the KRIB Kubernetes version.

23.29.11.1.97. krib/cluster-masters-untainted

For development clusters, allows workloads to run on the same machines as the Kubernetes masters. NOTE: If you have only master nodes, the helm/tiller install will fail if this is set to false. RECOMMENDED: set to false for production clusters and have non-master nodes in the cluster.

23.29.11.1.98. krib/cluster-cri-socket

This Param defines which Socket to use for the Container Runtime Interface. By default KRIB content uses Docker as the CRI, however our goal is to support multiple container CRI formats. A viable alternative is /run/containerd/containerd.sock, assuming krib/container-runtime is set to “containerd”
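
For example, to switch a cluster from the Docker default to containerd, both Params can be set together on the cluster Profile. This is a sketch using the documented alternative socket path.

# select containerd as the runtime and point the CRI socket at it
drpcli profiles set my-k8s-cluster param krib/container-runtime to "containerd"
drpcli profiles set my-k8s-cluster param krib/cluster-cri-socket to "/run/containerd/containerd.sock"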

23.29.11.1.99. vault/servers-done

Param is set (output) by the vault cluster building process

23.29.11.1.100. certmanager/default-issuer-name

The default issuer to use when creating ingresses

23.29.11.1.101. etcd/servers-done

Param is set (output) by the etcd cluster building process

23.29.11.1.102. vault/kms-plugin-token

Authorizes the vault-kms-plugin to communicate with vault on behalf of Kubernetes API

23.29.11.1.103. krib/cluster-masters-on-etcds

For development clusters, allows running etcd on the same machines as the Kubernetes masters. RECOMMENDED: set to false for production clusters.

23.29.11.1.104. krib/dashboard-enabled

Boolean value that enables Kubernetes dashboard install

23.29.11.1.105. etcd/controller-client-key

For configurations where the etcd cluster should communicate over a network other than the one that the machine was booted from. Will be generated for the value in etcd/controller-client-cert.

23.29.11.1.106. ingress/ip-address

IP Address assigned to ingress service via LoadBalancer

23.29.11.1.107. krib/dashboard-config

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the Kubernetes Dashboard.

23.29.11.1.108. krib/rook-ceph-container-image-daemon-base

Allows operators to optionally override the container image used in the rook-ceph deployment. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.109. sonobuoy/wait-mins

Default is -1 so that stages do not wait for completion.

Typical runs may take 60 minutes.

If <0 then the code does not wait and assumes you will run it again to retrieve the results. The task is idempotent so you can re-start a run after you have started one to check on results.

23.29.11.1.110. certmanager/fastdns-client-secret

DNS01 Challenge Provider Configuration data. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#akamai-fastdns

23.29.11.1.111. certmanager/rfc2136-tsig-alg

DNS01 Challenge Provider Configuration data. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#rfc2136

23.29.11.1.112. krib/nginx-external-udp-services

Array of optional UDP services you want to expose using the Nginx Ingress Controller. An example might be:

9000: "default/example-go:8080"

The services defined here will be inserted in a configmap named “udp-services” in the “ingress-nginx-external” namespace. The ConfigMap can be updated later if you want to change/update services

See https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ for details

23.29.11.1.113. metallb/limits-memory

This should be set to match the memory resource limits for MetalLB

Default: 100Mi

23.29.11.1.114. krib/cluster-image-repository

Allows operators to specify the location to pull images from for Kubernetes. Defaults to ‘k8s.gcr.io’.

23.29.11.1.115. krib/metallb-container-image-controller

Allows operators to optionally override the container image used in the MetalLB controller deployment. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.116. consul/controller-client-cert

For use in configurations where the consul cluster is backed up from a machine external to the KRIB cluster. Will be generated for the value in consul/controller-ip.

23.29.11.1.117. etcd/client-ca-name

Allows operators to set the CA name for the client certificate. Requires Cert Plugin.

23.29.11.1.118. rook/ceph-target-disk

If using physical disks for Ceph OSDs, this value will be used as an explicit target for OSD installation. It will also be WIPED during dev-reset, so use with care. The value of the param should be only the block device; don't prefix it with /dev/. An example value might be “sda”, indicating that /dev/sda is to be used for rook ceph, and WIPED DURING RESET.

AGAIN, IF YOU SET THIS VALUE, /dev/<this value> WILL BE WIPED DURING cluster dev-reset!!

23.29.11.1.119. certmanager/cloudflare-email

DNS01 Challenge Provider Configuration data. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#cloudflare

23.29.11.1.120. certmanager/route53-access-key

DNS01 Challenge Provider Configuration data. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#amazon-route53

23.29.11.1.121. ingress/k8s-dashboard-hostname

Hostname to use for the kubernetes dashboard. You will need to manually configure your DNS to point to the ingress IP address (ingress/ip-address).

If no hostname is provided a k8s-db.$INGRESSIP.xip.io hostname will be assigned

23.29.11.1.122. docker/daemon

Provide a custom /etc/docker/daemon.json. See https://docs.docker.com/engine/reference/commandline/dockerd/. For example:

{"insecure-registries":["ci-repo.englab.juniper.net:5010"]}

23.29.11.1.123. docker/working-dir

Allows operators to change the Docker working directory

23.29.11.1.124. etcd/peer-ca-name

Allows operators to set the CA name for the peer certificate. Requires Cert Plugin.

23.29.11.1.125. etcd/server-ca-name

Allows operators to set the CA name for the server certificate. Requires Cert Plugin.

23.29.11.1.126. krib/cluster-api-vip-port

The VIP API port to use for multi-master clusters. Each master will bind to ‘krib/cluster-api-port’ (6443 by default), but HA services for the API will be serviced by this port number.

Defaults to ‘8443’.

23.29.11.1.127. krib/cluster-domain

Defines the cluster domain for kubelets to operate in by default. Default is ‘cluster.local’.

23.29.11.1.128. krib/ingress-nginx-mandatory

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for nginx install.

23.29.11.1.129. consul/agent-count

Allows operators to set the number of machines required for the consul agents cluster. Machines will be automatically added until the number is met. NOTE: These machines will also be the vault members

23.29.11.1.130. consul/servers-done

Param is set (output) by the consul cluster building process

23.29.11.1.131. etcd/cluster-client-vip-port

The VIP client port to use for multi-master etcd clusters. Each etcd instance will bind to ‘etcd/client-port’ (2379 by default), but HA services will be serviced by this port number.

Defaults to ‘8379’.

23.29.11.1.132. krib/cluster-admin-conf

Param is set (output) by the cluster building process

23.29.11.1.133. krib/metallb-container-image-speaker

Allows operators to optionally override the container image used in the MetalLB speaker daemonset. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.134. krib/nginx-tcp-services

Array of optional TCP services you want to expose using Nginx Ingress Controller Example might be:

9000: "default/example-go:8080"

The services defined here will be inserted in a configmap named “tcp-services” in the “ingress-nginx” namespace. The ConfigMap can be updated later if you want to change/update services

See https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ for details

23.29.11.1.135. certmanager/fastdns-access-token

DNS01 Challenge Provider Configuration data. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#akamai-fastdns

23.29.11.1.136. consul/controller-client-key

For use in configurations where the consul cluster is backed up from a machine external to the KRIB cluster. Will be generated for the value in etcd/controller-client-cert.

23.29.11.1.137. krib/calico-container-image-kube-controllers

Allows operators to optionally override the container image used in the Calico deployment. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.138. krib/ignore-preflight-errors

Helpful for debug and test clusters. This flag allows operators to select none, all or some preflight error checks to ignore during the kubeadm init.

Use “all” to ignore all errors
Use “none” to include all errors [default]

For more info, see https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/

23.29.11.1.139. krib/nginx-ingress-version

Set this string to the version of the NGINX ingress controller to install. Defaults to “0.25.1”. This should align with the param krib/ingress-nginx-mandatory, and is used to ensure that the container images used for the nginx deployment match the desired version.

23.29.11.1.140. vault/server-count

Allows operators to set the number of machines required for the vault cluster. Machines will be automatically added until the number is met. NOTE: should be an odd number

23.29.11.1.141. consul/server-ca-name

Allows operators to set the CA name for the server certificate. Requires Cert Plugin.

23.29.11.1.142. consul/server-count

Allows operators to set the number of machines required for the consul cluster. Machines will be automatically added until the number is met. NOTE: should be an odd number

23.29.11.1.143. krib/cluster-name

Allows operators to set the Kubernetes cluster name

23.29.11.1.144. krib/ingress-external-enabled

When enabled, deploys a second ingress controller (the first is the default). Services are exposed through the second ingress using the class “nginx-external” instead of the “ingress” class. This can be helpful in environments where some services should be exposed only to the cluster, as an ingress, as opposed to the world.

23.29.11.1.145. krib/kubeadm-cfg

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the ‘kubeadm.cfg’ used during the ‘kubeadm init’ process.

The default behavior is to use the Parameterized ‘kubeadm.cfg’ from the template named ‘krib-kubeadm.cfg.tmpl’. This config file is used in the ‘krib-config.sh.tmpl’ which is the main template script that drives the ‘kubeadm’ cluster init and configuration.

23.29.11.1.146. krib/longhorn-config

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the Rancher Longhorn install.

23.29.11.1.147. krib/networking-provider

This Param can be used to specify either ‘flannel’, ‘calico’, or ‘weave’ network providers for the Kubernetes cluster. This is completed using the provider specific YAML definition file.

The only supported providers are:

flannel
calico
weave

The default is ‘flannel’.
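
For example, switching the provider is a single Param on the cluster Profile, set before the cluster is built (a sketch):

# use Calico instead of the flannel default
drpcli profiles set my-k8s-cluster param krib/networking-provider to "calico"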

23.29.11.1.148. certmanager/manifests

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the cert-manager deployment. https://github.com/jetstack/cert-manager/releases/download/v0.8.1/cert-manager-no-webhook.yaml is the non-validating one; https://github.com/jetstack/cert-manager/releases/download/v0.8.1/cert-manager.yaml is the validating one.

23.29.11.1.149. krib/cluster-master-vip

For High Availability (HA) configurations, a floating IP is required by the load balancer. This should be an available IP in the same subnet as the master nodes and not in the dhcp range. If using MetalLB the ip should not be in the configured metallb/l2-ip-range.

23.29.11.1.150. consul/server-ca-cert

Stores the consul CA cert for use in non-DRP managed hosts (like a backup host). Requires Cert Plugin.

23.29.11.1.151. krib/rook-ceph-container-image-ceph

Allows operators to optionally override the container image used in the rook-ceph deployment. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.152. kubectl/working-dir

Allows operators to change the kubectl working directory

23.29.11.1.153. certmanager/email

cert-manager ClusterIssuer configuration. See https://cert-manager.readthedocs.io/en/latest/reference/issuers.html#issuers

23.29.11.1.154. consul/agents

Param is set (output) by the consul cluster building process

23.29.11.1.155. krib/rook-ceph-container-image

Allows operators to optionally override the container image used in the rook-ceph deployment. Possible use case would be pre-prepared images pushed to a local trusted registry

23.29.11.1.156. etcd/client-ca-pw

Allows operators to set the CA password for the client certificate. Requires Cert Plugin.

23.29.11.1.157. etcd/client-port

Allows operators to set the port used by etcd clients

23.29.11.1.158. certmanager/route53-region

DNS01 Challenge Provider Configuration data. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#amazon-route53

23.29.11.1.159. provider/flannel-config

Set this string to an HTTP or HTTPS reference for a YAML configuration to use for the Flannel network provider. If Flannel is not installed, this Param will have no effect on the cluster.

23.29.11.1.160. etcd/peer-port

Allows operators to set the port for the cluster peers

23.29.11.1.161. etcd/server-ca-cert

Stores the etcd CA cert for use in non-DRP managed hosts (like a backup host). Requires Cert Plugin.

23.29.11.1.162. etcd/server-count

Allows operators to set the number of machines required for the etcd cluster. Machines will be automatically added until the number is met. NOTE: should be an odd number

23.29.11.1.163. ingress/rook-dashboard-hostname

Hostname to use for the Rook Ceph Manager Dashboard. You will need to manually configure your DNS to point to the ingress IP address (ingress/ip-address).

If no hostname is provided a rook-db.$INGRESSIP.xip.io hostname will be assigned

23.29.11.1.164. metallb/l3-ip-range

This should be set to match the CIDR route you have allocated to L3 MetalLB ex: 192.168.1.0/24 (currently only a single route supported)

23.29.11.1.165. certmanager/rfc2136-tsig-key-name

DNS01 Challenge Provider Configuration data. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#rfc2136

23.29.11.1.166. etcd/controller-client-cert

For configurations where the etcd cluster should communicate over a network other than the one that the machine was booted from. Will be generated for the value in etcd/controller-ip.

23.29.11.1.167. vault/awskms-secret-key

Allows operators to specify the AWS secret key to be used in the Vault “awskms” seal

23.29.11.1.168. certmanager/fastdns-client-token

DNS01 Challenge Provider Configuration data. See https://cert-manager.readthedocs.io/en/latest/reference/issuers/acme/dns01.html#akamai-fastdns

23.29.11.1.169. krib/apiserver-extra-SANs

List of additional SANs (IP addresses or FQDNs) used in the certificate used for the API Server

23.29.11.1.170. krib/i-am-master

When this param is set to true AND the krib/selective-mastership param is set to true, then this node will participate in mastership election. On the other hand, if this param is set to false (the default), but the krib/selective-mastership param is set to true, then this node will never become a master. This option is useful to prevent dedicated workers from assuming mastership.

Defaults to ‘false’.

23.29.11.1.171. metallb/l3-peer-address

This should be set to match the IP of the BGP-enabled router you want MetalLB to peer with ex: 192.168.1.1 (currently only a single peer supported)

23.29.11.1.172. metallb/limits-cpu

This should be set to match the cpu resource limits for MetalLB

Default: 100m

23.29.11.1.173. vault/awskms-kms-key-id

Allows operators to specify an AWS KMS key ID to be used in the Vault “awskms” seal

23.29.11.1.174. krib/cluster-api-port

The API bindPort number for the cluster masters. Defaults to ‘6443’.

23.29.11.1.175. krib/cluster-pod-subnet

Allows operators to specify the podSubnet that will be used by CoreDNS during the ‘kubeadm init’ process of the cluster creation.

23.29.11.2. profiles

The content package provides the following profiles.

23.29.11.2.1. example-krib

Example of the absolute minimum required Params for a non-HA KRIB Kubernetes cluster.

23.29.11.2.2. example-rook-db-ingress

Example Profile for custom Rook Ceph Manager Dashboard ingress.

23.29.11.2.3. helm-reference

DO NOT USE THIS PROFILE! Copy the contents of the helm/charts param into the Cluster!

23.29.11.2.4. krib-operate-drain

This profile contains the default krib-operate task parameters to drive a drain operation. This profile can be added to a node or stage to allow the krib-operate task to do a drain operation.

This profile is used by the krib-drain stage to allow a machine to be drained without altering the parameters on the machine.

23.29.11.2.5. example-k8s-db-ingress

Example Profile for custom K8S ingress.

23.29.11.2.6. example-krib-ha

Minimum required Params to set on a KRIB Kubernetes cluster to define a Highly Available setup.

Clone this profile as krib-ha and change the VIP to your needs.

23.29.11.2.7. krib-operate-cordon

This profile contains the default krib-operate task parameters to drive a ‘cordon’ operation. This profile can be added to a node or stage to allow the krib-operate task to do a cordon operation.

This profile is used by the krib-cordon stage to allow a machine to be cordoned without changing the parameters on the machine.

23.29.11.2.8. krib-operate-delete

This profile contains the default krib-operate task parameters to drive a ‘delete’ operation. This profile can be added to a node or stage to allow the krib-operate task to do a delete operation.

This profile is used by the krib-delete stage to allow a machine to be deleted without changing the parameters on the machine.

WARNING: This pattern destroys a kubernetes node.

23.29.11.2.9. krib-operate-uncordon

This profile contains the default krib-operate task parameters to drive an uncordon operation. This profile can be added to a node or stage to allow the krib-operate task to do an uncordon operation.

This profile is used by the krib-uncordon stage to allow a machine to be uncordoned without altering the parameters on the machine.

23.29.11.3. stages

The content package provides the following stages.

23.29.11.3.1. krib-operate-uncordon

This stage runs an Uncordon operation on a given KRIB built Kubernetes node, returning a Node that has previously been drained back to service in the Kubernetes cluster. It uses the ‘krib-operate-uncordon’ Profile.

In addition - you may set the following Params on a Machine object to override the default behaviors of this stage:

krib/operate-action     - action to take (drain or uncordon)
krib/operate-on-node    - a Kubernetes node name to operate on
krib/operate-options    - command line arguments to pass to the
                          'kubectl' command for the action

If the ‘krib/operate-on-node’ Param is empty, the node that is currently running the Stage will be operated on. Otherwise, specifying an alternate Node allows a node to be uncordoned remotely.

The default options for the uncordon operation are ‘’ (empty).

23.29.11.3.2. krib-pkg-prep

Simple helper stage to install prerequisite packages prior to the Kubernetes package installation. This helps get a Live Boot set of hosts (e.g. Sledgehammer discovered) prepped with packages a little faster in some use cases.

23.29.11.3.3. krib-runtime-install

This stage allows for the installation of multiple container runtimes. The single task it executes (krib-runtime-install) launches further tasks based on the value of krib/container-runtime. Currently docker and containerd are supported, although the design is extensible.

23.29.11.3.4. krib-external-dns

Installs and runs ExternalDNS

23.29.11.3.5. krib-kubevirt

Installs KubeVirt.io using the chosen release from the cluster leader. This stage is idempotent and can be run multiple times. This allows operators to create workflows with multiple instances of this stage.

Due to yaml and container downloads, this stage requires internet access.

23.29.11.3.6. krib-operate-cordon

This stage runs a Cordon operation on a given KRIB built Kubernetes node. It uses the ‘krib-operate-cordon’ Profile.

In addition - you may set the following Params on a Machine object to override the default behaviors of this stage:

krib/operate-action     - action to take (cordon or uncordon)
krib/operate-on-node    - a Kubernetes node name to operate on
krib/operate-options    - command line arguments to pass to the
                          'kubectl' command for the action

If the ‘krib/operate-on-node’ Param is empty, the node that is currently running the Stage will be operated on. Otherwise, specifying an alternate Node allows a node to be cordoned remotely.

23.29.11.3.7. krib-metallb

Installs and runs MetalLB via a kubectl-based install

see: https://metallb.netlify.com/tutorial/

23.29.11.3.8. krib-operate-delete

This stage runs a Delete node operation on a given KRIB built Kubernetes node. It uses the ‘krib-operate-delete’ Profile.

In addition - you may set the following Params on a Machine object to override the default behaviors of this stage:

krib/operate-action     - action to take
krib/operate-on-node    - a Kubernetes node name to operate on
krib/operate-options    - command line arguments to pass to the
                          'kubectl' command for the action

If the ‘krib/operate-on-node’ Param is empty, the node that is currently running the Stage will be operated on. Otherwise, specifying an alternate Node allows a node to be deleted remotely.

WARNING: THIS OPERATION DESTROYS A KUBERNETES NODE!

Presumably, you want to ‘krib-operate-drain’ the node first to remove it from the cluster and drain its workload to other cluster workers prior to deleting the node.

23.29.11.3.9. krib-sonobuoy

Installs and runs Sonobuoy after a cluster has been constructed. This stage is idempotent and can be run multiple times. The purpose is to ensure that the KRIB cluster is conformant with standards.

Credentials are required so that the results of the run can be pushed back to DRP files.

Roadmap items:

  * eliminate need for DRPCLI credentials
  * make “am I running” detection smarter

23.29.11.3.10. krib-helm-init

This stage is idempotent and can be run multiple times. This allows operators to create workflows with multiple instances of this stage. Due to helm downloads, this stage requires internet access.

This stage also creates a tiller service account. For advanced security, this configuration may not be desirable.

23.29.11.3.11. krib-logging

Installs and runs fluent-bit to aggregate container logs to a graylog server via GELF UDP input

23.29.11.3.12. krib-longhorn

Installs and runs Rancher Longhorn via a kubectl-based install

see: https://github.com/rancher/longhorn

23.29.11.3.13. krib-set-time

Helper stage to set time on the machine - DEV

23.29.11.3.14. krib-contrail

Installs and runs Contrail via a kubectl-based install

CURRENTLY CENTOS ONLY. See: https://github.com/Juniper/contrail-kubernetes-docs/blob/master/install/kubernetes/standalone-kubernetes-centos.md

23.29.11.3.15. krib-rook-ceph

Installs and runs the Rook Ceph install

23.29.11.3.16. krib-operate-drain

This stage runs a Drain operation on a given KRIB built Kubernetes node. It uses the ‘krib-operate-drain’ Profile.

In addition - you may set the following Params on a Machine object to override the default behaviors of this stage:

krib/operate-action     - action to take (drain or uncordon)
krib/operate-on-node    - a Kubernetes node name to operate on
krib/operate-options    - command line arguments to pass to the
                          'kubectl' command for the action

If the ‘krib/operate-on-node’ Param is empty, the node that is currently running the Stage will be operated on. Otherwise, specifying an alternate Node allows a node to be drained remotely.

DRAIN NOTES: this Stage does a few things that MAY BE VERY BAD !!

  1. service pods are ignored for the drain operation
  2. --delete-local-data is used to evict pods using local storage

The default options for the drain operation are ‘--ignore-daemonsets --delete-local-data’. If you override these values (by setting ‘krib/operate-options’), you MAY NEED to re-specify them; otherwise, the Node will NOT be drained properly.

These options may mean your data might be nuked.
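
A minimal sketch of overriding the drain options while preserving the defaults; $MACHINE_UUID and the extra --grace-period flag are placeholders for this example:

  # hypothetical example: keep the default drain flags and add a grace period
  drpcli machines set $MACHINE_UUID param krib/operate-options \
    to '"--ignore-daemonsets --delete-local-data --grace-period=60"'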

23.29.11.3.17. k3s-config

Designed to substitute K3s for Kubernetes. Installs k3s using the KRIB process and params, with the goal of being able to use the same downstream stages.

23.29.11.3.18. krib-ingress-nginx

Installs and configures ingress-nginx and optional cert-manager. Requires a cloud LoadBalancer or MetalLB to provide the Service ingress.ip. Must run after the krib-helm stage.

23.29.11.3.19. krib-ingress-nginx-tillerless

Installs and configures ingress-nginx and optional cert-manager. Requires a cloud LoadBalancer or MetalLB to provide the Service ingress.ip.

23.29.11.3.20. krib-operate

This stage runs an Operation (drain|uncordon) on a given KRIB built Kubernetes node. You must specify the action you want taken via the ‘krib/operate-action’ Param. If nothing is specified, the default action will be to ‘drain’ the node.

In addition - you may set the following Params to alter the behavior of this stage:

krib/operate-action     - action to take (drain or uncordon)
krib/operate-on-node    - a Kubernetes node name to operate on
krib/operate-options    - command line arguments to pass to the
                          'kubectl' command for the action

DRAIN NOTES: this Stage does a few things that MAY BE VERY BAD !!

  1. service pods are ignored for the drain operation
  2. --delete-local-data is used to evict pods using local storage

The default options for the drain operation are ‘--ignore-daemonsets --delete-local-data’. If you override these values (by setting ‘krib/operate-options’), you MAY NEED to re-specify them; otherwise, the Node will NOT be drained properly.

These options may mean your data might be nuked.

UNCORDON NOTES: the uncordon operation typically does not require additional options
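
A minimal sketch of driving this stage against a specific node; $MACHINE_UUID and the node name ‘worker-03’ are placeholders for this example:

  # hypothetical example: drain a named node remotely from this machine
  drpcli machines set $MACHINE_UUID param krib/operate-action to '"drain"'
  drpcli machines set $MACHINE_UUID param krib/operate-on-node to '"worker-03"'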

23.29.11.3.21. krib-helm

Installs and runs Helm Charts after a cluster has been constructed. This stage is idempotent and can be run multiple times. This allows operators to create workflows with multiple instances of this stage. The charts to run are determined by the helm/charts parameter.

Due to helm downloads, this stage requires internet access.

This stage also creates a tiller service account. For advanced security, this configuration may not be desirable.
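
For illustration only, a hedged sketch of populating the helm/charts Param mentioned above on a cluster Profile; the Profile name ‘krib-cluster’, the chart shown, and the field names are assumptions here - consult the helm/charts Param documentation for the authoritative schema:

  # hypothetical example: field names are assumed, not authoritative
  drpcli profiles set krib-cluster param helm/charts \
    to '[ { "name": "nginx-ingress", "chart": "stable/nginx-ingress" } ]'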

23.29.11.3.22. krib-helm-charts

Installs and runs Helm Charts after a cluster has been constructed. This stage is idempotent and can be run multiple times. This allows operators to create workflows with multiple instances of this stage. The charts to run are determined by the helm/charts parameter.

Due to helm downloads, this stage requires internet access.

23.29.11.4. tasks

The content package provides the following tasks.

23.29.11.4.1. consul-agent-config

Configures consul agents, to be used by Vault against a consul server cluster

23.29.11.4.2. krib-runtime-install

Installs a container runtime

23.29.11.4.3. krib-sonobuoy

Installs Sonobuoy and runs it against the cluster on the leader. This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set.

NOTE: Sonobuoy may take over an HOUR to complete. The task will remain in progress during this time.

The task only runs on the leader, so the leader must be included in the workflow. All other machines will be skipped, so it is acceptable to run the task on all machines in the cluster.

23.29.11.4.4. consul-server-install

Installs (but does not configure) consul in server mode, to be used as an HA backend to Vault

23.29.11.4.5. krib-config

Sets Param: krib/cluster-join, krib/cluster-admin-conf. Configures Kubernetes using Kubeadm. This uses the Digital Rebar Cluster pattern, so krib/cluster-profile must be set.
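
A minimal sketch of the Digital Rebar Cluster pattern as used by this task; the Profile name ‘krib-cluster’ and $MACHINE_UUID are placeholders for this example:

  # hypothetical example: a shared Profile that names itself as the cluster profile
  drpcli profiles create '{ "Name": "krib-cluster" }'
  drpcli profiles set krib-cluster param krib/cluster-profile to '"krib-cluster"'
  # attach the shared Profile to every machine that will join the cluster
  drpcli machines addprofile $MACHINE_UUID krib-cluster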

23.29.11.4.6. krib-dev-hard-reset

Clears Created Params: krib/, etcd/

23.29.11.4.7. krib-helm-charts

Installs Charts defined in helm/charts. This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set. The install checks to see if tiller is running and may skip initialization.

23.29.11.4.8. krib-ingress-nginx-tillerless

Sets Param: ingress/ip-address. Installs and configures ingress-nginx and optional cert-manager. This uses the Digital Rebar Cluster pattern, so krib/cluster-profile must be set.

23.29.11.4.9. krib-logging

Installs fluent-bit for aggregation of cluster logging to a graylog server. This uses the Digital Rebar Cluster pattern, so krib/cluster-profile must be set.

The install checks to see if tiller is running and may skip initialization.

23.29.11.4.10. vault-kms-plugin

Configures a vault plugin for secret encryption

23.29.11.4.11. containerd-install

Installs containerd using O/S packages

23.29.11.4.12. docker-install

Installs Docker using O/S packages

23.29.11.4.13. krib-helm

Installs Helm and runs helm init (which installs Tiller) on the leader. Installs Charts defined in helm/charts. This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set.

The install checks to see if tiller is running and may skip initialization.

23.29.11.4.14. krib-kubevirt

Installs KubeVirt on the leader. This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set.

The install checks to see if KubeVirt is running and may skip initialization.

Recommendation: you may want to add intel_iommu=on to the kernel-console param (see the sketch at the end of this section).

The Config is provided from the kubevirt-configmap.cfg.tmpl template instead of being downloaded from github. Version updates should be reflected in the template. This approach allows for parameterization of the configuration map.

The kubectl tasks only run on the leader, so the leader must be included in the workflow. All other machines will run virt-host-validate, so it is important to run the task on all machines in the cluster.

At this time, virtctl is NOT installed on the cluster.
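
Following the kernel-console recommendation above, a hedged sketch of setting that Param on a cluster Profile; the Profile name ‘krib-cluster’ and the console portion of the value are assumptions for this example:

  # hypothetical example: append intel_iommu=on to your existing kernel arguments
  drpcli profiles set krib-cluster param kernel-console to '"console=ttyS0,115200 intel_iommu=on"'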

23.29.11.4.15. consul-server-config

Configures a consul server cluster, to be used as an HA backend to Vault

23.29.11.4.16. etcd-config

Sets Param: etcd/servers. If installing Kubernetes via Kubeadm, make sure you install a supported version! This uses the Digital Rebar Cluster pattern, so etcd/cluster-profile must be set.
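
A minimal sketch of satisfying the etcd/cluster-profile requirement by reusing the shared cluster Profile; the Profile name ‘krib-cluster’ is a placeholder for this example:

  # hypothetical example: reuse the shared cluster Profile for etcd as well
  drpcli profiles set krib-cluster param etcd/cluster-profile to '"krib-cluster"'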

23.29.11.4.17. krib-contrail

Installs Contrail via kubectl from the contrail.cfg template. Runs on the master only. The template relies on the cluster VIP as the master IP address.

23.29.11.4.18. kubernetes-install

Downloads Kubernetes installation components from repos. This task relies on the O/S packages being updated and accessible. NOTE: Access to update repos is required!

23.29.11.4.19. vault-install

Installs (but does not configure) consul, to be used as an HA backend to Vault

23.29.11.4.20. consul-agent-install

Installs (but does not configure) consul in agent mode, to be used as an HA backend to Vault

23.29.11.4.21. krib-pkg-prep

Installs prerequisite OS packages prior to starting the KRIB install process. In some use cases this may be a faster pattern than performing the steps in the standard templates.

For example, for Sledgehammer discovered nodes, add the ‘krib-pkg-prep’ stage. As the machine finishes prep, you can move on to setting up other things before kicking off the KRIB workflow.

Uses packages listed in the ‘default’ Schema section of the Param ‘krib/packages-to-prep’. You can override this list by setting the Param in a Profile or directly on the Machines you want to apply this to.

Packages MUST exist in the repositories on the Machines already.
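
A minimal sketch of overriding the prep list on a Profile; the Profile name ‘krib-cluster’ and the packages shown are illustrative only and must already exist in the Machines’ repositories:

  # hypothetical example: override the default package prep list
  drpcli profiles set krib-cluster param krib/packages-to-prep \
    to '["curl", "socat", "ebtables"]'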

23.29.11.4.22. k3s-config

Sets Param: krib/cluster-join, krib/cluster-admin-conf. Configures K3s using built-in commands. This uses the Digital Rebar Cluster pattern, so krib/cluster-profile must be set.

The server is set up to also be an agent; all machines carry workload.

WARNING: Must NOT set etcd/cluster-profile when installing k3s!

23.29.11.4.23. krib-dev-reset

Clears Created Params: krib/, etcd/

23.29.11.4.24. krib-helm-init

Installs Helm and runs helm init (which installs Tiller) on the leader. This uses the Digital Rebar Cluster pattern so krib/cluster-profile must be set.

The install checks to see if tiller is running and may skip initialization.

The task only runs on the leader, so the leader must be included in the workflow. All other machines will be skipped, so it is acceptable to run the task on all machines in the cluster.

23.29.11.4.25. krib-ingress-nginx

Sets Param: ingress/ip-address. Installs and configures ingress-nginx and optional cert-manager. This uses the Digital Rebar Cluster pattern, so krib/cluster-profile must be set.

23.29.11.4.26. vault-config

Configures a vault backend (using consul for storage) for secret encryption