23.42. task-library - Core Task Library

The following documentation is for Core Task Library (task-library) content package at version v4.6.0-beta01.97+g1e33864277dc9bbb839f71f1eff994a0c4f05c23.

23.42.1. RackN Task Library

This content package is a collection of useful stages and operations for Digital Rebar. It also includes several handy workflows such as the CentOS base, Fedora base, and Ubuntu base workflows. You will also find many handy stages in this package, such as the network-lldp stage, which can be added to your discovery workflow to capture additional networking information discovered by LLDP.

23.42.1.1. Cluster Stages

These stages implement the Multi-Machine Cluster Pattern v4.6+.

Allows operators to orchestrate machines into sequential or parallel operation.

23.42.1.2. Inventory Stage

Convert gohai and other JSON data into a map that can be used for analysis, classification or filtering.

23.42.2. Object Specific Documentation

23.42.2.1. params

The content package provides the following params.

23.42.2.1.1. inventory/collect

Map of commands to run to collect inventory input. Each group includes the fields with jq maps to store. For example, adding drpcli gohai will use gohai JSON as input. Then jq will be run with the provided values to collect inventory into an inventory/data Param as a simple map.

To work correctly, commands should emit JSON.

Special Options:

  • Change the command to parse JSON from other sources
  • Add JQargs to supply additional jq arguments before the parse string

Gohai example:

{
  "gohai": {
    "Command": "drpcli gohai",
    "JQargs": "",
    "Fields": {
      "RAM": ".System.Memory.Total / 1048576 | floor",
      "NICs": ".Networking.Interfaces | length"
    }
  }
}

23.42.2.1.2. rsa/key-private

Private SSH Key (secure)

No default is set.

To preserve formatting, | is used instead of \n.

When writing the key inside a consuming task, you should use the following reference code:

tee id_rsa >/dev/null << EOF
{{.Param "rsa/key-private" | replace "|" "\n" }}
EOF
chmod 600 id_rsa

23.42.2.1.3. rsa/key-user

SSH Key User.

23.42.2.1.4. inventory/NICs

From inventory/data, used for indexable data.

23.42.2.1.5. inventory/CPUSpeed

From inventory/data, used for indexable data.

23.42.2.1.6. terraform/tfstate

JSON stored value of the terraform.tfstate file.

Copied from the machine before terraform-apply task. Saved back to machine after terraform-apply task.

If missing, Terraform is run without a state file!

23.42.2.1.7. ansible/playbook-templates

This is an array of strings where each string is a template that renders an Ansible Playbook. They are run in sequence to allow building inventory dynamically during a run.

Output from a playbook can be passed to the next one in the list by setting the ansible/output value. This value gets passed as a variable into the next playbook as part of the params on the machine object json.
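A minimal sketch of what one entry in the template array might render, assuming the Go-template expansion style shown in the other examples in this package (the play itself is hypothetical):

```yaml
- hosts: localhost
  connection: local
  tasks:
    - name: show data expanded from the machine object
      debug:
        msg: "running on {{ .Machine.Name }}"
```

The `{{ .Machine.Name }}` expression is expanded by Digital Rebar template rendering before Ansible ever sees the playbook.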

23.42.2.1.8. inventory/NICMac

From inventory/data, used for indexable data.

23.42.2.1.9. inventory/NICSpeed

From inventory/data, used for indexable data.

This reports the speed as a number and the duplex state as true/false.

23.42.2.1.10. inventory/RaidDisks

From inventory/data, used for indexable data.

23.42.2.1.11. inventory/flatten

Creates each inventory value as a top-level Param. This is needed if you want to filter machines in the API by inventory data, for example when using Terraform filters.

This behavior is very helpful for downstream users of the inventory params because it allows them to be individually retrieved and searched.

Note

This will create LOTS of Params on machines. We recommend that you define Params to match fields instead of relying on ad hoc Params.

23.42.2.1.12. terraform/plan-automation

Like terraform/plan-templates, but intended to be added by other tasks instead of operators.

This is an array of strings where each string is a template that renders a Terraform Plan. The plans are built in sequence and then run from a single terraform apply.

Developer Notes: When used on a Machine, this Param is expected to be temporary and will be automatically deleted during the destroy action phase. This allows automation to set a default plan if none is present, while operators can override the plan by setting it at the profile or global level.

Outputs from a plan can be automatically saved on the Machine.

23.42.2.1.13. inventory/RaidDiskSizes

From inventory/data, used for indexable data.

23.42.2.1.14. terraform/plan-templates

This is an array of strings where each string is a template that renders a Terraform Plan. The plans are built in sequence and then run from a single terraform apply.

Outputs from a plan can be automatically saved on the Machine.

23.42.2.1.15. ansible/output

Generic object that is constructed by the ansible-apply task if the playbook creates a file called [Machine.Name].json

For example, the following task will create the correct output:

- name: output from playbook
  local_action:
    module: copy
    content: "{{`{{ myvar }}`}}"
    dest: "{{ .Machine.Name }}.json"

23.42.2.1.16. cluster/leader

Added during cluster-setup to flag a cluster machine as a leader.

  • Cluster Leader = machines in the cluster defined as leaders
  • Cluster Manager = DRP contexts in the cluster that coordinate activity

For operational guidelines, see Multi-Machine Cluster Pattern v4.6+

23.42.2.1.17. cluster/leaders

====================== DEPRECATED in v4.6 ======================

23.42.2.1.18. inventory/NICInfo

From inventory/data, used for indexable data.

23.42.2.1.19. inventory/integrity

Allows operators to compare new inventory/data to the inventory/data stored on the machine. If true and the values do not match (after the first run), then the Stage will fail.

23.42.2.1.20. storage/mount-devices

23.42.2.1.21. Mount Attached Storage

Ordered list of devices to attempt mounting from the OS.

The storage/mount-devices task will attempt to mount all the drives in the list in order. If the desired mount point is already in use, the code will skip attempting to assign it.

This design allows operators to specify multiple mount points or have a single mount point with multiple potential configurations.

  • rebuild will wipe and rebuild the mount
  • reset will rm -rf all files if the UUID changes

example:

[
  {
    "disk": "/dev/mmcblk0",
    "partition": "/dev/mmcblkp1",
    "mount": "/mnt/storage",
    "type": "xfs",
    "rebuild": true,
    "reset": true,
    "comment": "example"
  },
  {
    "disk": "/dev/sda",
    "partition": "/dev/sda1",
    "mount": "/mnt/storage",
    "type": "xfs",
    "rebuild": true,
    "reset": true,
    "comment": "put something here"
  }
]

23.42.2.1.22. inventory/CPUs

From inventory/data, used for indexable data.

23.42.2.1.23. inventory/Manufacturer

From inventory/data, used for indexable data.

23.42.2.1.24. inventory/RaidControllers

From inventory/data, used for indexable data.

23.42.2.1.25. inventory/RaidTotalDisks

From inventory/data, used for indexable data.

23.42.2.1.26. inventory/check

Using BASH REGEX, define a list of inventory data fields to test with regular expressions. Fields are tested in sequence; the first to fail will halt the script.
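Conceptually, each inventory/check entry pairs a field name with a regular expression that is tested the way BASH's =~ operator works. A minimal sketch (the field name, value, and regex below are hypothetical examples, not the task's actual code):

```shell
# Illustrative only: test a collected inventory value against a BASH
# regex, the way an inventory/check entry like { "RAM": "^[0-9]+$" } would.
value="16384"              # e.g. the collected inventory RAM value
regex="^[0-9]+$"           # e.g. an inventory/check entry for RAM
if [[ "$value" =~ $regex ]]; then
  echo "RAM check passed"
else
  echo "RAM check failed"
  exit 1
fi
```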

23.42.2.1.27. terraform-var/machine_ip

Stores the Machine IP generated when the Terraform output includes machine_ip. The terraform-apply logic will also set this as Machine.Address.

This param is used to determine if the IP should be removed when terraform destroy is called. If it exists, the Machine.Address will be cleared.
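For the machine_ip output to be captured, the plan must declare it; a minimal sketch in Terraform (the aws_instance resource name is a hypothetical example):

```hcl
output "machine_ip" {
  value = aws_instance.example.public_ip
}
```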

23.42.2.1.28. dr-server/initial-user

Defaults to rocketskates

23.42.2.1.29. dr-server/install-drpid

If not set, will use “site-[Machine.Name]”

23.42.2.1.30. inventory/RaidDiskStatuses

From inventory/data, used for indexable data.

23.42.2.1.31. inventory/SerialNumber

From inventory/data, used for indexable data.

23.42.2.1.32. inventory/tpm-fail-on-notools

If set to true, the system will fail if the TPM tools are not present.

Defaults to false.

23.42.2.1.33. reset-workflow

Workflow to set before rebooting system.

23.42.2.1.34. terraform/plan-action

Verb used with Terraform activity to apply or destroy plans. If additional command params are required, append them to this param.

Defaults to apply.
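For example, to tear down a plan without an interactive prompt, the value could be set as follows (assuming standard terraform CLI flags):

```
destroy -auto-approve
```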

23.42.2.1.35. cluster/escape

====================== DEPRECATED in v4.6 ======================

23.42.2.1.36. inventory/DIMMs

From inventory/data, used for indexable data.

23.42.2.1.37. rsa/key-public

Public SSH Key.

No default is set.

23.42.2.1.38. inventory/NICDescr

From inventory/data, used for indexable data.

23.42.2.1.39. inventory/ProductName

From inventory/data, used for indexable data.

23.42.2.1.40. inventory/TpmPublicKey

From inventory/data, used for indexable data.

This is the base64 encoded public key from the TPM.

If an error occurs, it will contain the error.

  • no device - no TPM detected
  • no tools - no tools were available to install

23.42.2.1.41. cluster/filter

Filter used by Cluster Manager to wait until all machines have completed workflows.

Since the filter is expanded in the template, operators can also use BASH variables

Default is Profiles Eq $CLUSTER_PROFILE

For operational guidelines, see Multi-Machine Cluster Pattern v4.6+

23.42.2.1.42. cluster/profile

Name of the profile used by the machines in the cluster for shared information. Typically, this is simply self-referential (it contains the name of the containing profile) to allow machines to know the shared profile.

Note: the default value will be removed in v4.6. In versions 4.3-4.5, the default value was added to make it easier for operators to use clusters without needing to create a cluster/profile value. While helpful, it leads to nonobvious cluster errors that are difficult to troubleshoot.

For operational guidelines, see Multi-Machine Cluster Pattern v4.6+

23.42.2.1.43. dr-server/initial-password

Defaults to r0cketsk8ts

23.42.2.1.44. inventory/CPUCores

From inventory/data, used for indexable data.

23.42.2.1.45. inventory/DIMMSizes

From inventory/data, used for indexable data.

23.42.2.1.46. inventory/data

Stores the data collected by the fields set in inventory/collect. If inventory/integrity is set to true, this is also used as comparison data.

23.42.2.1.47. inventory/tpm-device

The device to use to query the TPM.

Defaults to /dev/tpm0

23.42.2.1.48. ansible/playbooks

Used by the ansible-playbooks-local task.

Runs the provided playbooks in order from the json array. The array contains structured objects with further details about the ansible-playbook action.

Each playbook MUST be stored in either:

  1. a git location accessible from the machine running the task, or
  2. a DRP template accessible to the running DRP

The following properties are included in each array entry:

  • playbook (required): name of the playbook passed to the ansible-playbook cli
  • name (required): determines the target directory of the git clone
  • repo or template (required, mutually exclusive): repo is a git location reachable from the machine from which the playbook can be cloned; template is the name of a DRP template containing a localhost ansible playbook
  • path (optional): if playbooks are nested in a single repo, move into that path
  • commit (optional): used to check out a specific commit or tag from the git history
  • data (optional, boolean): if true, use items provided in args
  • args (optional): additional arguments to be passed to the ansible-playbook cli
  • verbosity (optional, boolean): if false, suppresses output from ansible using no_log.
  • extravars (optional, string): name of a template to expand as part of the extra-vars. See below.

For example

[
  {
    "playbook": "become",
    "name": "become",
    "repo": "https://github.com/ansible/test-playbooks",
    "path": "",
    "commit": "",
    "data": false,
    "args": "",
    "verbosity": true
  },
  {
    "playbook": "test",
    "name": "test",
    "template": "ansible-playbooks-test-playbook.yaml.tmpl",
    "path": "",
    "commit": "",
    "data": false,
    "args": "",
    "verbosity": true
  }
]

Using extravars allows pulling data from Digital Rebar template expansion into GitHub-based playbooks. These will show up as top-level variables. For example:

foo: '{{ .Param "myvar" }}'
bar: '{{ .Param "othervar" }}'

23.42.2.1.49. cluster/step

====================== DEPRECATED in v4.6 ======================

23.42.2.1.50. inventory/RAM

From inventory/data, used for indexable data.

23.42.2.1.51. inventory/Family

From inventory/data, used for indexable data.

23.42.2.1.52. ansible-inventory

Holds the value from the ansible-inventory task.

23.42.2.1.53. cluster/leader-count

Used by cluster-initialize to add the cluster/leader Param to cluster leaders.

For operational guidelines, see Multi-Machine Cluster Pattern v4.6+

23.42.2.1.54. cluster/machines

====================== DEPRECATED in v4.6 ======================

23.42.2.1.55. cluster/manager

Added during cluster-setup to flag a cluster machine as a manager.

  • Cluster Leader = machines in the cluster defined as leaders
  • Cluster Manager = DRP contexts in the cluster that coordinate activity

For operational guidelines, see Multi-Machine Cluster Pattern v4.6+

23.42.2.1.56. context/name

Used by context-set as the target for the Param.BaseContext value.

Defaults to “” (Machine context)

23.42.2.1.57. drive-signatures

A map of drives to their SHA1 signatures.
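A plausible shape for this param (the device names and hash values below are hypothetical examples):

```json
{
  "/dev/sda": "da39a3ee5e6b4b0d3255bfef95601890afd80709",
  "/dev/sdb": "3f786850e387550fdab836ed7e6dc881de23001b"
}
```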

23.42.2.1.58. inventory/CPUType

From inventory/data, used for indexable data.

23.42.2.1.59. network/firewall-ports

Map of ports to open for Firewalld, including the /tcp or /udp filter.

To skip this task, set the value to an empty array [].
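The Skip note implies an array-style value; a plausible example (the port numbers are illustrative):

```json
[
  "22/tcp",
  "8091/tcp",
  "8092/tcp",
  "53/udp"
]
```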

23.42.2.1.60. inventory/Hypervisor

From inventory/data, used for indexable data.

23.42.2.2. profiles

The content package provides the following profiles.

23.42.2.2.1. bootstrap-contexts

Bootstrap Digital Rebar server for advanced operation

  • Context Operations
    • Installs Docker and downloads context containers
    • Locks the endpoint to prevent accidental operations

This is designed to be extended or replaced with a site specific bootstrap that uses the base tasks but performs additional bootstrapping.

23.42.2.2.2. bootstrap-elasticsearch

Profile to bootstrap elasticsearch

23.42.2.2.3. bootstrap-kibana

Profile to bootstrap kibana

23.42.2.3. stages

The content package provides the following stages.

23.42.2.3.1. cluster-remove

DEPRECATED in v4.6

23.42.2.3.2. inventory

Collects selected fields from Gohai into a simpler flat list.

The process uses JQ filters, defined in inventory/fields, to build inventory/data on each machine.

It also applies the inventory/check map to the data and will fail if the checks do not pass.

23.42.2.3.3. bootstrap-advanced

DEPRECATED: Use universal-bootstrap with the bootstrap-contexts profile. Bootstrap stage to build out an advanced setup.

This augments the bootstrap-base. It does NOT replace it.

Bootstrap Operations:

  • Install Docker & build Contexts if defined
  • Lock the Machine

23.42.2.3.4. drive-signature

Builds a signature for each drive and stores that on the machine.

23.42.2.3.5. ansible-playbooks-local

Invoke ansible playbooks from git.

Note

ansible/playbooks is a Required Param - List of playbook git repos to run on the local machine.

23.42.2.3.6. cluster-setup

DEPRECATED in v4.6. Initializes a Cluster using Digital Rebar Patterns.

  1. will verify machine is in a shared profile
  2. will create shared profile if missing
  3. will add profile to machine if missing
  4. will make machine leader if no leader is present

23.42.2.3.7. cluster-step

DEPRECATED in v4.6

23.42.2.3.8. cluster-add

DEPRECATED in v4.6

23.42.2.3.9. cluster-sync

DEPRECATED in v4.6

23.42.2.3.10. drive-signature-verify

Verifies signatures for drives.

23.42.2.3.11. ansible-inventory

Collects ansible inventory data from ansible’s setup module.

Note

This will attempt to install ansible if it is not already installed.

23.42.2.4. tasks

The content package provides the following tasks.

23.42.2.4.1. ansible-join-up

Runs an embedded Ansible Playbook to run the DRP join-up process.

Requires an ansible context.

Expects to be in a Workflow that allows the joined machine to continue Discovery and configuration steps as needed.

Expects rsa-key-create to have run before this stage is called; the public key MUST be on the target machine.

Idempotent - checks to see if service is installed and will not re-run join-up.

23.42.2.4.2. cluster-step

DEPRECATED IN v4.6

Waits until machine is in Nth (or earlier) position.

Relies on cluster-remove to advance the queue.

Typical cluster workflow is add -> step -> remove -> sync.

Where step or sync are optional. See cluster-add task for more details.

23.42.2.4.3. stage-chooser

This task uses the stage-chooser/next-stage and stage-chooser/reboot parameters to change the stage of the current machine and possibly reboot.

This is not intended for use in a stage chain or workflow. Rather, it is a transient stage that can be used on a machine that is idle with a runner executing.

23.42.2.4.4. drive-signature

Generate a signature for each drive and record them on the machine.

23.42.2.4.5. elasticsearch-setup

A task to install and set up elasticsearch. This is a very simple single instance.

23.42.2.4.6. ansible-apply

Runs one or more Ansible Playbook templates as defined by the ansible/playbook-templates variable in the stage calling the task.

Requires an ansible context.

Expects to have rsa-key-create run before stage is called so that rsa/* params exist.

Information can be chained together by having the playbook write [Machine.Uuid].json as a file. This will be saved on the machine as Param.ansible/output. The entire Machine JSON is passed into the playbook as the digitalrebar variable so it is available.

23.42.2.4.7. ansible-inventory

Installs ansible, if needed, and records the setup module ansible variables onto the machine as a parameter named ansible-inventory.

23.42.2.4.8. ansible-playbooks-local

A task to invoke a specific set of ansible playbooks pulled from git.

Sequence of operations (loops over all entries):

  1. collect args if provided
  2. git clone repo to name
  3. git checkout commit if provided
  4. cd to name & path
  5. run ansible-playbook playbook with args if provided
  6. remove the directories

Note

Requires Param ansible/playbooks - List of playbook git repos to run on the local machine.

23.42.2.4.9. cluster-remove

DEPRECATED in v4.6. Removes the Machine Uuid from the cluster profile.

Typical cluster workflow is add -> step -> remove -> sync

Where step or sync are optional.

See cluster-add task for more details.

23.42.2.4.10. inventory-check

Using the inventory/collect parameter, filters the JSON command output into inventory/data hashmap for rendering and validation.

Will apply inventory/check for field validation.

23.42.2.4.11. terraform-apply

Runs one or more Terraform Plan templates as defined by the terraform/plan-templates variable in the stage calling the task.

Requires a terraform context.

The terraform apply is only called once. All plans in the list are generated first. If sequential operations beyond the plan are needed, use multiple calls to this task.

Information can be chained together by having the plan output saved on the machine as Param.terraform-var/[output var].

Only Name, UUID, Address of the Machine are automatically passed into the plan; however, the plans can use the .Param and .ParamExists template to pull any value needed.

Terraform State is stored as the Param terraform/tfstate on the Machine after the first execution. It is then retrieved for all subsequent runs so that Terraform is able to correctly use its state values.

For developers: use the terraform/plan-automation parameter if you need a task that uses terraform behind the scenes. This parameter is automatically removed from machines (not profiles!) on destroy.

Notes:

  • Having SSH keys requires using the ‘rsa-key-create’ generator task.
  • If creating cloud machines, use the ‘ansible-join-up’ task for join-up.

23.42.2.4.12. workflow-pause

This task will set the machine to NOT runnable but leave the runner running. This acts as a pause in the workflow that other systems can restart.

23.42.2.4.13. context-set

This task allows stages to change the BaseContext for a machine as part of a larger workflow.

This is especially helpful when creating a new machine using an endpoint context (such as Terraform creating a machine) and then allowing the machine to transition to a runner when it starts.

The code is written to allow both clearing the BaseContext (set to “”) or setting it to a known value. If setting to a value, the code ensures that the Context exists.

23.42.2.4.14. drive-signature-verify

Using the signatures on the machine, validate each drive’s signature matches.

23.42.2.4.15. network-lldp

Assumes Sledgehammer has LLDP daemon installed so that we can collect data.

Also requires LLDP to be enabled on neighbors, including switches, so that they send the correct broadcast packets.

23.42.2.4.16. cluster-add

DEPRECATED in v4.6

Injects the Machine Uuid into the cluster/machines param. This is the first step in the cluster workflow. Typical cluster workflow is add -> step -> remove -> sync, where step or sync are optional.

The cluster-* stages provide basic queue management for machines with a shared profile. That profile must have a cluster/profile field that names the profile. Once the queue is running, machines are tracked in the cluster/machines parameter.

  • cluster-add enqueues the machine
  • cluster-step services the queue (takes the next cluster/step in line)
  • cluster-remove dequeues the machine (should be done AFTER servicing)
  • cluster-sync forces all machines to wait until the queue is empty

If you are doing a rolling upgrade, use the following sequence:

  • cluster-add -> cluster-step -> [do work] -> cluster-remove

If you are coalescing work over multiple machines, use the following sequence:

  • cluster-add -> [do work] -> cluster-remove -> cluster-sync

The cluster-add process will automatically elect cluster/leader-count leaders during the enqueue step. These cluster/leaders will be maintained after the run is over and require manual editing to remove. It is possible to pre-create the leader list.

It is possible to do multiple add/remove steps in a single workflow.

If your cluster is NOT behaving, you can rescue or escape from the cluster stages by setting cluster/escape on the profile. This will interrupt a cluster in process. Using escape=0 will allow the cluster stages to exit normally and noop, while escape=1 will exit with an error and the stages will stop progressing. Escape=1 is safer since downstream actions are blocked.

23.42.2.4.17. cluster-initialize

For the v4.6 Cluster Pattern

Creates the Cluster for Digital Rebar

  1. will create a shared profile if missing (named after the cluster manager machine)
  2. identifies the machine running this task as the cluster manager
  3. will add cluster machines to the shared profile based on cluster/filter
  4. will make machine leader(s) if no leader is present

Note: your cluster/filter must apply to >0 machines!

For operational guidelines, see Multi-Machine Cluster Pattern v4.6+

23.42.2.4.18. cluster-setup

DEPRECATED in v4.6

Creates the Cluster for Digital Rebar

  1. will verify machine is in a shared profile
  2. will create shared profile if missing
  3. will add profile to machine if missing
  4. will make machine leader if no leader is present

23.42.2.4.19. rsa-key-create

Uses ssh-keygen to create an RSA public/private key pair.

Stores keys in rsa/key-public and rsa/key-private on the machine.

The public key (which contains newlines) is stored in format where newlines are converted to |.

Noop if the rsa/key-private exists.
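The pipe-delimited storage convention can be illustrated with a short shell sketch (the multi-line text here is a placeholder, not a real key; the actual task may encode keys differently):

```shell
# Illustration of the pipe-delimited storage convention: newlines become
# "|" for storage in the Param, and tr restores them on the way out.
key=$'-----BEGIN KEY-----\nabc123\n-----END KEY-----'
encoded=$(printf '%s' "$key" | tr '\n' '|')
printf '%s\n' "$encoded"              # -----BEGIN KEY-----|abc123|-----END KEY-----
printf '%s' "$encoded" | tr '|' '\n'  # restores the original three lines
```

This is the inverse of the `replace "|" "\n"` template call shown in the rsa/key-private reference code.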

23.42.2.4.20. bootstrap-contexts

Multi-Step Docker-Context Bootstrap.

This task is idempotent.

IMPORTANT NOTES:

  • The stage installs Podman and the Docker-Context Plugin even if you do not have Docker-Contexts defined
  • Contexts are a licensed feature (the stage will abort if no license exists)

  1. Installs Podman/Docker on the DRP Host
  2. Installs the Docker-Context from the Catalog
  3. Creates the runner context if missing
  4. Attempts to build & upload images for all Docker-Contexts
  5. Initializes Podman/Docker for all Contexts

Podman/Docker build files are discovered in the following precedence order:

  1. using a Container Buildfile from files/contexts/docker-build/[contextname]
  2. using the URL from Meta.Imagepull
  3. using a Container Buildfile downloaded from Context Meta.Dockerfile
  4. using a Docker Image from Context Meta.Dockerpull

Once downloaded and built, they are uploaded to the correct files location.

23.42.2.4.21. cluster-sync

DEPRECATED in v4.6

Waits until no machines remain. Use with cluster-remove.

Typical cluster workflow is add -> step -> remove -> sync

Where step or sync are optional.

See cluster-add task for more details.

23.42.2.4.22. dr-server-install

Installs the DRP Server using the Ansible Context. Sets the DRP-ID to Machine.Name.

NOTE: All connections are FROM THE MANAGER; the provisioned site does NOT need to connect to the manager.

LIMITATIONS:

  • firewall features are only available for the CentOS family

The primary use cases for this task are

  1. Creating a remote site for Multi-Site-Manager
  2. Building development installations for rapid testing

Requires the install.sh and zip artifacts to be in bootstrap/*

Will transfer the DRP license to the machine being created.

If DRP is already installed, this will restart DRP and update the license.

For operators, this feature makes it easy to create new edge sites using DRP Manager.

23.42.2.4.23. kibana-setup

A task to install and set up kibana. This is a very simple single instance.

23.42.2.4.24. network-firewall

Requires firewall-cmd or ufw to be enabled on the system.

Will reset the firewall at the end of the task.

23.42.2.4.25. storage-mount-devices

Uses array of devices from storage/mount-devices to attach storage to system.

If a storage area is needed, this task will mount the requested resources under /mnt/storage.

See Mount Attached Storage

23.42.2.5. workflows

The content package provides the following workflows.

23.42.2.5.1. ubuntu-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the Ubuntu provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the Ubuntu-18.04 ISO as per the ubuntu-18.04 BootEnv

23.42.2.5.2. bootstrap-advanced

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations. Use universal-bootstrap with bootstrap-contexts profile

Bootstrap Digital Rebar server for advanced operation Includes the bootstrap-base!

REQUIRES that the Endpoint Agent has been enabled.

  • Basic Operations
    • Make sure Sledgehammer bootenvs are loaded for operation
    • Set the basic default preferences
    • Set up an ssh key pair and install it to the global profile
    • Lock the endpoint to prevent accidental operations
  • Advanced Operations
    • Install Docker and download context containers

This is designed to be extended or replaced with a site specific bootstrap that uses the base tasks but performs additional bootstrapping.

23.42.2.5.3. centos-7-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the CentOS provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the CentOS 7 ISO as per the centos-7 BootEnv

23.42.2.5.4. centos-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the CentOS provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the CentOS ISO as per the centos-8 BootEnv

23.42.2.5.5. discover-joinup

Discover including firewalld, run as a service and complete-nobootenv

This workflow is recommended for joining cloud machines instead of discover-base.

NOTE: You must set defaultBootEnv to sledgehammer in order to use join-up to discover machines.

Some operators may choose to first create placeholder machines and then link with join-up.sh to the placeholder machine model using the UUID. See the ansible-joinup task for an example.

These added stages help for existing machines that may already have basic configuration (like firewalld) and a stable operating system (runner-service allows reboots).

Complete-nobootenv ensures that Digital Rebar does not force the workflow into a bootenv (potentially rebooting) when finished.

23.42.2.5.6. fedora-33-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the Fedora 33 provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the Fedora ISO as per the fedora-33 BootEnv

23.42.2.5.7. fedora-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the Fedora provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the Fedora ISO as per the fedora-31 BootEnv

23.42.2.5.8. ubuntu-20.04-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows.

This workflow includes the DRP Runner in the Ubuntu provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the Ubuntu-20.04 ISO as per the ubuntu-20.04 BootEnv