21.43. task-library - Core Task Library

The following documentation is for the Core Task Library (task-library) content package at version v4.8.0-alpha00.25+g8e224a5bec085c80968ec27e0332c339a65d0659.

21.43.1. RackN Task Library

This content package is a collection of useful stages and operations for Digital Rebar. It also includes several handy workflows like the CentOS base, Fedora base, and Ubuntu base workflows. You can also find many handy stages here, such as the network-lldp stage, which can be added to your discovery workflow to capture additional networking information discovered via LLDP.

21.43.1.1. Cluster Stages

These stages implement the Multi-Machine Cluster Pattern v4.6+.

They allow operators to orchestrate machines into sequential or parallel operation.

21.43.1.2. Inventory Stage

Convert gohai and other JSON data into a map that can be used for analysis, classification or filtering.

21.43.2. Object Specific Documentation

21.43.2.1. workflows

The content package provides the following workflows.

21.43.2.1.1. centos-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the CentOS provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the CentOS ISO as per the centos-8 BootEnv

21.43.2.1.2. fedora-33-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the Fedora 33 provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the Fedora ISO as per the fedora-33 BootEnv

21.43.2.1.3. fedora-34-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the Fedora 34 provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the Fedora ISO as per the fedora-34 BootEnv

21.43.2.1.4. ubuntu-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the Ubuntu provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the Ubuntu-18.04 ISO as per the ubuntu-18.04 BootEnv

21.43.2.1.5. bootstrap-advanced

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations. Use universal-bootstrap with the bootstrap-contexts profile.

Bootstrap Digital Rebar server for advanced operation. Includes the bootstrap-base!

REQUIRES that the Endpoint Agent has been enabled.

  • Basic Operations
    * Make sure Sledgehammer bootenvs are loaded for operation.
    * Set the basic default preferences.
    * Set up an ssh key pair and install it to the global profile.
    * Lock the endpoint to prevent accidental operations.
  • Advanced Operations
    * Install Docker and download context containers.

This is designed to be extended or replaced with a site specific bootstrap that uses the base tasks but performs additional bootstrapping.

21.43.2.1.6. centos-7-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the CentOS provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the CentOS 7 ISO as per the centos-7 BootEnv

21.43.2.1.7. discover-joinup

Discovery including the firewalld, runner-service, and complete-nobootenv stages.

This workflow is recommended for joining cloud machines instead of discover-base.

NOTE: You must set defaultBootEnv to sledgehammer in order to use join-up to discover machines.

Some operators may choose to first create placeholder machines and then use join-up.sh to link to the placeholder machine using the UUID. See the ansible-join-up task for an example.

These added stages help for existing machines that may already have basic configuration (like firewalld) and a stable operating system (runner-service allows reboots).

Complete-nobootenv ensures that Digital Rebar does not force the workflow into a bootenv (potentially rebooting) when finished.

21.43.2.1.8. fedora-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows and Deprecations.

This workflow includes the DRP Runner in the Fedora provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the Fedora ISO as per the fedora-31 BootEnv

21.43.2.1.9. ubuntu-20.04-base

Warning

DEPRECATED - This workflow will be removed from future versions of DRP. Please use the universal content pack and workflows. See kb-00061: Deploying Linux with Universal Workflows.

This workflow includes the DRP Runner in the Ubuntu provisioning process for DRP.

After the install completes, the workflow installs the runner in a waiting state so that DRP will automatically detect and start a new workflow if the Machine.Workflow is updated.

Note

To enable, upload the Ubuntu-20.04 ISO as per the ubuntu-20.04 BootEnv

21.43.2.2. params

The content package provides the following params.

21.43.2.2.1. inventory/NICInfo

From inventory/data, used for indexable data.

21.43.2.2.2. network/firewall-ports

Map of ports to open for Firewalld including the /tcp or /udp filter.

Skip: set to an empty array [] to skip this task.

21.43.2.2.3. inventory/NICs

From inventory/data, used for indexable data.

21.43.2.2.4. inventory/RAM

From inventory/data, used for indexable data.

21.43.2.2.5. ansible/playbook-templates

This is an array of strings where each string is a template that renders an Ansible Playbook. They are run in sequence to allow building inventory dynamically during a run.

Output from a playbook can be passed to the next one in the list by setting the ansible/output value. This value gets passed as a variable into the next playbook as part of the params on the machine object json.
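
For example, a minimal sketch of setting this Param in YAML (the template names below are hypothetical placeholders for templates that render playbooks):

ansible/playbook-templates:
  - my-prepare-playbook.yaml.tmpl
  - my-configure-playbook.yaml.tmpl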

21.43.2.2.6. inventory/Manufacturer

From inventory/data, used for indexable data.

21.43.2.2.7. inventory/Hypervisor

From inventory/data, used for indexable data.

21.43.2.2.8. inventory/RaidDisks

From inventory/data, used for indexable data.

21.43.2.2.9. reset-workflow

Workflow to set before rebooting system.
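
For illustration, setting the Param in YAML (discover-base is used here only as an example value; substitute the workflow you want to run after the reboot):

reset-workflow: discover-base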

21.43.2.2.10. terraform-var/machine_ip

Stores the Machine IP generated when the Terraform output includes machine_ip. The terraform-apply logic will also set this as Machine.Address.

This param is used to determine if the IP should be removed when terraform destroy is called. If it exists, the Machine.Address will be cleared.

21.43.2.2.11. cluster/filter

Filter used by Cluster Manager to wait until all machines have completed workflows.

Since the filter is expanded in the template, operators can also use BASH variables

Default is Profiles Eq $CLUSTER_PROFILE

For operational guidelines, see Multi-Machine Cluster Pattern v4.6+
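
For example, a sketch of overriding the default filter in YAML (the profile name my-cluster is a placeholder):

cluster/filter: "Profiles Eq my-cluster"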

21.43.2.2.12. cluster/leaders

====================== DEPRECATED in v4.6 ======================

21.43.2.2.13. rsa/key-user

SSH Key User.

21.43.2.2.14. storage/mount-devices

Mount Attached Storage

Ordered list of devices to attempt mounting from the OS.

The storage/mount-devices task will attempt to mount all the drives in the list in order. If the desired mount point is already in use, the code will skip attempting to assign it.

This design allows operators to specify multiple mount points or have a single mount point with multiple potential configurations.

  • rebuild will wipe and rebuild the mount
  • reset will rm -rf all files if the UUID changes

example:

[
  {
    "disk": "/dev/mmcblk0",
    "partition": "/dev/mmcblk0p1",
    "mount": "/mnt/storage",
    "type": "xfs",
    "rebuild": true,
    "reset": true,
    "comment": "example"
  },
  {
    "disk": "/dev/sda",
    "partition": "/dev/sda1",
    "mount": "/mnt/storage",
    "type": "xfs",
    "rebuild": true,
    "reset": true,
    "comment": "put something here"
  }
]

21.43.2.2.16. inventory/ProductName

From inventory/data, used for indexable data.

21.43.2.2.17. bootstrap-tools

This is an array of strings where each string is the name of a package to be installed on the base Operating System that is running on the DRP Endpoint.

This is used by the bootstrapping system to add packages to the DRP Endpoint.

By default no packages are specified. If the operator sets this Param on the self-runner Machine object (either directly or via a Profile), then runs one of the bootstrap workflows, the packages will be installed.

An example workflow is universal-bootstrap.

Example setting in YAML:

bootstrap-tools:
  - package1
  - package2

Or in JSON:

{ "bootstrap-tools": [ "package1", "package2" ] }

21.43.2.2.18. drive-signatures

A map of drive to SHA1 signatures.

21.43.2.2.19. inventory/Family

From inventory/data, used for indexable data.

21.43.2.2.20. inventory/tpm-device

The device to use to query the TPM.

Defaults to /dev/tpm0
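
For example, to point the inventory tasks at a different device node (the value below is only illustrative):

inventory/tpm-device: /dev/tpmrm0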

21.43.2.2.21. ansible/playbooks

Used by the ansible-playbooks-local task.

Runs the provided playbooks in order from the json array. The array contains structured objects with further details about the ansible-playbook action.

Each playbook MUST be stored in either:

  1. a git location accessible from the machine running the task, or
  2. a DRP template accessible to the running DRP.

The following properties are included in each array entry:

  • playbook (required): name of the playbook passed into the ansible-playbook cli
  • name (required): determines the target of the git clone
  • either (required, mutually exclusive):
    * repo: path, reachable from the machine, where the playbook can be git cloned from
    * template: name of a DRP template containing a localhost ansible playbook
  • path (optional): if playbooks are nested in a single repo, move into that playbook
  • commit (optional): commit used to checkout a specific commit or tag from the git history
  • data (optional, boolean): if true, use items provided in args
  • args (optional): additional arguments to be passed into the ansible-playbook cli
  • verbosity (optional, boolean): if false, suppresses output from ansible using no_log.
  • extravars (optional, string): name of a template to expand as part of the extra-vars. See below.

For example

[
  {
    "playbook": "become",
    "name": "become",
    "repo": "https://github.com/ansible/test-playbooks",
    "path": "",
    "commit": "",
    "data": false,
    "args": "",
    "verbosity": true
  },
  {
    "playbook": "test",
    "name": "test",
    "template": "ansible-playbooks-test-playbook.yaml.tmpl",
    "path": "",
    "commit": "",
    "data": false,
    "args": "",
    "verbosity": true
  }
]

Using extravars allows pulling data from Digital Rebar template expansion into github based playbooks. These will show up as top level variables. For example:

foo: "{{ .Param "myvar" }}"
bar: "{{ .Param "othervar" }}"

21.43.2.2.22. dr-server/install-drpid

If not set, will use “site-[Machine.Name]”

21.43.2.2.23. cluster/machines

====================== DEPRECATED in v4.6 ======================

21.43.2.2.24. inventory/NICSpeed

From inventory/data, used for indexable data.

This reports the speed as a number and the duplex state as true/false.

21.43.2.2.25. inventory/RaidDiskStatuses

From inventory/data, used for indexable data.

21.43.2.2.26. inventory/RaidTotalDisks

From inventory/data, used for indexable data.

21.43.2.2.27. inventory/TpmPublicKey

From inventory/data, used for indexable data.

This is the base64 encoded public key from the TPM.

If an error occurs, it will contain the error.

  • no device - no TPM detected
  • no tools - no tools were available to install

21.43.2.2.28. inventory/data

Stores the data collected by the fields set in inventory/collect. If inventory/integrity is set to true, this data is also used for comparison.

21.43.2.2.29. cluster/profile

Name of the profile used by the machines in the cluster for shared information. Typically, this is simply self-referential (it contains the name of the containing profile) to allow machines to know the shared profile.

Note: the default value will be removed in v4.6. In versions 4.3-4.5, the default value was added to make it easier for operators to use clusters without having to understand the need to create a cluster/profile value. While helpful, it leads to non-obvious cluster errors that are difficult to troubleshoot.

For operational guidelines, see Multi-Machine Cluster Pattern v4.6+
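
As an illustration of the self-referential pattern (a sketch assuming the standard Profile object layout of Name and Params; the profile name is a placeholder):

Name: my-cluster
Params:
  cluster/profile: my-cluster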

21.43.2.2.30. inventory/CPUSpeed

From inventory/data, used for indexable data.

21.43.2.2.31. inventory/CPUs

From inventory/data, used for indexable data.

21.43.2.2.32. inventory/tpm-fail-on-notools

If set to true, the system will fail if the TPM tools are not present.

Defaults to false.

21.43.2.2.33. rsa/key-private

Private SSH Key (secure)

No default is set.

To preserve formatting, | is used instead of \n.

When writing the key inside a consuming task, you should use the following reference code:

tee id_rsa >/dev/null << EOF
{{.Param "rsa/key-private" | replace "|" "\n" }}
EOF
chmod 600 id_rsa

21.43.2.2.34. cluster/leader-count

Used by cluster-initialize to add the cluster/leader Param to cluster leaders.

For operational guidelines, see Multi-Machine Cluster Pattern v4.6+
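
For example, to elect three leaders during cluster initialization, the shared profile could carry (the value is illustrative):

cluster/leader-count: 3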

21.43.2.2.35. cluster/step

====================== DEPRECATED in v4.6 ======================

21.43.2.2.36. inventory/RaidControllers

From inventory/data, used for indexable data.

21.43.2.2.37. dr-server/initial-user

Defaults to rocketskates

21.43.2.2.38. inventory/NICDescr

From inventory/data, used for indexable data.

21.43.2.2.39. inventory/SerialNumber

From inventory/data, used for indexable data.

21.43.2.2.40. terraform/plan-automation

Like terraform/plan-templates, but to be added by other tasks instead of operators.

This is an array of strings where each string is a template that renders a Terraform Plan. They are built in sequence and then run from a single terraform apply.

Developer Notes: When used on a Machine, this Param is expected to be temporary and will be automatically deleted during the destroy action phase. This allows automation to set a default plan if none is present, but operators can override the plan by setting it at the profile or global level.

Outputs from a plan can be automatically saved on the Machine.

21.43.2.2.41. terraform/plan-templates

This is an array of strings where each string is a template that renders a Terraform Plan. They are built in sequence and then run from a single terraform apply.

Outputs from a plan can be automatically saved on the Machine.
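
A minimal sketch of setting this Param in YAML (the template names are hypothetical placeholders for templates that render Terraform plans):

terraform/plan-templates:
  - my-network-plan.tf.tmpl
  - my-instance-plan.tf.tmpl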

21.43.2.2.42. cluster/manager

Added during cluster-setup to flag a cluster machine as the manager.

Cluster Leader = machines in the cluster defined as leaders
Cluster Manager = DRP contexts in the cluster that coordinate activity

For operational guidelines, see Multi-Machine Cluster Pattern v4.6+

21.43.2.2.43. inventory/RaidDiskSizes

From inventory/data, used for indexable data.

21.43.2.2.44. inventory/CPUCores

From inventory/data, used for indexable data.

21.43.2.2.45. inventory/integrity

Allows operators to compare new inventory/data to the stored inventory/data on the machine. If true and the values do not match (after the first run), then the Stage will fail.

21.43.2.2.46. rsa/key-public

Public SSH Key.

No default is set.

21.43.2.2.47. terraform/plan-action

Verb used with Terraform activity to apply or destroy plans. If additional command params are required, append them to this param.

Defaults to apply.
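
For example, to tear down plans instead of applying them (a minimal sketch):

terraform/plan-action: destroy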

21.43.2.2.48. ansible/output

Generic object that is constructed by the ansible-apply task if the playbook creates a file called [Machine.Name].json

For example, the following task will create the correct output:

- name: output from playbook
  local_action:
    module: copy
    content: "{{`{{ myvar }}`}}"
    dest: "{{ .Machine.Name }}.json"

21.43.2.2.49. dr-server/initial-password

Defaults to r0cketsk8ts

21.43.2.2.50. inventory/CPUType

From inventory/data, used for indexable data.

21.43.2.2.51. inventory/NICMac

From inventory/data, used for indexable data.

21.43.2.2.52. inventory/check

Using BASH REGEX, define a list of inventory data fields to test using Regular Expressions. Fields are tested in sequence; the first to fail will halt the script.
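
An illustrative sketch, assuming a map of inventory field names to regular expressions (the field names refer to inventory/data entries and the patterns are placeholders):

inventory/check:
  Manufacturer: "Dell.*"
  CPUCores: "^(16|32)$"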

21.43.2.2.53. inventory/collect

Map of commands to run to collect Inventory input. Each group includes the fields with jq maps to store. For example, adding drpcli gohai will use gohai JSON as input. Then jq will be run with the provided values to collect inventory into inventory/data as a simple map.

To work correctly, commands should emit JSON.

Special Options:

  • Change the command to parse JSON from other sources
  • Add JQargs to give hints into jq arguments before the parse string

Gohai example:

{
  "gohai": {
    "Command": "drpcli gohai",
    "JQargs": "",
    "Fields": {
      "RAM": ".System.Memory.Total / 1048576 | floor",
      "NICs": ".Networking.Interfaces | length"
    }
  }
}

21.43.2.2.54. cluster/leader

Added during cluster-setup to flag a cluster machine as a leader.

Cluster Leader = machines in the cluster defined as leaders
Cluster Manager = DRP contexts in the cluster that coordinate activity

For operational guidelines, see Multi-Machine Cluster Pattern v4.6+

21.43.2.2.55. context/name

Used by context-set as the target for the Param.BaseContext value.

Defaults to “” (Machine context)

21.43.2.2.56. inventory/DIMMs

From inventory/data, used for indexable data.

21.43.2.2.57. terraform/tfstate

JSON stored value of the terraform.tfstate file.

Copied from the machine before terraform-apply task. Saved back to machine after terraform-apply task.

If missing, Terraform is run without a state file!

21.43.2.2.58. cluster/escape

====================== DEPRECATED in v4.6 ======================

21.43.2.2.59. inventory/DIMMSizes

From inventory/data, used for indexable data.

21.43.2.2.60. ansible-inventory

Holds the value from the ansible-inventory task.

21.43.2.2.61. inventory/flatten

Creates each inventory value as a top-level Param. This is needed if you want to filter machines in the API by inventory data, for example using Terraform filters.

This behavior is very helpful for downstream users of the inventory params because it allows them to be individually retrieved and searched.

Note

This will create LOTS of Params on machines. We recommend that you define Params to match fields instead of relying on ad hoc Params.
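
Assuming the Param is a simple boolean flag (a hedged sketch), enabling flattening would look like:

inventory/flatten: true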

21.43.2.3. profiles

The content package provides the following profiles.

21.43.2.3.1. bootstrap-elasticsearch

Profile to bootstrap elasticsearch

21.43.2.3.2. bootstrap-ipmi

Bootstrap Digital Rebar server with the IPMI plugin provider, and install the ipmitool package for IPMI protocol operations.

21.43.2.3.3. bootstrap-kibana

Profile to bootstrap kibana

21.43.2.3.4. bootstrap-tools

Bootstrap Digital Rebar server with commonly required tools for content/plugins (e.g. ipmitool).

21.43.2.3.5. bootstrap-contexts

Bootstrap Digital Rebar server for advanced operation

  • Context Operations
    * Installs Docker and downloads context containers
    * Locks the endpoint to prevent accidental operations

This is designed to be extended or replaced with a site specific bootstrap that uses the base tasks but performs additional bootstrapping.

21.43.2.3.6. bootstrap-drp-endpoint

Bootstrap Digital Rebar server with the bootstrap operations for:

  • bootstrap-tools - install additional packages (*)
  • bootstrap-ipmi - install ipmitool package and ipmi plugin provider if needed
  • bootstrap-contexts - install the docker-context plugin_provider, and contexts in installed content

Intended to be driven by a bootstrapping workflow on the DRP Endpoint (like universal-bootstrap).

Note

(*) The bootstrap-tools specification exists in the bootstrap-ipmi Profile definition. It is not explicitly called out here, as that would duplicate the package install process needlessly.

The bootstrap-ipmi Profile defines the Param bootstrap-tools to contain ipmitool. The Param is a composable Param, so all instances of the Param will be aggregated together in one list, instead of the regular order of precedence.
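
As an illustration of that composable behavior, a site-specific profile could add its own packages, which would then be aggregated with the ipmitool entry from bootstrap-ipmi (a sketch assuming the standard Profile object layout; the package names are placeholders):

Name: site-bootstrap-extras
Params:
  bootstrap-tools:
    - tmux
    - jq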

21.43.2.4. stages

The content package provides the following stages.

21.43.2.4.1. runner-service

This stage has been deprecated. Use drp-agent from drp-community-content instead.

21.43.2.4.2. ansible-playbooks-local

Invoke ansible playbooks from git.

Note

ansible/playbooks is a Required Param - List of playbook git repos to run on the local machine.

21.43.2.4.3. cluster-remove

DEPRECATED in v4.6

21.43.2.4.4. drive-signature-verify

Verifies signatures for drives.

21.43.2.4.5. inventory

Collects selected fields from Gohai into a simpler flat list.

The process uses JQ filters, defined in inventory/fields, to build inventory/data on each machine.

Also, applies the inventory/check map to the data and will fail if the checks do not pass.

21.43.2.4.6. cluster-add

DEPRECATED in v4.6

21.43.2.4.7. drive-signature

Builds a signature for each drive and stores that on the machine.

21.43.2.4.8. inventory-minimal

Sets some of the initial inventory pieces that could be useful for other tasks in the discover stages.

21.43.2.4.9. cluster-setup

DEPRECATED in v4.6

Initialize Cluster using Digital Rebar Patterns:

  1. will verify machine is in a shared profile
  2. will create shared profile if missing
  3. will add profile to machine if missing
  4. will make machine leader if no leader is present

21.43.2.4.10. cluster-sync

DEPRECATED in v4.6

21.43.2.4.11. cluster-step

DEPRECATED in v4.6

21.43.2.4.12. ansible-inventory

Collects ansible inventory data from ansible’s setup module.

Note

This will attempt to install ansible if it is not already installed.

21.43.2.4.13. bootstrap-advanced

DEPRECATED: Use universal-bootstrap with the bootstrap-contexts profile.

Bootstrap stage to build out an advanced setup.

This augments the bootstrap-base. It does NOT replace it.

Bootstrap Operations:

  • Install Docker & build Contexts if defined
  • Lock the Machine

21.43.2.5. tasks

The content package provides the following tasks.

21.43.2.5.1. stage-chooser

This task uses the stage-chooser/next-stage and stage-chooser/reboot parameters to change the stage of the current machine and possibly reboot.

This is not intended for use in a stage chain or workflow, but as a transient stage that can be used on a machine that is idle with a runner executing.

21.43.2.5.2. terraform-apply

Runs one or more Terraform Plan templates as defined by the terraform/plan-templates variable in the stage calling the task.

Requires a terraform context.

The terraform apply is only called once. All plans in the list are generated first. If sequential operations beyond the plan are needed, use multiple calls to this task.

Information can be chained together by having the plan output saved on the machine as Param.terraform-var/[output var].

Only the Name, UUID, and Address of the Machine are automatically passed into the plan; however, the plans can use the .Param and .ParamExists template calls to pull any value needed.

Terraform State is stored in the Param terraform/tfstate on the Machine after the first execution. It is then retrieved for all subsequent runs so that Terraform is able to correctly use its state values.

For developers: use the terraform/plan-automation parameter if you need a task that uses terraform behind the scenes. This parameter is automatically removed from machines (not profiles!) on destroy.

Notes:

  • having SSH keys requires using the rsa-key-create generator task.
  • if creating cloud machines, use the ansible-join-up task for join-up.

21.43.2.5.3. ansible-inventory

Install ansible, if needed, and record the setup module ansible variables onto the machine as a parameter named ansible-inventory.

21.43.2.5.4. cluster-setup

DEPRECATED in v4.6

Creates the Cluster for Digital Rebar

  1. will verify machine is in a shared profile
  2. will create shared profile if missing
  3. will add profile to machine if missing
  4. will make machine leader if no leader is present

21.43.2.5.5. elasticsearch-setup

A task to install and setup elasticsearch. This is a very simple single instance.

21.43.2.5.6. bootstrap-tools

If the Param bootstrap-tools contains an Array list of packages, then this task will install those packages on the DRP Endpoint, when used with one of the bootstrap workflows (e.g. universal-bootstrap).

By default, no packages are defined in the bootstrap-tools Param, so this task will no-op exit.

21.43.2.5.7. drive-signature

Generate a signature for each drive and record them on the machine.

21.43.2.5.8. network-lldp

Assumes Sledgehammer has the LLDP daemon installed so that we can collect data.

Also requires LLDP to be enabled on neighbors, including switches, so that they send the correct broadcast packets.

21.43.2.5.9. cluster-initialize

For the v4.6 Cluster Pattern

Creates the Cluster for Digital Rebar

  1. will create the shared profile if missing (named after the cluster manager machine)
  2. identifies the machine running this task as the cluster manager
  3. will add cluster machines to the shared profile based on cluster/filter
  4. will make machine leader(s) if no leader is present

Note: your cluster/filter must apply to >0 machines!

For operational guidelines, see Multi-Machine Cluster Pattern v4.6+

21.43.2.5.10. cluster-remove

DEPRECATED in v4.6

Removes the Machine Uuid from the cluster profile.

Typical cluster workflow is add -> step -> remove -> sync

Where step or sync are optional.

See cluster-add task for more details.

21.43.2.5.11. cluster-step

DEPRECATED IN v4.6

Waits until machine is in Nth (or earlier) position.

Relies on cluster-remove to advance the queue.

Typical cluster workflow is add -> step -> remove -> sync.

Where step or sync are optional. See cluster-add task for more details.

21.43.2.5.12. inventory-minimal

Sets the machine/type parameter for other tasks to use later.

21.43.2.5.13. storage-mount-devices

Uses array of devices from storage/mount-devices to attach storage to system.

If a storage area is needed, this task will mount the requested resources under /mnt/storage.

See Mount Attached Storage

21.43.2.5.14. ansible-join-up

Runs an embedded Ansible Playbook to run the DRP join-up process.

Requires an ansible context.

Expects to be in a Workflow that allows the joined machine to continue Discovery and configuration steps as needed.

Expects to have rsa-key-create run before stage is called and the public key MUST be on the target machine.

Idempotent - checks to see if service is installed and will not re-run join-up.

21.43.2.5.15. cluster-sync

DEPRECATED in v4.6

Waits until no machines remain. Use with cluster-remove.

Typical cluster workflow is add -> step -> remove -> sync

Where step or sync are optional.

See cluster-add task for more details.

21.43.2.5.16. inventory-check

Using the inventory/collect parameter, filters the JSON command output into the inventory/data hashmap for rendering and validation.

Will apply inventory/check for field validation.

21.43.2.5.17. network-firewall

Requires that firewall-cmd or ufw be enabled on the system.

Will reset the firewall at the end of the task.

21.43.2.5.18. drive-signature-verify

Using the signatures on the machine, validate each drive’s signature matches.

21.43.2.5.19. kibana-setup

A task to install and setup kibana. This is a very simple single instance.

21.43.2.5.20. rsa-key-create

Uses ssh-keygen to create an RSA public/private key pair.

Stores keys in rsa/key-public and rsa/key-private on the machine.

The private key (which contains newlines) is stored in a format where newlines are converted to |.

No-op if rsa/key-private already exists.

21.43.2.5.21. ansible-apply

Runs one or more Ansible Playbook templates as defined by the ansible/playbook-templates variable in the stage calling the task.

Requires an ansible context.

Expects to have rsa-key-create run before stage is called so that rsa/* params exist.

Information can be chained together by having the playbook write [Machine.Uuid].json as a file. This will be saved on the machine as Param.ansible/output. The entire Machine JSON is passed into the playbook as the digitalrebar variable so it is available.

21.43.2.5.22. bootstrap-contexts

Multi-Step Docker-Context Bootstrap.

This task is idempotent.

IMPORTANT NOTES:

  • Stage installs Podman and the Docker-Context Plugin even if you do not have Docker-Contexts defined
  • Contexts are a licensed feature (the stage will abort if no license exists)

  1. Installs Podman/Docker on the DRP Host
  2. Installs the Docker-Context from the Catalog
  3. Creates the runner context if missing
  4. Attempts to build & upload images for all Docker-Contexts
  5. Initializes Podman/Docker for all Contexts

Podman/Docker build files are discovered in the following precedence order:

  1. using a Container Buildfile from files/contexts/docker-build/[contextname]
  2. using the URL from Meta.Imagepull
  3. using a Container Buildfile downloaded from the Context Meta.Dockerfile
  4. using a Docker Image from the Context Meta.Dockerpull

Once downloaded and built, they are uploaded to the correct files location.

21.43.2.5.23. bootstrap-ipmi

This task bootstraps the DRP Endpoint to be functional for Baseboard Management Control capabilities by installing the ipmi plugin provider and the ipmitool package on the DRP Endpoint operating system.

This Task utilizes the external template bootstrap-tools.sh.tmpl, which must be configured with a list of packages. In this case, the bootstrap-tools array Param must be set to include the ipmitool package.

Generally, the bootstrap process is controlled by a bootstrapping workflow (e.g. universal-bootstrap) which uses a Profile to expand the bootstrap workflow. This profile should contain the Param value setting. This is due to Tasks not carrying their own Param or Profile definitions.

21.43.2.5.24. context-set

This task allows stages to change the BaseContext for a machine as part of a larger workflow.

This is especially helpful when creating a new machine using an endpoint context, such as using Terraform to create a machine and then allowing the machine to transition to a runner when it starts.

The code is written to allow either clearing the BaseContext (set to "") or setting it to a known value. If setting to a value, the code ensures that the Context exists.

21.43.2.5.25. ansible-playbooks-local

A task to invoke a specific set of ansible playbooks pulled from git.

Sequence of operations (loops over all entries):

  1. collect args if provided
  2. git clone repo to name
  3. git checkout commit if provided
  4. cd to name & path
  5. run ansible-playbook playbook and args if provided
  6. remove the directories

Note

Requires Param ansible/playbooks - List of playbook git repos to run on the local machine.

21.43.2.5.26. dr-server-install

Installs the DRP Server using the Ansible Context. Sets the DRP-ID to Machine.Name.

NOTE: All connections are FROM THE MANAGER; the provisioned site does NOT need to connect to the manager.

LIMITATIONS:

  • firewall features are only available for the CentOS family

The primary use cases for this task are

  1. Creating a remote site for Multi-Site-Manager
  2. Building development installations for rapid testing

Requires the install.sh and zip artifacts to be available as bootstrap/* files.

Will transfer the DRP license to the machine being created.

If DRP is already installed, will restart DRP and update license

For operators, this feature makes it easy to create new edge sites using DRP Manager.

21.43.2.5.27. cluster-add

DEPRECATED in v4.6

Injects the Machine Uuid into the cluster/machines param. This is the first step in the cluster workflow. Typical cluster workflow is add -> step -> remove -> sync, where step or sync are optional.

The cluster-* stages provide basic queue management for machines with a shared profile. That profile must have a cluster/profile field that names the profile. Once the queue is running, machines are tracked in the cluster/machines parameter.

  • cluster-add enqueues the machine
  • cluster-step services the queue (takes the next cluster/step in line)
  • cluster-remove dequeues the machine (should be done AFTER servicing)
  • cluster-sync forces all machines to wait until the queue is empty

If you are doing a rolling upgrade, use the following sequence:

  • cluster-add -> cluster-step -> [do work] -> cluster-remove

If you are coalescing work over multiple machines, use the following sequence:

  • cluster-add -> [do work] -> cluster-remove -> cluster-sync

The cluster-add process will automatically elect cluster/leader-count leaders during the enqueue step. These cluster/leaders will be maintained after the run is over and require manual editing to remove. It is possible to pre-create the leader list.

It is possible to do multiple add/remove steps in a single workflow

If your cluster is NOT behaving, you can rescue or escape from the cluster stages by setting cluster/escape on the profile. This will interrupt a cluster in process. Using escape=0 will allow the cluster stages to exit normally and no-op, while using escape=1 will exit with an error and the stages will stop progressing. Escape=1 is safer since downstream actions are blocked.

21.43.2.5.28. workflow-pause

This task will set the machine to NOT runnable, but leave the runner running. This will act as a pause in the workflow for other systems to restart.