21.36. proxmox - Proxmox Install and Configure

The following documentation is for Proxmox Install and Configure (proxmox) content package at version v4.8.0-alpha00.25+g8e224a5bec085c80968ec27e0332c339a65d0659.

This content pack manages deployment of Proxmox nodes and configuration for RackN student training labs on top of the installed Proxmox hypervisor.

At some point in the future, this will be broken out into multiple content packs: one that manages just the Proxmox installation and configuration, and one that implements student training labs independently on top of installed Proxmox system(s).

This content pack utilizes the Debian 10 (Buster) bootenv as a baseline for the Proxmox installation. The Proxmox released “appliance ISO” is not network installable by default, and requires a fair amount of work to rip apart the ISO and rebuild it to make it network installable.

21.36.1. Installation of Proxmox

There are several workflows defined in this content pack, some of which overlap with each other (a known shortcoming). A “ground up” installation is usually performed with the proxmox-buster-install Workflow. Several of the workflows install Proxmox on top of an existing Debian system (success is not guaranteed, depending on the state of that system); others provide install and configure Stages.

Please see the Workflow Definitions section below for an overview of the current workflows provided in this content.

This workflow requires drp-community-content to install Debian 10 (Buster) on the target systems. Configuration of Proxmox is performed via RackN Workflow Stages. Please start by reviewing the provided workflows before attempting to rework them for your use case.

The goal of the workflow is to perform initial installation and setup of Proxmox with a basic Virtualized network configuration on top of Proxmox.

Some Debian 10 prerequisite packages require human input during installation. To automate this, we utilize the debconf-set-selections mechanism to preseed the answers for those packages (e.g. Samba and Postfix). The preseed debconf selections template is defined by the proxmox/debconf-selections-template Param, which by default is set to proxmox-debconf-selections-template.tmpl. To customize the selections, set the Param to point to your own template with your selections.

By default, the debconf selections answer the prompts for Postfix and Samba. If the Proxmox host needs these services configured to interact with external infrastructure (e.g. to send outbound mail), you must adjust the answers appropriately.
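A minimal sketch of the kind of answers such a template preseeds is shown below. The keys are illustrative debconf entries for Postfix and Samba, not the pack's actual template contents:

```shell
# Illustrative preseed answers; adjust values for your environment.
debconf-set-selections <<'EOF'
postfix postfix/main_mailer_type select Local only
postfix postfix/mailname string pve.example.com
samba-common samba-common/dhcp boolean false
EOF
```

With answers preseeded this way, the package installs proceed without interactive prompts.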

21.36.2. Debian OS Network Configuration

Currently, the RackN netwrangler Network Configuration tooling does not support network configuration for Debian / Proxmox systems. As a consequence, Tasks and tooling have been written to support building up the Base OS network configuration to support the topology requirements for given virtualized use cases.

The primary concern is to integrate the hypervisor's network configuration and IP addressing needs with the virtualized network topology and IP addressing requirements of the virtualized infrastructure built on top of Proxmox.

To address that, the Proxmox workflows support custom network topology configuration of the base hypervisor OS (Debian) with the use of the Flexiflow system to inject tasks to handle network reconfiguration. This is handled by setting an array of tasks in the proxmox/flexiflow-buster-install Param, which drives the flexiflow-buster-install Stage to inject tasks.

Several prebuilt network topology Tasks exist that may meet the operator needs. These are as follows:

  • network-add-nat-bridge
  • network-convert-interface-to-bridge
  • network-simple-bridge-with-addressing
  • network-simple-single-bridge-with-nat

More than one network reconfiguration task can be specified in the array to combine multiple tasks to generate the final desired configuration. Please review each Task to understand what is being changed to support network topology use cases.
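For example, the Param could be set from the command line roughly as follows (a sketch; $MACHINE_UUID is a placeholder for your target machine's identifier):

```shell
# Inject two network tasks: convert the boot interface to a bridge,
# then add a NAT bridge on top of it. The Param value is a JSON array.
drpcli machines set "$MACHINE_UUID" param proxmox/flexiflow-buster-install \
  to '["network-convert-interface-to-bridge","network-add-nat-bridge"]'
```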

Due to how network reconfigurations occur in Debian, some tasks may require forcing a reboot, or a detailed understanding of how to tear down existing configurations to reach a desired topology. For example, if the base OS uses bonded interfaces that need to be broken and reconfigured, correctly tearing down the bonds and building things back up requires a lot of work; in some cases it is safer and cleaner to reboot the system.

Custom templates can be written to effect specific network topology changes as required, if the provided tasks/templates are not sufficient for your use case. Please review the Task and associated Templates to understand how to customize via the Flexiflow task injection.
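For reference, the kind of stanza these tasks and templates ultimately produce looks roughly like this hand-written example (interface names and addresses are illustrative, not the templates' actual output):

```shell
# Example Debian bridge stanza: enslave the physical NIC eno1 to a
# bridge (vmbr0) that Virtual Machines can attach to.
cat > /etc/network/interfaces.d/vmbr0 <<'EOF'
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
EOF
```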

21.36.3. Virtual Machine IP Address Assignments

Ultimately, how the Virtual Machines on the Proxmox system obtain their IP addressing drives much of the preceding OS Network Configuration. The primary methods are:

  • Completely private IP addressing, isolated from external networks - requires hypervisor/OS SSH tunneling, VPNs, or custom NAT translations to make the VMs reachable from the outside
  • Virtual Machine Layer 3 IP networks, routed by external network devices, to the external IP of the Hypervisor
  • DHCP or Static IP assignments from the Hypervisors Layer 2 / Layer 3 network, bridging the hypervisor interface directly to the VMs networks

In some cases, the NAT translation network configuration tasks in these workflows can help with the first case (private networks).

If the VMs are addressable on the external network of the Hypervisor, then an external DRP Endpoint can provision the addressable VMs via standard PXE/TFTP boot and installation workflows.

If the VMs are not addressable directly on the external networks, a DRP endpoint may be installed on the Hypervisor OS to provision the VMs.

For more complete details of these differences, see the following DRP Deployment section.

21.36.4. DRP Deployment of Virtual Machines

There are use cases where Digital Rebar Platform (DRP) may be involved in building up Proxmox and the virtualized machine resources. This allows for building CI/CD test systems, developer test systems, or other uses that drive DRP deployment and testing.

RackN currently uses this system to build customer training lab scenarios.


The proxmox content pack supports an optional deployment of a RackN DRP Endpoint on the base Proxmox hypervisor to provision Virtual Machines.

See the Stage and Task named proxmox-drp-install for details. The Param proxmox/install-drp-on-hypervisor controls whether this optional install is performed.

21.36.5. Workflow Definitions

Here is a brief summary of the workflows provided in the Proxmox content pack.

  • proxmox-buster-install: Install Debian 10 (Buster) linux, install Proxmox, setup admin account, create storage configs, enable KVM nested mode, configure base hypervisor network
  • proxmox-install-and-lab-setup: Install Debian 10 (Buster) linux, install Proxmox, setup admin account, create storage configs, enable KVM nested mode, configure base hypervisor network, install RackN lab scenario on top of Hypervisor, install DRP on the Hypervisor if requested
  • proxmox-lab-centos-image: only deploy the CentOS image; typically to the student DRP Endpoint VMs
  • proxmox-lab-create: only create the RackN lab scenario, optionally install DRP endpoint on Hypervisor, and optionally install DRP in the student VMs
  • proxmox-lab-destroy: Destroy the RackN lab scenario, but leave the DRP endpoint in Hypervisor if it was deployed
  • proxmox-lab-drp-setup: only inject SSH keys, and start the DRP Agent (runner)
  • proxmox-only-install: no Debian install, only Proxmox install and configure

Note

These will change to standalone single-use workflows, which will be incorporated into Universal wrapped workflows. This allows Universal workflow chain maps to mix-n-match workflow combinations.

21.36.6. Profiles with Example Usage

Review the Profiles provided in the Proxmox content pack for example usage scenarios.

21.36.7. Future To Do Items

This is a list of future enhancements planned for the Proxmox content. Most of these items realign the Proxmox content to be better supported in Universal Workflow. Additionally, integration with the forthcoming (as of Sept 2021) WorkOrders system for VM management and Cluster configuration will be desirable.

Lastly, VM management may be driven by the cloud-wrappers patterns, extending from Public Cloud resource management to include Private Cloud control.

Current ToDo list:

  • separate the Proxmox and RackN Lab components into separate content packs
  • restructure the workflows as individual non-overlapping workflows as follows:
    • base OS install and customization (or move debconf selection handling to community content)
    • base OS network topology reconfiguration (preferably netwrangler should support this instead)
    • proxmox package installation
    • proxmox configuration and setup
    • generic VM create capability (this may move to new WorkOrders system)
    • generic VM destroy capability (this may move to new WorkOrders system)
    • RackN usage scenarios
      • lab create
      • lab destroy
  • move the newly restructured workflows to Universal wrapped workflows
  • implement Proxmox Cluster management between multiple hypervisors
  • enable more Storage configuration capabilities (e.g. shared Ceph storage, zfs, nfs)
  • move to Netwrangler network topology management for Hypervisor network config
  • possibly integrate cloud-wrappers to drive VM management on top of Proxmox hosts or clusters

21.36.8. Object Specific Documentation

21.36.8.1. workflows

The content package provides the following workflows.

21.36.8.1.1. proxmox-setup-lab

Sets up the bridge networks and creates virtual machine nodes on a proxmox host.

Creates the Proxmox Accounts.

Installs DRP on the Hypervisor if proxmox/install-drp-on-hypervisor is set to true. If DRP is not installed, it is assumed that the DRP VMs will be bootstrapped from an external DRP endpoint, or manually installed with an OS.

21.36.8.1.2. proxmox-buster-install

Installs Debian 10 (Buster) via standard RackN BootEnv install, using preseed/package based (Debian Installer, d-i) method.

Once install completes, while still inside Debian Installer, update the system, add the Proxmox repositories, provide a minimal preseed set of answers (for Samba and Postfix packages), and then do a Proxmox install of the latest stable version.

The special stage flexiflow-buster-install is added to this workflow. By setting the Param proxmox/flexiflow-buster-install on your target machine, the individually listed Tasks will be injected into the Workflow dynamically.

This is used to flexibly inject network config/reconfig Tasks to allow for dynamic use of the workflow. For example, setting the Param proxmox/flexiflow-buster-install as follows (in JSON):

["network-convert-interface-to-bridge"]

Will inject that named task to modify the network by converting the Boot interface to be enslaved by the Bridge for Virtual Machines.

Another example (again, in JSON format):

["network-convert-interface-to-bridge","network-add-nat-bridge"]

This will perform the primary boot interface conversion to be enslaved by the bridge, but also bring up a NAT Masquerade bridge to attach machines to.

21.36.8.1.3. proxmox-lab-create

Sets up the bridge networks and creates virtual machine nodes on a proxmox host.

Creates the Proxmox Accounts.

Installs DRP on the Hypervisor if proxmox/install-drp-on-hypervisor is set to true. If DRP is not installed, it is assumed that the DRP VMs will be bootstrapped from an external DRP endpoint, or manually installed with an OS.

21.36.8.1.4. proxmox-only-install

Starts the Proxmox install, assuming an existing/already built Debian 10 (Buster) system: update the system, add the Proxmox repositories, provide a minimal preseed set of answers (for the Samba and Postfix packages), and then install the latest stable Proxmox version.

The special stage flexiflow-buster-install is added to this workflow. By setting the Param proxmox/flexiflow-buster-install on your target machine, the individually listed Tasks will be injected into the Workflow dynamically.

This is used to flexibly inject network config/reconfig Tasks to allow for dynamic use of the workflow. For example, setting the Param proxmox/flexiflow-buster-install as follows (in JSON):

["network-convert-interface-to-bridge"]

Will inject that named task to modify the network by converting the Boot interface to be enslaved by the Bridge for Virtual Machines.

Another example (again, in JSON format):

["network-convert-interface-to-bridge","network-add-nat-bridge"]

This will perform the primary boot interface conversion to be enslaved by the bridge, but also bring up a NAT Masquerade bridge to attach machines to.

21.36.8.1.5. proxmox-install-and-lab-setup

Installs Debian 10 (Buster) via standard RackN BootEnv install, using preseed/package based (Debian Installer, d-i) method.

Once install completes, while still inside Debian Installer, update the system, add the Proxmox repositories, provide a minimal preseed set of answers (for Samba and Postfix packages), and then do a Proxmox install of the latest stable version.

The special stage flexiflow-buster-install is added to this workflow. By setting the Param proxmox/flexiflow-buster-install on your target machine, the individually listed Tasks will be injected into the Workflow dynamically.

This is used to flexibly inject network config/reconfig Tasks to allow for dynamic use of the workflow. For example, setting the Param proxmox/flexiflow-buster-install as follows (in JSON):

["network-convert-interface-to-bridge"]

Will inject that named task to modify the network by converting the Boot interface to be enslaved by the Bridge for Virtual Machines.

Another example (again, in JSON format):

["network-convert-interface-to-bridge","network-add-nat-bridge"]

This will perform the primary boot interface conversion to be enslaved by the bridge, but also bring up a NAT Masquerade bridge to attach machines to.

After the base install is completed, the Lab setup will be performed. This includes (optionally) installing DRP on the Hypervisor, setting up the Lab target machine hypervisor bridges and network, creating Lab accounts, and finally, creating the Virtual Machines within Proxmox.

21.36.8.1.6. proxmox-install-and-setup

Installs Debian 10 (Buster) via standard RackN BootEnv install, using preseed/package based (Debian Installer, d-i) method.

Once install completes, while still inside Debian Installer, update the system, add the Proxmox repositories, provide a minimal preseed set of answers (for Samba and Postfix packages), and then do a Proxmox install of the latest stable version.

The special stage flexiflow-buster-install is added to this workflow. By setting the Param proxmox/flexiflow-buster-install on your target machine, the individually listed Tasks will be injected into the Workflow dynamically.

This is used to flexibly inject network config/reconfig Tasks to allow for dynamic use of the workflow. For example, setting the Param proxmox/flexiflow-buster-install as follows (in JSON):

["network-convert-interface-to-bridge"]

Will inject that named task to modify the network by converting the Boot interface to be enslaved by the Bridge for Virtual Machines.

Another example (again, in JSON format):

["network-convert-interface-to-bridge","network-add-nat-bridge"]

This will perform the primary boot interface conversion to be enslaved by the bridge, but also bring up a NAT Masquerade bridge to attach machines to.

After the base install is completed, the Lab setup will be performed. This includes (optionally) installing DRP on the Hypervisor, setting up the Lab target machine hypervisor bridges and network, creating Lab accounts, and finally, creating the Virtual Machines within Proxmox.

21.36.8.1.7. proxmox-lab-destroy

Destroys the built lab of Virtual Machines on the Proxmox hypervisor host.

21.36.8.2. bootenvs

The content package provides the following bootenvs.

21.36.8.2.1. proxmox-6-install

This BootEnv installs the Proxmox 6 system.

21.36.8.2.2. proxmox-6-rackn-install

This BootEnv installs the Proxmox system. This is a rebuilt image from the stock ISO to support PXE installation process, as the community released ISO does not support PXE by default.

21.36.8.3. params

The content package provides the following params.

21.36.8.3.1. network-convert-interface-to-bridge-template

The name of the template utilized for the Convert Interface to Bridge (network-convert-interface-to-bridge) network configuration.

The default is network-convert-interface-to-bridge.cfg.tmpl

This will be written to /etc/network/interfaces.d/$BRIDGE where BRIDGE is defined by the Param proxmox/lab-drp-external-bridge.

21.36.8.3.2. proxmox/storage-device

DEPRECATED: Use proxmox/storage-config instead.

This param is used to define the disk that the base storage volume will be created on. It defaults to /dev/sdb if not otherwise defined.

21.36.8.3.3. proxmox/lab-base-tag

The base tag that is assigned to various resources used in the content pack when configuring the student lab. For example, network bridge devices.

This is also used when tearing things down.

Note

Do not add a trailing dash; one will be inserted between the prefix and the numerical designator for the resource.

The default value is student, which will produce bridge devices like br-student-1.
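The naming scheme described above can be sketched in pure shell (variable names here are hypothetical illustrations, not the task's actual variables):

```shell
tag="student"               # proxmox/lab-base-tag value (no trailing dash)
num=1                       # numerical designator assigned per resource
bridge="br-${tag}-${num}"   # dash inserted between prefix and number
echo "$bridge"              # br-student-1
```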

21.36.8.3.4. proxmox/lab-drp-internal-subnet

The default is 192.168.2.0/24.

21.36.8.3.5. proxmox/lab-drp-sshkey-private

This param is used to define the lab DRP systems' SSH private key.

21.36.8.3.6. proxmox/vm-machine-nic

Must select one of the Proxmox supported NIC models from the list. The default is e1000. If you are running ESXi on top of Proxmox, you may need to change this (e.g. to vmxnet3, especially for ESXi 7.x).

Additional documentation and details can be found on the Proxmox Wiki, at:

21.36.8.3.7. proxmox/debconf-selections-template

Defines the template to use during installation for the debconf-set-selections process. To customize, create a new template with the correctly formatted debconf-set-selections values, and set this Param to the name of your custom template.

By default the template named proxmox-debconf-set-selections.tmpl will be used.

21.36.8.3.8. proxmox/lab-drp-external-subnet

This subnet MUST BE routable inside your organization, to reach the DRP Endpoints allocated on the Hypervisor. The subnet will be added on the Hypervisor, and each DRP endpoint will be provisioned with an IP address from this network.

If this method is used, you generally will have to either SSH forward to the Proxmox Hypervisor, install a VPN service of some sort on the Hypervisor, or arrange for your external networking devices (routers/switches) to route this IP block to the addressable interface of the Proxmox Hypervisor.

The default is 192.168.1.0/24.

If you wish to assign IP addresses to your VMs via a bridged interface on the Proxmox Hypervisor, DO NOT use this method; instead, use the network configuration task named network-simple-bridge-with-addressing.

The subnet must be in CIDR notation (e.g. 1.2.3.0/24), with the network address set in the CIDR (e.g. the “.0” part). The Hypervisor will be assigned the first IP address in the network, which is used as the default route for the DRP Endpoint Virtual Machines.
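As a sketch of that assignment rule, the Hypervisor's address can be derived from the CIDR's network address (a pure-shell illustration with hypothetical variable names; this simple string math only handles networks whose host octet is 0):

```shell
subnet="192.168.1.0/24"        # proxmox/lab-drp-external-subnet default
base="${subnet%/*}"            # network address: 192.168.1.0
# First IP in the network: increment the final octet by one.
hypervisor_ip="${base%.*}.$(( ${base##*.} + 1 ))"
echo "$hypervisor_ip"          # 192.168.1.1 (default route for the DRP VMs)
```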

21.36.8.3.9. proxmox/vm-machine-os-type

Must select one of the Proxmox supported OS Type models from the list. The default is l26 (Linux 2.6 or newer kernel).

Additional documentation and details can be found on the Proxmox Wiki, at (search for ‘ostype: <l24’ to find them):

The list of supported OS Types is as follows:

  • other = unspecified OS
  • wxp = Microsoft Windows XP
  • w2k = Microsoft Windows 2000
  • w2k3 = Microsoft Windows 2003
  • w2k8 = Microsoft Windows 2008
  • wvista = Microsoft Windows Vista
  • win7 = Microsoft Windows 7
  • win8 = Microsoft Windows 8/2012/2012r2
  • win10 = Microsoft Windows 10/2016
  • l24 = Linux 2.4 Kernel
  • l26 = Linux 2.6 - 5.X Kernel
  • solaris = Solaris/OpenSolaris/OpenIndiana kernel

21.36.8.3.10. network-simple-bridge-with-addressing-template

The name of the template to utilize to configure the Network Simple Bridge with Addressing (network-simple-bridge-with-addressing) network configuration.

The default is network-simple-bridge-with-addressing.cfg.tmpl

This will be written to /etc/network/interfaces.d/$BRIDGE where BRIDGE is defined by the Param proxmox/lab-drp-external-bridge.

21.36.8.3.11. network-simple-single-bridge-with-nat

The name of the template to utilize to configure the single flat bridge setup, with outbound NAT translation for the VMs.

The default is network-simple-single-bridge-with-nat

This will be written to /etc/network/interfaces.d/$BRIDGE where BRIDGE is defined by the Param proxmox/lab-drp-external-bridge.

21.36.8.3.12. proxmox/lab-student-count

This param is used to define the number of students to add to each Proxmox host.

21.36.8.3.13. proxmox/storage-name

DEPRECATED: Use proxmox/storage-config instead.

This param is used to define the Thin Pool and LVM Logical Volume name that will be created on the PVE node.

It defaults to local-lvm, which is used when creating VMs. Ensure these values match.

21.36.8.3.14. proxmox/strip-kernel-packages

The default package list to remove from the final installed system. The Proxmox install guide optionally suggests removing the stock kernel packages. By default, this installer workflow does NOT strip these packages. To strip them, set proxmox/strip-kernel to true, and ensure this Param has the correct set of values for your installation.

The default value is linux-image-amd64 linux-image-4.19*.

Note

If a wildcard pattern is used, you must single quote it to protect it from the shell interpreting it as a glob. See the default value setting for this param as a valid example.
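A quick demonstration of why the quoting matters: in a directory that happens to contain matching files, an unquoted pattern is expanded by the shell before the package manager ever sees it (the filenames below are hypothetical):

```shell
# Create a scratch directory with files matching the pattern.
demo=$(mktemp -d)
cd "$demo"
touch linux-image-4.19.0-17-amd64 linux-image-4.19.0-18-amd64

echo linux-image-4.19*      # unquoted: shell expands against local files
echo 'linux-image-4.19*'    # quoted: the literal string is passed through
```

The first echo prints the two local filenames; the second prints the literal `linux-image-4.19*`, which is what the package tooling should receive.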

21.36.8.3.15. proxmox/vm-drp-os-type

Must select one of the Proxmox supported OS Type models from the list. The default is l26 (Linux 2.6 or newer kernel).

Additional documentation and details can be found on the Proxmox Wiki, at (search for ‘ostype: <l24’ to find them):

The list of supported OS Types is as follows:

  • other = unspecified OS
  • wxp = Microsoft Windows XP
  • w2k = Microsoft Windows 2000
  • w2k3 = Microsoft Windows 2003
  • w2k8 = Microsoft Windows 2008
  • wvista = Microsoft Windows Vista
  • win7 = Microsoft Windows 7
  • win8 = Microsoft Windows 8/2012/2012r2
  • win10 = Microsoft Windows 10/2016
  • l24 = Linux 2.4 Kernel
  • l26 = Linux 2.6 - 5.X Kernel
  • solaris = Solaris/OpenSolaris/OpenIndiana kernel

21.36.8.3.16. proxmox/lab-drp-boot-order

This param is used to define the boot order configuration for type drp virtual machines.

Boot order has changed between Proxmox 6.x and 7.x. In 6.x, the simplified string “ncd” defined network, cdrom, first disk as the boot order. Proxmox 7.x requires a more complex string, which also contains semicolons, which require special handling (quoting) in scripts to protect them.

Examples:

  • Proxmox 6.x: ncd
  • Proxmox 7.x: order=net0;scsi0

Additionally, more complex configurations are possible (e.g. specifying boot wait timeouts, etc.).

The default value is now set to:

  • order=net0;scsi0
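Because the 7.x string contains a semicolon, it must be quoted whenever it is handled in shell scripts; unquoted, the shell would treat the semicolon as a command separator. A quick illustration:

```shell
# Single quotes keep the semicolon literal instead of splitting commands.
boot_order='order=net0;scsi0'
echo "$boot_order"    # order=net0;scsi0
```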

21.36.8.3.17. proxmox/lab-license-instructor-drp

The RackN license to use on the instructor DRP Endpoint, if installed. Installation of the Endpoint is controlled by the param proxmox/install-drp-on-hypervisor; this license will be installed on that system if the DRP Endpoint is installed.

21.36.8.3.18. proxmox/storage-config

This param is used to define the configuration for the various backend storage on a Proxmox host.

The Param is an object, keyed first by the type of storage (based on the setup configurations supported in Tasks), followed by another key specific to the given instance configuration. This allows for support of multiple objects of the same type.

lvmthin:
  local-lvm:
    device: /dev/sdb
    vgname: pve
    thinpool: data
    content: rootdir,images
    size: 95%FREE
    maxfiles: "7"
dir:
  local-images:
    path: /var/lib/images
    content: images,iso
  local:
    path: /var/lib/vz
    content: iso,vztmpl,backup
    format: qcow2,vmdk
  backup:
    path: /mnt/backup
    content: backup
    prune-backups: keep-all=0,keep-daily=7,keep-hourly=24

Warning

You may not specify a dir type with name of local, this is a reserved Dir type of storage. Doing so will cause a fatal error in the Workflow and stop workflow processing.

The following types have not yet been implemented:

  • lvm, zfspool, btrfs, nfs, cifs, pbs, glusterfs, cephfs, iscsi, iscsidirect, rbd, zfs

For Storage configuration details, see the Proxmox documentation at:

Note

This object type does not yet contain a Schema for validation of the configuration. Field values in each segment map directly to Proxmox Storage configuration directives; use the documentation for guidelines, and follow the example defined above. However, some values may be helpers specific to a task (e.g. device: /dev/sdb directs the task to create the LVM Thinpool on the specified backing device).

A future implementation example for ZFS Pool configuration might look like:

zfspool:
  local-zfs:
    pool: rpool/data
    sparse: true
    content: images,rootdir

If no values are specified (the default), then the product default of LVM-Thin type storage will be set up based on the proxmox/storage-device and proxmox/storage-name Param settings.

21.36.8.3.19. proxmox/lab-drp-memory

This param is used to define the memory configuration for type drp virtual machines.

21.36.8.3.20. proxmox/lab-machines-boot-order

This param is used to define the boot order configuration for type machines virtual machines.

Boot order has changed between Proxmox 6.x and 7.x. In 6.x, the simplified string “ncd” defined network, cdrom, first disk as the boot order. Proxmox 7.x requires a more complex string, which also contains semicolons, which require special handling (quoting) in scripts to protect them.

Examples:

  • Proxmox 6.x: ncd
  • Proxmox 7.x: order=net0;scsi0

Additionally, more complex configurations are possible (e.g. specifying boot wait timeouts, etc.).

The default value is now set to:

  • order=net0;scsi0

21.36.8.3.21. proxmox/drp-wait-timeout

Changes the timeout wait for all DRP VMs to be created. Some particularly slow hardware may make this process longer than expected. The default value is 600 seconds.

21.36.8.3.22. proxmox/lab-drp-external-interface

The DRP Endpoint virtual machine external interface that should be publicly accessible to the operators. This interface should ultimately obtain an IP address that is routable for the DRP operators.

21.36.8.3.23. proxmox/lab-drp-internal-interface

The virtual machine internal interface that target machines will be connected to. The DRP Endpoint will attach a private interface to this interface as well.

21.36.8.3.24. proxmox/strip-kernel

Setting this Param value to true will cause the installer to remove the packages specified by the proxmox/strip-kernel-packages param. This is an optional step and not required for Proxmox installation.

The default value is false (do NOT strip the kernel packages off of the system).

21.36.8.3.25. proxmox/vm-drp-storage-name

This param is used to define the Storage name that will be used to back the Lab DRP Virtual machines.

It defaults to local, which is automatically created as the default Storage location on a Proxmox system. This backs the Virtual Machines volumes in the filesystem of the local Proxmox node.

This Param uses the .ParamExpand method, which means that the operator can specify Golang templating constructs, which will be rendered uniquely based on the Machine context the task is running in. This allows for Storage types that are uniquely defined based on the Machine information (eg name, etc).

21.36.8.3.26. proxmox/install-drp-on-hypervisor

Depending on the network configuration used on the Hypervisors, the DRP Endpoint VMs may or may not need to be provisioned from the Hypervisor.

In the event that the DRP Virtual Machines do not obtain DHCP and PXE service from outside the Hypervisor, the operator will have to arrange to install an OS on the DRP VMs. The main workflows include a DRP Install on the Hypervisor task for this purpose.

If this Param is set to true (NOT the default), then DRP will be installed in a very opinionated configuration.

21.36.8.3.27. proxmox/lab-nat-bridge

This param is used to define the name of the Bridge that will be created for attaching Virtual Machines that should be NATed (Masqueraded). It will be attached to the primary bridge defined by the proxmox/lab-drp-external-bridge Param.

NAT Masquerading will be set up for proxmox/lab-nat-subnet. No DHCP services are set up automatically; either statically assign IP addresses from that range, or enable a DRP Subnet for that range on the proxmox/lab-nat-bridge interface.

The default is vmnat0.
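Under the hood, a NAT (Masquerade) bridge of this sort boils down to IP forwarding plus a masquerade rule. A hand-written sketch, not the task's actual output (the subnet matches the proxmox/lab-nat-subnet default; the outbound interface name is illustrative):

```shell
# Enable IPv4 forwarding on the hypervisor.
sysctl -w net.ipv4.ip_forward=1

# Masquerade traffic from the NAT subnet out the external-facing bridge.
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o vmbr0 -j MASQUERADE
```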

21.36.8.3.28. proxmox/vm-drp-storage

Must select one of the Proxmox supported Storage models from the list. The default is SCSI megasas.

Additional documentation and details can be found on the Proxmox Wiki, at:

There are 3 types of controllers: ide, sata, and scsi. IDE and SATA do not have any additional configuration options; anything else listed is a SCSI controller.

21.36.8.3.29. proxmox/vm-machine-storage-name

This param is used to define the Storage name that will be used to back the Lab target Virtual machines.

It defaults to local, which is automatically created as the default Storage location on a Proxmox system. This backs the Virtual Machines volumes in the filesystem of the local Proxmox node.

This Param uses the .ParamExpand method, which means that the operator can specify Golang templating constructs, which will be rendered uniquely based on the Machine context the task is running in. This allows for Storage types that are uniquely defined based on the Machine information (eg name, etc).

21.36.8.3.30. proxmox/drp-timeout-kill-switch

This is an emergency kill switch. If this parameter is set to true while the machine is in the proxmox-drp-provision-drp task, inside the timeout wait loop of the shell execution, the loop will evaluate this Param and exit with an error code.

The task will attempt to remove this param from the machine prior to exiting with an error message.

21.36.8.3.31. proxmox/lab-drp-external-bridge

This param is used to define the external bridge configuration for type drp virtual machines.

21.36.8.3.32. proxmox/package-selections

This parameter defines the Package selection list to install initially. This list should contain at least proxmox-ve and any necessary supporting packages.

If the operator overrides the Default values specified in this Param, all packages must be specified in the updated Param values.

The list is a space-separated string that must contain valid Debian package names. These packages must be available in the default repos unless additional apt repos have been set up and initialized prior to this task run.

Note

The default workflows assume the postfix and samba packages are installed (as specified by Proxmox requirements). There are special tasks for staging debconf-set-selections answers to automate installation of these packages successfully. If additional packages requiring input are added, the operator must implement a set of debconf-set-selections answers appropriate to those packages.

This Param defaults to:

  • proxmox-ve postfix openvswitch-switch open-iscsi vim wget curl jq ifupdown2 lldpd

If the operator sets any values for this Param, the packages listed above MUST ALSO BE INCLUDED, AS THEY ARE REQUIRED.

This should likely be adjusted in the future so that the required packages cannot be overridden.
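
As a sketch, an override that keeps all of the required default packages and appends one extra (tcpdump here is purely illustrative):

```yaml
# Space separated string; the required default packages must be repeated.
proxmox/package-selections: "proxmox-ve postfix openvswitch-switch open-iscsi vim wget curl jq ifupdown2 lldpd tcpdump"
```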

21.36.8.3.33. proxmox/lab-drp-external-dns

The DNS servers to be assigned to the DRP Endpoints on the Hypervisor.

Defaults to 1.1.1.1,1.0.0.1. Comma separated list, no spaces.
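
For example, to use Google public DNS instead (illustrative values; note the comma separated, no spaces format):

```yaml
proxmox/lab-drp-external-dns: "8.8.8.8,8.8.4.4"
```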

21.36.8.3.34. proxmox/lab-machines-cores

This param defines the number of CPU cores for virtual machines of type machines.

21.36.8.3.35. proxmox/iso

The URL where the ISO of the Proxmox install can be found. This ISO will be modified to stage itself as /proxmox.iso to enable network install of Proxmox. By default the ISO is not capable of installing via an HTTP network path.

21.36.8.3.36. proxmox/lab-drp-vms-provision

This param will enable or disable the DRP VMs OS provisioning process in the proxmox-drp-provision-drp task. By default, the DRP VMs OS will be provisioned, after they are auto-discovered. Setting this to false will disable the OS provisioning step.

21.36.8.3.37. proxmox/lab-machines-disk

This param defines the disk size for virtual machines of type machines.

21.36.8.3.38. proxmox/lab-machines-memory

This param defines the memory allocation for virtual machines of type machines.

21.36.8.3.39. proxmox/lab-nat-subnet

The IP Subnet to NAT Masquerade for on proxmox/lab-nat-bridge (defaults to vmnat0). There are no DHCP services set up automatically. Either statically assign IP addresses from that range, or enable a DRP Subnet for that range on the proxmox/lab-nat-bridge interface.

The default is 192.168.1.0/24.
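
A sketch of overriding the default range (the subnet value is illustrative):

```yaml
proxmox/lab-nat-subnet: "10.100.0.0/24"
```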

21.36.8.3.40. proxmox/lab-reinstall-drp-on-hypervisor

Setting this Param value to true will cause the task proxmox-drp-install to remove an existing DRP Endpoint (dr-provision) service from the system, then install a fresh version of the Endpoint.

The default value is false (leave the existing dr-provision service in place and exit).

21.36.8.3.41. proxmox/flexiflow-create-storage

This Param controls setting up the Storage on a Proxmox host. It should be a list of Tasks responsible for setting up the specific supported types of storage on the Proxmox node. The Flexiflow system will inject each task into the running workflow to implement the changes.

The Storage types will be implemented in order as specified in the list.

Documentation on the Proxmox storage types can be found at:

Each supported storage type is defined as a Task, which implements that particular Storage system configuration for Proxmox. See the above page for supported types.

Storage types implementation status:
  • lvmthin: supported (the default method)
  • dir: supported
  • lvm: not implemented
  • zfspool: not implemented
  • btrfs: not implemented
  • nfs: not implemented
  • cifs: not implemented
  • pbs: not implemented
  • glusterfs: not implemented
  • cephfs: not implemented
  • iscsi: not implemented
  • iscsidirect: not implemented
  • rbd: not implemented
  • zfs: not implemented

See each specific task for configuration values/settings to configure each type.

21.36.8.3.42. proxmox/lab-drp-disk

This param defines the disk size for virtual machines of type drp.

21.36.8.3.43. proxmox/lab-drp-external-domainname

The DNS Domain Name to be used in the Subnet specification for the DRP Endpoints' externally accessible network interface.

Defaults to pve-lab.local

21.36.8.3.44. proxmox/lab-drp-sshkey-public

This param defines the SSH public key half that should be installed on the DRP systems for student access.

21.36.8.3.45. proxmox/lab-pvesh-extra-config-drp

Allows an operator to inject extra configuration directives into the pvesh command that builds the DRP virtual machine.

21.36.8.3.46. proxmox/lab-student-vms

This param is used to define the number of student vms to add to each Proxmox host.

21.36.8.3.47. network-add-nat-bridge-template

The name of the template used to configure the NAT Add Bridge with Addressing (network-add-nat-bridge) network configuration.

The default is network-add-nat-bridge.cfg.tmpl

This will be written to /etc/network/interfaces.d/$BRIDGE where BRIDGE is defined by the Param proxmox/lab-nat-bridge.

21.36.8.3.48. proxmox/data-profile

This parameter defines the Profile name for the profile that will carry dynamic data generated through the install process. For example, the generated SSH key halves will be saved to this profile.

Warning

It is critical that this is set to a unique value if you are maintaining multiple separate Proxmox deployments.
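
A sketch of setting a deployment-unique value (the profile name is illustrative):

```yaml
proxmox/data-profile: "proxmox-lab-site-a"
```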

21.36.8.3.49. proxmox/storage-skip-if-exists

Setting this Param value to true will cause the Storage tasks to exit with an error condition, stopping the workflow, if certain conditions arise.

For example, if creating a dir type storage and that storage already exists, this Param set to true will cause the Workflow to stop with a fatal error. If the operator instead desires to continue on, skipping the attempt to create the storage type, set the value to false.

NOTE that the assumption in the above example is that the storage provider is fully configured correctly. No attempt will be made to apply additional configuration settings.

The default value is true (do exit the Workflow on errors).

21.36.8.3.50. proxmox/vm-drp-nic

Must select one of the Proxmox supported NIC models from the list. The default is e1000. If you are running ESXi on top of Proxmox, you may need to change this (eg to vmxnet3 - especially for ESXi 7.x).
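
For example, when running ESXi guests on top of Proxmox (vmxnet3 is one of the NIC models Proxmox supports):

```yaml
proxmox/vm-drp-nic: vmxnet3
```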

Additional documentation and details can be found on the Proxmox Wiki, at:

21.36.8.3.51. proxmox/vm-machine-storage

Must select one of the Proxmox supported Storage models from the list. The default is megasas.

Additional documentation and details can be found on the Proxmox Wiki, at:

There are 3 types of controllers - IDE, SATA, and SCSI. IDE and SATA do not have any additional configuration options. Anything else listed is a SCSI controller.
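
A sketch of selecting a SCSI controller model (virtio-scsi-pci is one of the SCSI controller models Proxmox supports; verify it appears in this Param's allowed list before using it):

```yaml
proxmox/vm-machine-storage: virtio-scsi-pci
```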

21.36.8.3.52. proxmox/lab-drp-cores

This param defines the number of CPU cores for virtual machines of type drp.

21.36.8.3.53. proxmox/flexiflow-buster-install

This Param contains an Array of Strings that will define which tasks to dynamically add to the flexiflow-buster-install workflow on first boot.

This is generally used to specify the network setup stages in the base Hypervisor, before creating any target DRP or Machine VMs. For example, the following tasks set network configuration up:

  • network-simple-bridge-with-external-addressing

This creates a simple bridge, with an assigned IP address block to allocate to the "external" interfaces of the DRP Endpoint Virtual Machines. IP addressing for the DRP Endpoints must be provided by the external network (external to the Hypervisor), either via DHCP or static assignment. The DRP Endpoints are essentially bridged to the Hypervisor's physical external network.

Another example:

  • network-convert-interface-to-bridge

The above migrates the IP Address on the base interface of the Proxmox Hypervisor to a bridge (identified by the Param proxmox/lab-network-external-interface); the DRP Endpoint VMs' external interfaces are then attached to this bridge.

Another example:

  • network-simple-single-bridge-with-nat

The above assumes that (typically) vmbr0 carries the Hypervisor's primary IP address, and that Machines will be directly attached to this bridge. The machines will use a secondary network space (defined by proxmox/network-external-subnet), but will be set up to NAT to the bridge's IP address for outbound internet connectivity.

No inbound NAT mappings are set up in this mode. If inbound IP connectivity to the VMs is required, then external routers need to route the proxmox/network-external-subnet to the Hypervisor's IP, or additional inbound NAT mappings need to be arranged.

  • network-add-nat-bridge

The above creates an additional bridge to abstract the connection from the Hypervisor's main NIC and Bridge, connecting the DRP Endpoints to this bridge. NAT Masquerading or similar constructs must be used to provide outbound network connectivity to the DRP Endpoints.

Warning

The network-add-nat-bridge NAT Masquerading mechanisms do not currently appear to work reliably. This method requires additional testing and development.
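
Putting this together, the Param is an Array of Strings naming the tasks to inject; a minimal sketch using one of the network setup tasks above:

```yaml
proxmox/flexiflow-buster-install:
  - network-convert-interface-to-bridge
```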

21.36.8.3.54. proxmox/lab-drp-install-packages

A space separated list of packages to install on the remote DRP endpoint.

21.36.8.3.55. proxmox/lab-pvesh-extra-config-machines

Allows an operator to inject extra configuration directives into the pvesh command that builds the target virtual machines.
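
A sketch of injecting extra directives (the flags shown are illustrative; they must be valid options for the pvesh VM create call):

```yaml
proxmox/lab-pvesh-extra-config-machines: "--onboot 1 --ostype l26"
```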

21.36.8.4. profiles

The content package provides the following profiles.

21.36.8.4.1. proxmox-EXAMPLE-flat-profile

This is an EXAMPLE PROFILE for configuration of a Proxmox host. It defines a "flat topology" for the VMs, which will be named with drp in the VM name. No VMs will be built behind the DRP VMs. To use this as a starting point, clone the Profile and adjust the Param values to suit your needs. It is important that you understand each of the Param configuration values defined in this profile. For the DRP Endpoint service to successfully discover the newly provisioned VMs, it may be necessary to have a RackN license defined and applied at install time. To do so, set the following Param:

  • proxmox/lab-license-instructor-drp to contain the JSON license content pack (typically named rackn-license)

21.36.8.4.2. proxmox-EXAMPLE-lab-profile

This profile provides examples of the various Params that can be set to configure the Lab environment. These are the (generally) default configuration values as defined by each Param.

Clone this Profile, and set appropriate values for your environment, with your customizations.

The default lab install/setup workflow (proxmox-buster-install) utilizes the Flexiflow Stage that allows it to be dynamically customized, based on the values of the flexiflow/list-parameter Param. Adding one or more existing tasks to this Param will inject those tasks to be run during that stage.

21.36.8.4.3. proxmox-EXAMPLE-pkt-profile

This profile provides examples, for a PKT environment, of the various Params that can be set to configure the Lab environment. These are (generally) the default configuration values as defined by each Param.

Clone this Profile, and set appropriate values for your environment, with your customizations.

The default lab install/setup workflow (proxmox-buster-install) utilizes the Flexiflow Stage that allows it to be dynamically customized, based on the values of the flexiflow/list-parameter Param. Adding one or more existing tasks to this Param will inject those tasks to be run during that stage.

21.36.8.5. stages

The content package provides the following stages.

21.36.8.5.1. proxmox-buster-installer

This Stage does basic setup of the Proxmox VE repositories, sets some debconf selections for the Samba and Postfix packages, and then installs the latest stable version of Proxmox VE.

21.36.8.5.2. proxmox-drp-provision-drp

Provisions the OS on the DRP VMs, from the installed DRP on the Hypervisor.

21.36.8.5.3. proxmox-lab-drp-network

Sets up the DRP for external IP Forwarding and masquerading (nat), and the internal network for the virtual machines to connect to.

The initial setup is done using cloud-init per-once directive, as the DRP Endpoint is built using the image-deploy service with the embedded cloud-init.

21.36.8.5.4. proxmox-drp-destroy-drp

Destroys DRP service installed on the Hypervisor.

21.36.8.5.5. proxmox-drp-install

Installs DRP with an opinionated configuration on a DRP Endpoint.

21.36.8.5.6. proxmox-admin-account

Sets up the admin account in the PVE Realm with Administrator ACLs.

21.36.8.5.7. proxmox-create-storage

Allows for injecting custom storage creation tasks into the running workflow for setting up the Storage subsystems within a Proxmox node.

Set the Param proxmox/flexiflow-create-storage on the machine to a String array list of Tasks to execute. This gets set on the target Proxmox hypervisor(s) you are building.

An example:

proxmox/flexiflow-create-storage:
  - proxmox-storage-setup-dir
  - proxmox-storage-setup-lvmthin

Would specify setting up a Directory type storage provider, followed by setting up an LVM Thin pool for storage.

Note that each injected Task will have its own requirements on Param settings to control the configuration of that task.

Storage configurations are usually managed by use of the proxmox/storage-config Param.

21.36.8.5.8. proxmox-generate-ssh-key

Creates SSH keys and stores them in the proxmox/data-profile named profile.

21.36.8.5.9. flexiflow-buster-install

Allows for injecting custom tasks into the proxmox-buster-install workflow before finishing the install.

Set the Param proxmox/flexiflow-buster-install on the machine to a String array list of Tasks to execute. This gets set on the target Proxmox hypervisor(s) you are building.

21.36.8.6. tasks

The content package provides the following tasks.

21.36.8.6.1. network-simple-single-bridge-with-nat

This network configuration uses the proxmox/lab-external-bridge to directly attach the Virtual Machines to, and assumes IP addressing for the VMs will be provided by proxmox/lab-external-subnet.

This creates a secondary IP interface on the bridge device. In addition, post-up rules will be added to NAT translate outbound traffic for the VMs.

To connect to the VMs, you will generally have to either SSH forward through the Proxmox Hypervisor, install a VPN service of some sort on the Hypervisor, or arrange for your external networking devices (routers/switches) to route this IP block to the addressable interface of the Proxmox Hypervisor.

21.36.8.6.2. proxmox-lab-accounts

Adds the operator account and group operators, with PVEVMUser role and rights to the /vms resources.

21.36.8.6.3. proxmox-lab-createnodes

Set up the proxmox based lab virtual machines.

21.36.8.6.4. proxmox-lab-destroy-users

Nukes the installed users.

21.36.8.6.5. proxmox-storage-setup-dir

This task is injected into a running workflow via the Stage proxmox-create-storage. You must set the Param proxmox/flexiflow-create-storage to include this task name for it to be added to the system.

This Task creates dir type storage on an existing Proxmox VE server if it doesn't yet exist. Note that the default Storage type is lvmthin, which uses a full block device; the dir type instead allows use of a directory to back the VM and Container images.

The proxmox/storage-config Param defines the configuration to use for all storage types, including dir type.

An example configuration for this task:

proxmox/storage-config:
  dir:
    local-images:
      path: /var/lib/images
      content: images,iso
    local:
      path: /var/lib/vz
      content: iso,vztmpl,backup
    backup:
      path: /mnt/backup
      content: backup
      maxfiles: 7

This creates 3 directory structures for storing different content types in the existing filesystem. For documentation on configuration values, please see:

Config values in YAML/JSON stanzas match the Proxmox configuration values.

21.36.8.6.6. network-simple-bridge-with-addressing

This network configuration creates a bridge device on the hypervisor (typically vmbr0), which the DRP Endpoint Virtual Machines will be attached to.

An IP Subnet must be defined via the proxmox/lab-drp-external-subnet Param, and it will be allocated on the interface defined in the Param proxmox/lab-drp-external-bridge.

If this method is used, you will generally have to either SSH forward through the Proxmox Hypervisor, install a VPN service of some sort on the Hypervisor, or arrange for your external networking devices (routers/switches) to route this IP block to the addressable interface of the Proxmox Hypervisor.

The template defined in the Param network-simple-bridge-with-addressing-template will be expanded in place in this script, then rendered to the Hypervisor. This allows for in-the-field custom configurations that may not have been encompassed in the default Template configuration of this content pack.

21.36.8.6.7. proxmox-iso-modify

The Proxmox ISO is not installable via PXE by default. However, with a relatively simple modification, it can be PXE deployed. This task rebuilds the ISO as a Tar GZ (.tgz) which stages the unmodified ISO image as /proxmox.iso, along with the Kernel and InitRD pieces in the boot/ directory for PXE bootstrap.

21.36.8.6.8. proxmox-lab-drp-network

Sets up the DRP Endpoint as an IP forwarding/masquerading gateway for the student lab machines. An internal network is also set up, which the student virtual machines will be connected to, and for which the DRP Endpoint provides the IP Forwarding services.

This is done using the cloud-init per-once process.

Warning

This task requires/assumes CentOS as the base OS for the DRP Endpoints.

21.36.8.6.9. network-add-nat-bridge

This task creates a NAT bridge that will be attached to the proxmox/lab-drp-external-bridge defined bridge.

The NAT bridge will Masquerade for proxmox/lab-nat-subnet.

The template defined in the Param network-add-nat-bridge-template will be expanded in place in this script, then rendered to the Hypervisor. This allows for in-the-field custom configurations that may not have been encompassed in the default Template configuration of this content pack.

Warning

This method appears to not NAT Masquerade traffic correctly. Verify the DRP Endpoints have external network connectivity with this method before relying on it. The post-up/down settings may need to be adjusted.

21.36.8.6.10. proxmox-buster-installer

This task sets up and installs the latest stable Proxmox VE on top of an already installed Debian 10 (Buster) system. This can be run between the finish-install and complete stages of the RackN provided debian-base workflow.

This is also used in the proxmox-buster-install Workflow, which installs Debian 10 (Buster) first.

21.36.8.6.11. proxmox-create-storage

Creates the local-lvm storage on an existing Proxmox VE server if it doesn't yet exist.

21.36.8.6.12. proxmox-debconf-set-selections

This task provides the Debian package preset configuration input values needed to ensure automated installation of the samba and postfix packages. It also allows the operator to pre-seed package configuration answers for any package to be installed.

Set the proxmox/debconf-selections-template Param to the name of your custom template, which must conform to the debconf-set-selections structure.

The template will be saved on the Machine under /root/proxmox-debconf-set-selections and read in prior to package installation.
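
As a sketch, a custom selections template would contain lines in the standard debconf-set-selections format (package, question, type, value); the entries below are illustrative of typical Postfix and Samba preseed answers:

```
# <package> <question> <type> <value>
postfix postfix/main_mailer_type select Local only
postfix postfix/mailname string pve01.example.com
samba-common samba-common/dhcp boolean false
```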

21.36.8.6.13. proxmox-drp-install

This is a very opinionated and quick DRP install on the Proxmox Hypervisor. Future iterations should utilize the Multi Site Manager to control the DRP endpoint.

21.36.8.6.14. proxmox-drp-provision-drp

Provisions the OS for the DRP VMs on the Proxmox host, via the DRP installed on the hypervisor.

21.36.8.6.15. proxmox-generate-ssh-key

This is a very opinionated and quick SSH Key generation task. It will build an ed25519 elliptic-curve Public and Private key pair.

The keys will be stored in the profile specified by the Param proxmox/data-profile.

Once the lab is built, the operator can retrieve the Private Key half and use that in their ssh-agent, or as an ssh -i keyfile ... command line argument.

Warning

This task will overwrite existing Param values, possibly losing previously generated keys.

21.36.8.6.16. kvm-enable-nested

Determines if the machine is running Intel or AMD processors and sets up the nested virtualization capability for hypervisors to work inside virtual machines.

21.36.8.6.17. proxmox-admin-account

Adds the admin account with the Administrator role and rights to the / resources.

21.36.8.6.18. proxmox-drp-destroy-drp

Destroys the DRP service installed on the Hypervisor.

21.36.8.6.19. proxmox-lab-destroy-all-vms

Completely nukes all found Virtual Machines on a proxmox node.

21.36.8.6.20. proxmox-lab-destroy-networks

Completely nukes all found bridge networks in the interfaces.d directory. Used as a cleanup task before re-running a build.

21.36.8.6.21. proxmox-lab-network

Set up the proxmox based lab network environment for the target VM systems.

21.36.8.6.22. proxmox-storage-setup-lvmthin

This task is injected into a running workflow via the Stage proxmox-create-storage. You must set the Param proxmox/flexiflow-create-storage to include this task name for it to be added to the system. Note that this task is defined as the default storage type to set up on the Proxmox node.

This Task creates the local-lvm storage on an existing Proxmox VE server if it doesn't yet exist.

The proxmox/storage-config Param defines the configuration to use for all storage types. However, two (DEPRECATED) legacy Params can be used for backwards-compatible configuration of this storage type. They are described below.

The proxmox/storage-device Param controls the Block device (disk) that will be used to back the Virtual Machines. It will take complete control of the block device. By default it will use /dev/sdb if not otherwise specified.

The proxmox/storage-name Param defines the LVM Volume name that will be set. If the LVM Volume name exists already, the task will exit, assuming that the Volume has already been set up for use previously.

An example proxmox/storage-config configuration for this task:

proxmox/storage-config:
  lvmthin:
    local-lvm:
      name: local-lvm
      device: /dev/sdb
      vgname: pve
      thinpool: data
      content: rootdir,images
      size: 95%FREE

This creates 1 LVM Thinpool for storing different content types using the specified device and names. For documentation on configuration values, please see:

Config values in YAML/JSON stanzas match the Proxmox configuration values.

21.36.8.6.23. network-convert-interface-to-bridge

This task converts the system's Boot Interface to a bridge-enslaved connection.

The DRP Endpoint VMs attach to the Bridge, and they will obtain an IP address either from DHCP or Static IP assignment from the same Layer 3 network that the Hypervisor utilizes.

The template defined in the Param network-convert-interface-to-bridge-template will be expanded in place in this script, then rendered to the Hypervisor. This allows for in-the-field custom configurations that may not have been encompassed in the default Template configuration of this content pack.