Compute Node parameters for input JSON - Profiles

The following is an example of the profiles section:

"profiles": {
	"controller": {
		"type": "controller",
		"interfaces": {
		"ovs": [
				{
				"name": "br-data",
				"type": "bond",
				"physnet": "datacenter",
				"bond_members": [
								"nic3",
								"nic4"
					],
					"bond_linux_options": "mode=4 lacp_rate=1 updelay=1000 miimon=100"
				}					
			   ]
			}	
			},
	"osdcompute": {
		"type": "hci",
		"reserved_host_memory": "4096",
		"vcpu_pin_set": "1,2,3,4,5",
		"cpu_allocation_ratio": "1.0987",
		"isolated_core_list": "7,8,9,10",
		"hugepg_count": "50",
		"ceph-storage": {
		"disk_config": {
		"osd_objectstore": "bluestore",
		"osd_scenario": "lvm",
		"disk_type": {
				"osd": "nvme",
				"journal": "nvme"
						},
			"osds_per_device": "4"
						},
			"extra_config": {
			"osd_memory_target": "8388608"
						},
			"replication_size":
					"2",
			"default_pgnum": "32"
						},
			"interfaces": {
			"sriov": [
					{
					"name": "nic6",
					"physnet": "datacenter",
					"vf": "34"
					},
					{
					"name": "nic7",
					"physnet": "datacenter",
					"vf": "45"
					}
				]
			}
		}
	},
	"name": "RHOSP",
	"version": "13",
	"description": "Redhat OpenStack"
	}
],
"description": "Compute platforms and its configurations"
},
The following table defines each of the parameters:
Parameter Description
profiles Displays a list of custom profiles created. For example:
  • controller

  • osdcompute

  • computeovsdpdk

  • ceph-storage

  • computeovsdpdksriov

type Select any one of the following types of custom profile:
  • controller

  • compute

  • hci

  • ceph

NOTE:

Only one profile of the controller type is allowed.

interfaces Select the interface:
  • sriov

  • ovs

  • dpdk

NOTE:

You can create any combination of interfaces, except ovs and dpdk in the same profile.
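For example, a single profile can combine sriov and ovs interfaces. The following illustrative fragment assumes placeholder NIC names, physnet, and VF values; adjust them for your environment:

```json
"interfaces": {
    "sriov": [
        {
            "name": "nic6",
            "physnet": "datacenter",
            "vf": "34"
        }
    ],
    "ovs": [
        {
            "name": "nic5",
            "type": "interface",
            "physnet": "datacenter"
        }
    ]
}
```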

Custom Profile parameters - osdcompute
reserved_host_memory

Enter the reserved memory (in MB) for the tasks on the host.

NOTE:

The recommended value is 4096 MB for computes. For information about compute calculations, see Calculation of OVS-DPDK parameters.

For the reserved host memory calculations for osdcompute or hci servers, see Calculation of reserved host memory and CPU allocation ratio for HCI servers.

vcpu_pin_set

Enter the cores reserved for the guest instances in the compute node.

For calculation information, see Calculation of reserved host memory and CPU allocation ratio for HCI servers.

isolated_core_list

Enter the list of CPU cores isolated from the host processes.

For calculation information, see Calculation of OVS-DPDK parameters.

hugepg_count
Enter the required number of 1 GB huge pages per compute node.
NOTE:

Ensure that the huge page count does not exceed the available RAM capacity per server, including the reserved memory value.

cpu_allocation_ratio Enter the CPU allocation ratio for the HCI configuration.

For calculation information, see Calculation of reserved host memory and CPU allocation ratio for HCI servers.

(Optional) dpdk

Enter the parameter for the DPDK configuration.

NOTE:

If you want to create a profile with dpdk, the dpdk parameter is mandatory.

DPDK parameters
core_list

Enter the list of CPU cores that are used for non-datapath DPDK processes, such as the handler and revalidator threads.

For calculation information, see Calculation of OVS-DPDK parameters.
pmd_core_list

Enter the list of CPU cores that are used for the DPDK poll mode drivers (PMD).

These CPU cores are associated with the local NUMA nodes of the DPDK interfaces. For the considerations when calculating this parameter, see Calculation of OVS-DPDK parameters.

socket_memory

Enter the memory allocated for each socket in HCI nodes. Specifies the amount of memory (in MB) to preallocate from the hugepage pool per NUMA node.

For calculation information, see Calculation of OVS-DPDK parameters.
memory_channels

Enter the number of memory channels to be used for the DPDK.

For calculation information, see Calculation of OVS-DPDK parameters.
DPDK interface configuration parameters
type Enter the type of interface, either bonded or single. Values are bond or interface.
name Enter the interface name or bridge name based on the type of the interface.
rx_queue Enter the number of RX queues to be set per DPDK port.

Default value: 1

You can edit the value as per your requirement.

physnet The provider network name configured for the interface.
bond_members Enter the physical interface members. For example, nic3, nic4.
NOTE:

When the type is bond, this parameter and its value are required.

bond_ovs_options
Enter the OVS options for the DPDK bond. The following are the supported modes:
bond_mode=[active-backup|balance-slb|balance-tcp] 
lacp=[active|passive]
NOTE:

When the type is bond and the interface type is ovs, this parameter and its value are required.

driver Enter the driver option for dpdk.
NOTE:

The driver option is required for Mellanox NIC cards to configure OVS-DPDK.
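Putting the DPDK parameters together, a compute profile with a bonded DPDK interface might look like the following illustrative fragment. The profile name, core lists, socket memory, NIC names, and bond options are placeholder values that must be derived per Calculation of OVS-DPDK parameters, and the exact nesting should be checked against your deployment's input JSON schema:

```json
"computeovsdpdk": {
    "type": "compute",
    "reserved_host_memory": "4096",
    "vcpu_pin_set": "8,9,10,11",
    "isolated_core_list": "2,3,4,5,6,7,8,9,10,11",
    "hugepg_count": "50",
    "dpdk": {
        "core_list": "2,3",
        "pmd_core_list": "4,5,6,7",
        "socket_memory": "4096,4096",
        "memory_channels": "4"
    },
    "interfaces": {
        "dpdk": [
            {
                "name": "br-dpdk",
                "type": "bond",
                "physnet": "datacenter",
                "rx_queue": "2",
                "bond_members": [
                    "nic5",
                    "nic6"
                ],
                "bond_ovs_options": "bond_mode=balance-tcp lacp=active"
            }
        ]
    }
}
```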

SRIOV interface configuration parameters
name Enter the network interface name.
NOTE:

When the type is interface, this parameter and its value are required.

physnet The provider network name configured for the interface.
vf

Enter the number of virtual functions for an SR-IOV NIC.

NOTE:

The VF count must not exceed the maximum number of VFs supported by the vendor: 64 for Intel and 8 for Mellanox.

Controller parameters
name Enter the interface name or bridge name based on the type of the interface.
type Enter the type of interface, either bonded or single. Values are bond or interface.
physnet The provider network name configured for the interface.
bond_members Enter the physical interface members. For example, nic3, nic4.
NOTE:

When the type is bond, this parameter and its value are required.

bond_linux_options The Linux bonding options for the interface.
NOTE:

When the type is bond, this parameter and its value are required.

Ceph parameters
ceph-storage Displays Ceph Storage related information.
disk_config Displays Ceph Storage disk configuration.
osd_objectstore

Enter the type of object store for Ceph Storage.

Choose one of the values:
  • bluestore

  • filestore

osd_scenario

Enter the storage back end for Ceph Storage.

The values are:
  • lvm for bluestore

  • collocated or non-collocated for filestore

disk_type Enter the type of disk used for OSD and journal.
osd Enter the type of the disk used to configure OSDs on Ceph. The values are HDD, SSD, and NVMe.
journal

Enter the type of the disk used to configure the journal on Ceph.

The values are SSD and NVMe.

osds_per_device
Enter the number of OSDs configured per device on each Ceph node:
  • 1 for HDD

  • 2 for SSD

  • 4 for NVMe

osd_memory_target

Enter the memory resource of an OSD container.

If the osd disk type is HDD, then 4 GB of memory is allocated in OSD containers.

If the osd disk type is SSD, then 6 GB of memory is allocated in OSD containers.

If the osd disk type is NVMe, then 8 GB of memory is allocated in OSD containers.

replication_size

Enter the minimum replication size for RBD copies.

If the osd disk type is HDD, then replication size is 3.

If the osd disk type is SSD or NVMe, then replication size is 2.

default_pgnum

Enter the number of Placement Groups (PGs) used for the RBD pools.

This field is autopopulated; however, you can customize it. Use the standard Ceph PG calculator to calculate the pgnum.
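As an illustration of how these values track the disk type, an HDD-backed filestore configuration might look like the following fragment. The values follow the guidance above (HDD OSDs with an SSD journal, one OSD per device, replication size 3); this is a sketch, not a tested configuration:

```json
"ceph-storage": {
    "disk_config": {
        "osd_objectstore": "filestore",
        "osd_scenario": "non-collocated",
        "disk_type": {
            "osd": "hdd",
            "journal": "ssd"
        },
        "osds_per_device": "1"
    },
    "replication_size": "3",
    "default_pgnum": "32"
}
```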