Getting started with playbooks
Getting started with playbooks Red Hat Ansible Automation Platform 2.5 Get started with Ansible Playbooks Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/getting_started_with_playbooks/index
2.10. Pre-Installation Script
2.10. Pre-Installation Script Figure 2.15. Pre-Installation Script You can add commands to run on the system immediately after the kickstart file has been parsed and before the installation begins. If you have configured the network in the kickstart file, the network is enabled before this section is processed. To include a pre-installation script, type it in the text area. To specify a scripting language to use to execute the script, select the Use an interpreter option and enter the interpreter in the text box beside it. For example, /usr/bin/python2.2 can be specified for a Python script. This option corresponds to using %pre --interpreter /usr/bin/python2.2 in your kickstart file. Warning Do not include the %pre command. It is added for you.
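For reference, here is a rough sketch of the kickstart snippet that these selections produce. The %pre line mirrors the interpreter example above; the script body itself is a hypothetical illustration (the file name /tmp/pre-install.log is a placeholder), not output copied from the tool.

%pre --interpreter /usr/bin/python2.2
# Hypothetical pre-installation script body: record a timestamp before installation begins.
import time
open('/tmp/pre-install.log', 'w').write('pre-installation script started at %s\n' % time.ctime())

Because the tool adds %pre for you, only the script body would be typed into the text area.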
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/RHKSTOOL-Pre_Installation_Script
Chapter 2. Configuring an Azure account
Chapter 2. Configuring an Azure account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account to meet installation requirements. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 2.1. Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 44 20 per region A default cluster requires 44 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap and control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the compute machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 44 vCPUs. The bootstrap node VM, which uses 8 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. OS Disk 7 Each cluster machine must have a minimum of 100 GB of storage and 300 IOPS. While these are the minimum supported values, faster storage is recommended for production clusters and clusters with intensive workloads. For more information about optimizing storage for performance, see the page titled "Optimizing storage" in the "Scalability and performance" section. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 65,536 per region Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each cluster creates network security groups for each subnet in the VNet. 
The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. Additional resources Optimizing storage . 2.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. 2.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. 
For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click : Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click : Review + create and then click Create . 2.4. Recording the subscription and tenant IDs The installation program requires the subscription and tenant IDs that are associated with your Azure account. You can use the Azure CLI to gather this information. Prerequisites You have installed or updated the Azure CLI . Procedure Log in to the Azure CLI by running the following command: USD az login Ensure that you are using the right subscription: View a list of available subscriptions by running the following command: USD az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 1", "state": "Enabled", "tenantId": "6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } }, { "cloudName": "AzureCloud", "id": "9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": false, "name": "Subscription Name 2", "state": "Enabled", "tenantId": "7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } ] View the details of the active account, and confirm that this is the subscription you want to use, by running the following command: USD az account show Example output { "environmentName": "AzureCloud", "id": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 1", "state": "Enabled", "tenantId": "6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } If you are not using the right subscription: Change the active subscription by running the following command: USD az account set -s <subscription_id> Verify that you are using the subscription you need by running the following command: USD az account show Example output { "environmentName": "AzureCloud", "id": "9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 2", "state": "Enabled", "tenantId": "7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } Record the id and tenantId parameter values from the output. You require these values to install an OpenShift Container Platform cluster. 2.5. Supported identities to access Azure resources An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. As such, you need one of the following types of identities to complete the installation: A service principal A system-assigned managed identity A user-assigned managed identity 2.5.1. Required Azure roles An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. Before you create the identity, verify that your environment meets the following requirements: The Azure account that you use to create the identity is assigned the User Access Administrator and Contributor roles. These roles are required when: Creating a service principal or user-assigned managed identity. Enabling a system-assigned managed identity on a virtual machine. 
If you are going to use a service principal to complete the installation, verify that the Azure account that you use to create the identity is assigned the microsoft.directory/servicePrincipals/createAsOwner permission in Microsoft Entra ID. To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 2.5.2. Required Azure permissions for installer-provisioned infrastructure The installation program requires access to an Azure service principal or managed identity with the necessary permissions to deploy the cluster and to maintain its daily operation. These permissions must be granted to the Azure subscription that is associated with the identity. The following options are available to you: You can assign the identity the Contributor and User Access Administrator roles. Assigning these roles is the quickest way to grant all of the required permissions. For more information about assigning roles, see the Azure documentation for managing access to Azure resources using the Azure portal . If your organization's security policies require a more restrictive set of permissions, you can create a custom role with the necessary permissions. The following permissions are required for creating an OpenShift Container Platform cluster on Microsoft Azure. Example 2.1. Required permissions for creating authorization resources Microsoft.Authorization/policies/audit/action Microsoft.Authorization/policies/auditIfNotExists/action Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/write Example 2.2. Required permissions for creating compute resources Microsoft.Compute/availabilitySets/read Microsoft.Compute/availabilitySets/write Microsoft.Compute/disks/beginGetAccess/action Microsoft.Compute/disks/delete Microsoft.Compute/disks/read Microsoft.Compute/disks/write Microsoft.Compute/galleries/images/read Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/galleries/images/versions/write Microsoft.Compute/galleries/images/write Microsoft.Compute/galleries/read Microsoft.Compute/galleries/write Microsoft.Compute/snapshots/read Microsoft.Compute/snapshots/write Microsoft.Compute/snapshots/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/virtualMachines/powerOff/action Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/write Example 2.3. Required permissions for creating identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/assign/action Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Example 2.4. 
Required permissions for creating network resources Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Microsoft.Network/loadBalancers/backendAddressPools/join/action Microsoft.Network/loadBalancers/backendAddressPools/read Microsoft.Network/loadBalancers/backendAddressPools/write Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkInterfaces/join/action Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Network/networkSecurityGroups/join/action Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/securityRules/delete Microsoft.Network/networkSecurityGroups/securityRules/read Microsoft.Network/networkSecurityGroups/securityRules/write Microsoft.Network/networkSecurityGroups/write Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/A/write Microsoft.Network/privateDnsZones/A/delete Microsoft.Network/privateDnsZones/SOA/read Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/write Microsoft.Network/privateDnsZones/write Microsoft.Network/publicIPAddresses/delete Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Microsoft.Network/virtualNetworks/join/action Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Network/virtualNetworks/subnets/write Microsoft.Network/virtualNetworks/write Note The following permissions are not required to create the private OpenShift Container Platform cluster on Azure. Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Example 2.5. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/InProgress/action Microsoft.Resourcehealth/healthevent/Pending/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 2.6. Required permissions for creating a resource group Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourcegroups/write Example 2.7. Required permissions for creating resource tags Microsoft.Resources/tags/write Example 2.8. Required permissions for creating storage resources Microsoft.Storage/storageAccounts/blobServices/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/fileServices/read Microsoft.Storage/storageAccounts/fileServices/shares/read Microsoft.Storage/storageAccounts/fileServices/shares/write Microsoft.Storage/storageAccounts/fileServices/shares/delete Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Example 2.9. 
Optional permissions for creating a private storage endpoint for the image registry Microsoft.Network/privateEndpoints/write Microsoft.Network/privateEndpoints/read Microsoft.Network/privateEndpoints/privateDnsZoneGroups/write Microsoft.Network/privateEndpoints/privateDnsZoneGroups/read Microsoft.Network/privateDnsZones/join/action Microsoft.Storage/storageAccounts/PrivateEndpointConnectionsApproval/action Example 2.10. Optional permissions for creating marketplace virtual machine resources Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/read Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/write Example 2.11. Optional permissions for creating compute resources Microsoft.Compute/availabilitySets/delete Microsoft.Compute/images/read Microsoft.Compute/images/write Microsoft.Compute/images/delete Example 2.12. Optional permissions for enabling user-managed encryption Microsoft.Compute/diskEncryptionSets/read Microsoft.Compute/diskEncryptionSets/write Microsoft.Compute/diskEncryptionSets/delete Microsoft.KeyVault/vaults/read Microsoft.KeyVault/vaults/write Microsoft.KeyVault/vaults/delete Microsoft.KeyVault/vaults/deploy/action Microsoft.KeyVault/vaults/keys/read Microsoft.KeyVault/vaults/keys/write Microsoft.Features/providers/features/register/action Example 2.13. Optional permissions for installing a cluster using the NatGateway outbound type Microsoft.Network/natGateways/read Microsoft.Network/natGateways/write Example 2.14. Optional permissions for installing a private cluster with Azure Network Address Translation (NAT) Microsoft.Network/natGateways/join/action Microsoft.Network/natGateways/read Microsoft.Network/natGateways/write Example 2.15. Optional permissions for installing a private cluster with Azure firewall Microsoft.Network/azureFirewalls/applicationRuleCollections/write Microsoft.Network/azureFirewalls/read Microsoft.Network/azureFirewalls/write Microsoft.Network/routeTables/join/action Microsoft.Network/routeTables/read Microsoft.Network/routeTables/routes/read Microsoft.Network/routeTables/routes/write Microsoft.Network/routeTables/write Microsoft.Network/virtualNetworks/peer/action Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write Example 2.16. Optional permission for running gather bootstrap Microsoft.Compute/virtualMachines/retrieveBootDiagnosticsData/action The following permissions are required for deleting an OpenShift Container Platform cluster on Microsoft Azure. You can use the same permissions to delete a private OpenShift Container Platform cluster on Azure. Example 2.17. Required permissions for deleting authorization resources Microsoft.Authorization/roleAssignments/delete Example 2.18. Required permissions for deleting compute resources Microsoft.Compute/disks/delete Microsoft.Compute/galleries/delete Microsoft.Compute/galleries/images/delete Microsoft.Compute/galleries/images/versions/delete Microsoft.Compute/virtualMachines/delete Example 2.19. Required permissions for deleting identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/delete Example 2.20. 
Required permissions for deleting network resources Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Microsoft.Network/loadBalancers/delete Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkSecurityGroups/delete Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/delete Microsoft.Network/privateDnsZones/virtualNetworkLinks/delete Microsoft.Network/publicIPAddresses/delete Microsoft.Network/virtualNetworks/delete Note The following permissions are not required to delete a private OpenShift Container Platform cluster on Azure. Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Example 2.21. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 2.22. Required permissions for deleting a resource group Microsoft.Resources/subscriptions/resourcegroups/delete Example 2.23. Required permissions for deleting storage resources Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/listKeys/action Note To install OpenShift Container Platform on Azure, you must scope the permissions to your subscription. Later, you can re-scope these permissions to the installer created resource group. If the public DNS zone is present in a different resource group, then the network DNS zone related permissions must always be applied to your subscription. By default, the OpenShift Container Platform installation program assigns the Azure identity the Contributor role. You can scope all the permissions to your subscription when deleting an OpenShift Container Platform cluster. 2.5.3. Using Azure managed identities The installation program requires an Azure identity to complete the installation. You can use either a system-assigned or user-assigned managed identity. If you are unable to use a managed identity, you can use a service principal. Procedure If you are using a system-assigned managed identity, enable it on the virtual machine that you will run the installation program from. If you are using a user-assigned managed identity: Assign it to the virtual machine that you will run the installation program from. Record its client ID. You require this value when installing the cluster. For more information about viewing the details of a user-assigned managed identity, see the Microsoft Azure documentation for listing user-assigned managed identities . Verify that the required permissions are assigned to the managed identity. 2.5.4. Creating a service principal The installation program requires an Azure identity to complete the installation. You can use a service principal. If you are unable to use a service principal, you can use a managed identity. Prerequisites You have installed or updated the Azure CLI . You have an Azure subscription ID. If you are not going to assign the Contributor and User Administrator Access roles to the service principal, you have created a custom role with the required Azure permissions. 
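If your security policies require the custom-role path mentioned in the prerequisites above, the following is a minimal sketch of creating such a role with the Azure CLI before you create the service principal. This is not part of the official procedure: the role name and file name are placeholders, and the Actions list is deliberately truncated; it must be expanded to cover every permission listed in the examples in section 2.5.2.

$ cat <<'EOF' > ocp-installer-role.json
{
    "Name": "openshift-installer-custom-role",
    "IsCustom": true,
    "Description": "Example custom role for installing OpenShift Container Platform.",
    "Actions": [
        "Microsoft.Compute/virtualMachines/write",
        "Microsoft.Network/virtualNetworks/write",
        "Microsoft.Resources/subscriptions/resourceGroups/read"
    ],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/<subscription_id>"]
}
EOF
$ az role definition create --role-definition ocp-installer-role.json

You can then pass the custom role name as <role_name> in the command below.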
Procedure Create the service principal for your account by running the following command: $ az ad sp create-for-rbac --role <role_name> \ 1 --name <service_principal> \ 2 --scopes /subscriptions/<subscription_id> 3 1 Defines the role name. You can use the Contributor role, or you can specify a custom role that contains the necessary permissions. 2 Defines the service principal name. 3 Specifies the subscription ID. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "axxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" } Record the values of the appId and password parameters from the output. You require these values when installing the cluster. If you applied the Contributor role to your service principal, assign the User Access Administrator role by running the following command: $ az role assignment create --role "User Access Administrator" \ --assignee-object-id $(az ad sp show --id <appId> --query id -o tsv) 1 --scope /subscriptions/<subscription_id> 2 1 Specifies the appId parameter value for your service principal. 2 Specifies the subscription ID. Additional resources About the Cloud Credential Operator 2.6. Supported Azure Marketplace regions Installing a cluster using the Azure Marketplace image is available to customers who purchase the offer in North America and EMEA. While the offer must be purchased in North America or EMEA, you can deploy the cluster to any of the Azure public partitions that OpenShift Container Platform supports. Note Deploying a cluster using the Azure Marketplace image is not supported for the Azure Government regions. 2.7. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription.
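As a quick, illustrative check of which regions your subscription can actually deploy to, you can list them with the Azure CLI; this is a convenience, not a step the installation program requires:

$ az account list-locations --output table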
Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) israelcentral (Israel Central) italynorth (Italy North) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) mexicocentral (Mexico Central) newzealandnorth (New Zealand North) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) polandcentral (Poland Central) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) spaincentral (Spain Central) swedencentral (Sweden Central) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) westus3 (West US 3) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 2.8. steps Install an OpenShift Container Platform cluster on Azure. You can install a customized cluster or quickly install a cluster with default options.
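Before moving on to the installation, it can also help to confirm that the vCPU quota described in section 2.1 is available in your target region. A minimal sketch with the Azure CLI, using eastus as an example region:

$ az vm list-usage --location eastus --output table

Compare the current value and limit columns for the regional vCPU entries against the 44 vCPUs that a default cluster requires; the exact quota names in the output can vary.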
[ "az login", "az account list --refresh", "[ { \"cloudName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }, { \"cloudName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": false, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id>", "az account show", "{ \"environmentName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"axxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\" }", "az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 --scope /subscriptions/<subscription_id> 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_azure/installing-azure-account
Introduction to the Migration Toolkit for Runtimes
Introduction to the Migration Toolkit for Runtimes Migration Toolkit for Runtimes 1.2 Learn how to use the Migration Toolkit for Runtimes to migrate and modernize Java applications and components. Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/introduction_to_the_migration_toolkit_for_runtimes/index
Chapter 1. Preparing to install on Alibaba Cloud
Chapter 1. Preparing to install on Alibaba Cloud Important OpenShift Container Platform on Alibaba Cloud is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Requirements for installing OpenShift Container Platform on Alibaba Cloud Before installing OpenShift Container Platform on Alibaba Cloud, you must configure and register your domain, create a Resource Access Management (RAM) user for the installation, and review the supported Alibaba Cloud data center regions and zones for the installation. 1.3. Registering and configuring an Alibaba Cloud domain To install OpenShift Container Platform, the Alibaba Cloud account you use must have a dedicated public hosted zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Alibaba Cloud or another source. Note If you purchase a new domain through Alibaba Cloud, it takes time for the relevant DNS changes to propagate. For more information about purchasing domains through Alibaba Cloud, see Alibaba Cloud domains . If you are using an existing domain and registrar, migrate its DNS to Alibaba Cloud. See Domain name transfer in the Alibaba Cloud documentation. Configure DNS for your domain. This includes: Registering a generic domain name . Completing real-name verification for your domain name . Applying for an Internet Content Provider (ICP) filing . Enabling domain name resolution . Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you are using a subdomain, follow your company's procedures to add its delegation records to the parent domain. 1.4. Supported Alibaba regions You can deploy an OpenShift Container Platform cluster to the regions listed in the Alibaba Regions and zones documentation . 1.5. Next steps Create the required Alibaba Cloud resources .
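Before moving on, it can be worth verifying that the delegation for the hosted zone described in section 1.3 is in place, because the zone must be authoritative for the domain. A small, generic check with dig, using the example subdomain from above; this is an illustration, not part of the documented procedure:

$ dig +short NS clusters.openshiftcorp.com

The output should list the name servers assigned to your Alibaba Cloud hosted zone; if it still lists the previous registrar's name servers, the delegation has not propagated yet.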
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_alibaba/preparing-to-install-on-alibaba
Chapter 4. Installing a cluster on AWS with customizations
Chapter 4. Installing a cluster on AWS with customizations In OpenShift Container Platform version 4.15, you can install a customized cluster on infrastructure that the installation program provisions on Amazon Web Services (AWS). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Note The scope of the OpenShift Container Platform installation configurations is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more OpenShift Container Platform configuration tasks after an installation completes. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. 
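After the key has been passed to the nodes, connecting is an ordinary SSH login as the core user. A minimal illustration, where the key path and node address are placeholders:

$ ssh -i <path>/<file_name> core@<node_address>

If the private key identity is loaded into your ssh-agent, the -i option can be omitted.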
Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster. Sample install-config.yaml file with AWS Marketplace compute nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA... pullSecret: '{"auths": ...}' 1 The AMI ID from your AWS Marketplace subscription. 2 Your AMI ID is associated with a specific AWS Region. 
When creating the installation configuration file, ensure that you select the same AWS Region that you specified when configuring your subscription. 4.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. 
At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on AWS". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for AWS 4.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 4.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 4.6.2. 
Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 4.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 4.6.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 4.2. Machine types based on 64-bit ARM architecture c6g.* m6g.* r8g.* 4.6.4. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-0c5d3e03c0ab9b19a 16 serviceEndpoints: 17 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 18 sshKey: ssh-ed25519 AAAA... 19 pullSecret: '{"auths": ...}' 20 1 12 14 20 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . 
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 17 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 18 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 19 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 4.6.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. 
Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 4.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ...
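Optional: before you generate manifests, you can confirm that the value was saved; for example, assuming your install-config.yaml file is in <installation_directory> : USD grep credentialsMode <installation_directory>/install-config.yaml The command prints the credentialsMode line if the parameter is set.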
If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 4.8.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 4.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). 
You have created an AWS account for the ccoctl utility to use with the following permissions: Example 4.3. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 4.4. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 4.8.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. 
This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 4.8.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. 
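For reference, a filled-in invocation might look like the following; the name, region, and directory paths are illustrative values only, not defaults: USD ccoctl aws create-all --name=mycluster --region=us-west-2 --credentials-requests-dir=./credrequests --output-dir=./ccoctl-output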
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 4.8.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. 
Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. 
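For example, assuming the AWS CLI is configured for the same account, you can filter the role listing by the --name value that you passed to ccoctl , shown here as <name> : USD aws iam list-roles --query "Roles[?contains(RoleName, '<name>')].RoleName" --output text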
For more information, refer to AWS documentation on listing IAM roles. 4.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 4.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.11. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 4.12. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 4.13. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: aws: amiID: ami-06c4d345f7c207239 1 type: m5.4xlarge replicas: 3 metadata: name: test-cluster platform: aws: region: us-east-2 2 sshKey: ssh-ed25519 AAAA pullSecret: '{\"auths\": ...}'", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-0c5d3e03c0ab9b19a 16 serviceEndpoints: 17 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 18 sshKey: ssh-ed25519 AAAA... 19 pullSecret: '{\"auths\": ...}' 20", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: 
v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> 
--credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_aws/installing-aws-customizations
4.8.2. Modifying a Failover Domain
4.8.2. Modifying a Failover Domain To modify a failover domain, follow the steps in this section. From the cluster-specific page, you can configure Failover Domains for that cluster by clicking on Failover Domains along the top of the cluster display. This displays the failover domains that have been configured for this cluster. Click on the name of a failover domain. This displays the configuration page for that failover domain. To modify the Prioritized , Restricted , or No Failback properties for the failover domain, click or unclick the check box to the property and click Update Properties . To modify the failover domain membership, click or unclick the check box to the cluster member. If the failover domain is prioritized, you can also modify the priority setting for the cluster member. Click Update Settings .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-config-modify-failoverdm-conga-CA
3.6. Appendix - Setting up Red Hat Gluster Storage in Microsoft Azure in ASM Mode
3.6. Appendix - Setting up Red Hat Gluster Storage in Microsoft Azure in ASM Mode This section provides step-by-step instructions to set up Red Hat Gluster Storage in Microsoft Azure. 3.6.1. Obtaining Red Hat Gluster Storage for Microsoft Azure To download the Red Hat Gluster Storage Server files using a Red Hat Subscription or a Red Hat Evaluation Subscription: Visit the Red Hat Customer Service Portal at https://access.redhat.com/login and enter your user name and password to log in. Click Downloads to visit the Software & Download Center . In the Red Hat Gluster Storage Server area, click Download Software to download the latest version of the VHD image. Navigate to the directory where the file was downloaded and execute the sha256sum command on the file. For example, The value generated by the sha256sum utility must match the value displayed on the Red Hat Customer Portal for the file. If they are not the same, your download is either incomplete or corrupt, and you will need to download the file again. If the checksum is not successfully validated after several attempted downloads, contact Red Hat Support for assistance. Unzip the downloaded file rhgs-azure-[version].zip to extract the archive contents. For example, 3.6.2. Define the Network Topology By default, deploying an instance into a cloud service will pick up a dynamically assigned, internal IP address. This address may change and vary from site to site. For some configurations, consider defining one or more virtual networks within your account for instances to connect to. That establishes a networking configuration similar to an on-premise environment. To create a simple network: Create the cloud service for the Gluster Storage nodes. For example, cloudapp.net will be appended to the service name, and the full service name will be exposed directly to the Internet. In this case, rhgs313-cluster.cloudapp.net. Create a virtual network for the Gluster Storage nodes to connect to. In this example, the network is created within the East US location. This defines a network within a single region. Features like geo-replication within Gluster Storage require a vnet-to-vnet configuration. A vnet-to-vnet configuration connects virtual networks through VPN gateways. Each virtual network can be within the same region or across regions to address disaster recovery scenarios. Joining VPNs together requires a shared key, and it is not possible to pass a shared key through the Microsoft Azure CLI. To define a vnet-to-vnet configuration, use the Windows Powershell or use the Microsoft Azure REST API. 3.6.3. Upload the Disk Image to Microsoft Azure The disk image can be uploaded and used as a template for creating Gluster Storage nodes. Note Microsoft Azure commands must be issued from the local account configured to use the xplat-cli. To upload the image to Microsoft Azure, navigate to the directory where the VHD image is stored and run the following command: For example, Once complete, confirm the image is available: Note The output of an instance image list will show public images as well as images specific to your account (User), so awk is used to display only the images added under the Microsoft Azure account. 3.6.4. Deploy the Gluster Storage Instances Individual Gluster Storage instances in Microsoft Azure can be configured into a cluster. You must first create the instances from the prepared image and then attach the data disks. To create instances from the prepared image For example, Adding 1023 GB data disk to each of the instances. 
For example Perform the above steps of creating instances and attaching disks for all the instances Confirm that the instances have been properly created: A Microsoft Azure availability set provides a level of fault tolerance to the instances it holds, protecting against system failure or planned outages. This is achieved by ensuring instances within the same availability set are deployed across different fault and upgrade domains within a Microsoft Azure datacenter. When Gluster Storage replicates data between bricks, associate the replica sets to a specific availability set. By using availability sets in the replication design, incidents within the Microsoft Azure infrastructure cannot affect all members of a replica set simultaneously. Each instance is assigned a static IP ( -S ) within the rhgs- - virtual network and an endpoint added to the cloud service to allow SSH access ( --ssh port ). There are single quotation marks (') around the password to prevent bash interpretation issues. Example Following is the example for creating four instances from the prepared image. They are named rhgs31-n . Their IP address are 10.18.0.11 to 10.18.0.14. As the instances are created ( azure vm create ), they can be added to the same availability set ( --availability-set ). Add four 1023 GB data disks to each of the instances. Confirm that the instances have been properly created: Note This example uses static IP addresses, but this is not required. If you're creating a single Gluster Storage cluster and do not need features like geo-replication, it is possible to use the dynamic IPs automatically assigned by Microsoft Azure. The only important thing is that the Gluster Storage cluster is defined by name. 3.6.5. Configure the Gluster Storage Cluster Configure these instances to form a trusted storage pool (cluster). Note If you are using Red Hat Enterprise Linux 7 machines, log in to the Microsoft Azure portal and reset the password for the VMs and also restart the VMs. On Red Hat Enterprise Linux 6 machines, password reset is not required. Log into each node. Register each node to Red Hat Network using the subscription-manager command, and attach the relevant Red Hat Storage subscriptions. For information on subscribing to the Red Hat Gluster Storage 3.5 channels, see the Installing Red Hat Gluster Storage chapter in the Red Hat Gluster Storage 3.5 Installation Guide . Update each node to ensure the latest enhancements and patches are in place. Follow the instructions in the Adding Servers to the Trusted Storage Pool chapter in the Red Hat Gluster Storage Administration Guide to create the trusted storage pool.
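As an illustration only, using the example instance names from this section ( rhgs31-1 through rhgs31-4 ), you could form the trusted storage pool by probing the other nodes from the first node and then checking the peer status:
gluster peer probe rhgs31-2
gluster peer probe rhgs31-3
gluster peer probe rhgs31-4
gluster peer status
The exact host names depend on your deployment; follow the referenced Administration Guide chapter for the complete procedure and prerequisites.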
[ "sha256sum rhgs-azure-3.5-rhel-7-x86_64.tar.gz 2d083222d6a3c531fa2fbbd21c9ea5b2c965d3b8f06eb8ff3b2b0efce173325d rhgs-azure-3.5-rhel-7-x86_64.tar.gz", "tar -xvzf rhgs-azure-3.5-rhel-7-x86_64.tar.gz", "azure service create --serviceName service_name --location location", "azure service create --serviceName rhgs313-cluster --location \"East US\" info: Executing command service create + Creating cloud service data: Cloud service name rhgs313-cluster info: service create command OK", "azure network vnet create --vnet \"rhgs313-vnet\" --location \"East US\" --address-space 10.18.0.0 --cidr 16 info: Executing command network vnet create info: Using default subnet start IP: 10.18.0.0 info: Using default subnet cidr: 19 + Looking up network configuration + Looking up locations + Setting network configuration info: network vnet create command OK", "azure vm image create image_name --location location --os linux VHD_image_name", "azure vm image create rhgs-3.1.3 --location \"East US\" --os linux rhgs313.vhd info: Executing command vm image create + Retrieving storage accounts info: VHD size : 20 GB info: Uploading 20973568.5 KB Requested:100.0% Completed:100.0% Running: 0 Time: 7m50s Speed: 3876 KB/s info: https://bauderhel7.blob.core.windows.net/vm-images/rhgs313.vhd was uploaded successfully info: vm image create command OK", "azure vm image list | awk 'USD3 == \"User\" {print USD2;}'", "azure vm create --vm-name vm_name --availability-set name_of_the_availability_set --vm-size size --virtual-network-name vnet_name --ssh port_number --connect cluster_name username_and_password", "azure vm create --vm-name rhgs313-1 --availability-set AS1 -S 10.18.0.11 --vm-size Medium --virtual-network-name rhgs313-vnet --ssh 50001 --connect rhgs313-cluster rhgs-3.1.3 rhgsuser 'AzureAdm1n!' info: Executing command vm create + Looking up image rhgs-313 + Looking up virtual network + Looking up cloud service + Getting cloud service properties + Looking up deployment + Creating VM info: OK info: vm create command OK", "azure vm disk attach-new VM_name 1023", "azure vm disk attach-new rhgs313-1 1023 info: Executing command vm disk attach-new + Getting virtual machines + Adding Data-Disk info: vm disk attach-new command OK", "azure vm list azure vm show vm-name", "for i in 1 2 3 4; do as=USD((i/3)); azure vm create --vm-name rhgs31-USDi --availability-set ASUSDas -S 10.18.0.1USDi --vm-size Medium --virtual-network-name rhgs-vnet --ssh 5000USDi --connect rhgs-cluster rhgs3.1 rhgsuser 'AzureAdm1n!'; done", "for node in 1 2 3 4; do for disk in 1 2 3 4; do azure vm disk attach-new rhgs31-USDnode 1023; done ; done", "azure vm list azure vm show vm-name", "ssh [email protected] -p 50001", "yum update" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/deployment_guide_for_public_cloud/chap-documentation-deployment_guide_for_public_cloud-azure-setting_up_rhgs_azure_asm
Chapter 1. Introducing configuration management by using Puppet
Chapter 1. Introducing configuration management by using Puppet You can use Puppet to manage and automate configurations of hosts. Puppet uses a declarative language to describe the desired state of hosts. Puppet increases your productivity as you can administer multiple hosts simultaneously. At the same time, it decreases your configuration effort as Puppet makes it easy to verify and possibly correct the state of the hosts. Additional resources Open Source Puppet documentation Puppet Forge - a repository of pre-built Puppet modules 1.1. How Puppet integrates with Satellite Puppet uses a server-agent architecture. The Puppet server is the central component that stores configuration definitions. Satellite Server or Capsule Servers are typically deployed with the Puppet server and Satellite acts as an External Node Classifier (ENC) for such Puppet server. Hosts run the Puppet agent that communicates with the Puppet server. The Puppet agent collects facts about a host and reports them to the Puppet server on each run. You can display the Puppet facts in JSON format by running puppet facts on a host. The Puppet server forwards facts to Satellite and Satellite stores them for later use. Based on the facts and other definitions, Satellite constructs the ENC answer to the Puppet server. The Puppet server compiles a catalog based on the ENC answer and sends the catalog to the Puppet agent. The Puppet agent evaluates the system state on the host. If the Puppet agent finds differences, known as drifts , between the desired state defined in the catalog and the actual state, it enforces correction of the state of the host. The Puppet agent then reports correction results back to the Puppet server, which reports them to Satellite. Puppet modules The desired state of a host is defined in a catalog . The catalog is compiled from Puppet manifests of one or more Puppet modules assigned to the host. A Puppet module is a collection of classes, manifests, resources, files, and templates. The Puppet modules work as components of host configuration definitions. Smart Class parameters You can override parameters of a Puppet module by using Smart Class parameters if the module supports the use of parameters. You can define the parameters in your Satellite as key-value pairs, which behave similar to host parameters or Ansible variables. Puppet environments You can also create multiple Puppet environments to control versions of configuration definitions or to manage variants of the definitions, and to test the definitions before you deploy them on production. High-level integration steps Puppet integration with Satellite involves the following high-level steps: Enable Puppet integration . Import Puppet agent packages into Satellite. Puppet agent packages can be managed like any other content with Satellite by enabling Red Hat Repositories and by using activation keys and content views . Install Puppet agent on hosts during provisioning , registration , manually , or by remote job execution. Additional resources Managing content Registering Hosts in the Managing Hosts Guide Configuring and Setting Up Remote Jobs in the Managing Hosts Guide The following procedures outline how to use a Puppet module to install, configure, and manage the ntp service to provide examples. 1.2. Supported Puppet versions and system requirements Before you begin with the Puppet integration, review the supported Puppet versions and system requirements. Supported Puppet Versions Satellite supports Puppet server 8. 
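If you are unsure which versions are currently deployed, you can check them directly with standard Puppet commands. For example, on the Satellite Server or Capsule Server that runs the Puppet server:
puppetserver --version
On a host, to display the Puppet agent version:
puppet --version
These commands come from Puppet itself and are shown here only as a convenience.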
Ensure that the Puppet modules used to configure your hosts are compatible with your Puppet version. On hosts, you can use Puppet agent 7. System Requirements Before you begin integrating Puppet with your Satellite, ensure that you meet the system requirements. For more information, see System Requirements for Puppet 7 in the Open Source Puppet documentation. 1.3. Enabling Puppet integration with Satellite By default, Satellite does not have any Puppet integration configured. You need to enable the integration as is appropriate for your situation. This means that you can configure Satellite to manage and deploy Puppet server on Satellite Server or Capsule Servers. Additionally, you can deploy Puppet server to Satellite externally and integrate it with Satellite for reporting, facts, and external node classification (ENC). Procedure Enable Puppet integration and install Puppet server on Satellite Server: If you want to use Puppet integration on Capsule Servers, enable Puppet integration and install Puppet server on Capsule Servers: 1.4. Installing and configuring Puppet agent during host provisioning You can install and configure the Puppet agent on a host during the provisioning process. A configured Puppet agent is required on the host for Puppet integration with your Satellite. Prerequisites Puppet must be enabled in your Satellite. For more information, see Section 1.3, "Enabling Puppet integration with Satellite" . Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server and enabled in the activation key you use. For more information, see Importing Content in Managing content . You have an activation key. For more information, see Managing Activation Keys in Managing content . Procedure Navigate to Hosts > Templates > Provisioning Templates . Select a provisioning template depending on your host provisioning method. For more information, see Kinds of Provisioning Templates in Provisioning hosts . Ensure the puppet_setup snippet is included as follows: Note that this snippet is already included in the templates shipped with Satellite, such as Kickstart default or Preseed default . Enable the Puppet agent using a host parameter in global parameters, a host group, or for a single host. To use Puppet 8, add a host parameter named enable-puppet8 , select the boolean type, and set the value to true . To use Puppet 7, add a host parameter named enable-puppet7 , select the boolean type, and set the value to true . Set configuration for the Puppet agent. If you use an integrated Puppet server, ensure that you select a Puppet Capsule, Puppet CA Capsule, and Puppet environment when you create a host. If you use a non-integrated Puppet server, either set the following host parameters in global parameters, or a host group, or when you create a host: Add a host parameter named puppet_server , select the string type, and set the value to the hostname of your Puppet server, such as puppet.example.com . Optional: Add a host parameter named puppet_ca_server , select the string type, and set the value to the hostname of your Puppet CA server, such as puppet-ca.example.com . If puppet_ca_server is not set, the Puppet agent will use the same server as puppet_server . Optional: Add a host parameter named puppet_environment , select the string type, and set the value to the Puppet environment you want the host to use. Ensure your host has access to the Puppet agent packages from Satellite Server by using an appropriate activation key. 1.5. 
Installing and configuring Puppet agent during host registration You can install and configure the Puppet agent on the host during registration. A configured Puppet agent is required on the host for Puppet integration with your Satellite. Prerequisites Puppet must be enabled in your Satellite. For more information, see Section 1.3, "Enabling Puppet integration with Satellite" . Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server and enabled in the activation key you use. For more information, see Importing Content in Managing content . You have an activation key. For more information, see Managing Activation Keys in Managing content . Procedure In the Satellite web UI, navigate to Configure > Global Parameters to add host parameters globally. Alternatively, you can navigate to Configure > Host Groups and edit or create a host group to add host parameters only to a host group. Enable the Puppet agent using a host parameter in global parameters or a host group. Add a host parameter named enable-puppet7 , select the boolean type, and set the value to true . Specify configuration for the Puppet agent using the following host parameters in global parameters or a host group: Add a host parameter named puppet_server , select the string type, and set the value to the hostname of your Puppet server, such as puppet.example.com . Optional: Add a host parameter named puppet_ca_server , select the string type, and set the value to the hostname of your Puppet CA server, such as puppet-ca.example.com . If puppet_ca_server is not set, the Puppet agent will use the same server as puppet_server . Optional: Add a host parameter named puppet_environment , select the string type, and set the value to the Puppet environment you want the host to use. Until the BZ2177730 is resolved, you must use host parameters to specify the Puppet agent configuration even in integrated setups where the Puppet server is a Capsule Server. Navigate to Hosts > Register Host and register your host using an appropriate activation key. For more information, see Registering Hosts in Managing hosts . Navigate to Infrastructure > Capsules . From the list in the Actions column for the required Capsule Server, select Certificates . Click Sign to the right of the required host to sign the SSL certificate for the Puppet agent. 1.6. Installing and configuring Puppet agent manually You can install and configure the Puppet agent on a host manually. A configured Puppet agent is required on the host for Puppet integration with your Satellite. Prerequisites Puppet must be enabled in your Satellite. For more information, see Section 1.3, "Enabling Puppet integration with Satellite" . The host must have a Puppet environment assigned to it. Red Hat Satellite Client 6 repository for the operating system version of the host is synchronized on Satellite Server, available in the content view and the lifecycle environment of the host, and enabled for the host. For more information, see Changing the repository sets status for a host in Satellite in Managing content . Procedure Log in to the host as the root user. Install the Puppet agent package. On hosts running Red Hat Enterprise Linux 8 and above: On hosts running Red Hat Enterprise Linux 7 and below: Add the Puppet agent to PATH in your current shell using the following script: Configure the Puppet agent. 
Set the environment parameter to the name of the Puppet environment to which the host belongs: Start the Puppet agent service: Create a certificate for the host: In the Satellite web UI, navigate to Infrastructure > Capsules . From the list in the Actions column for the required Capsule Server, select Certificates . Click Sign to the right of the required host to sign the SSL certificate for the Puppet agent. On the host, run the Puppet agent again: 1.7. Performing configuration management After you deploy Puppet agent on a host, you can start performing configuration management with Puppet. This involves the following high-level steps: Managing Puppet modules on the Puppet server, that is installing and updating them. Importing Puppet classes and environments from Puppet modules into Satellite. Optional: Creating config groups from Puppet classes. Configuring overrides of Smart Class parameters on various levels. Assigning Puppet classes or config groups to host groups or individual hosts. Configuring intervals for runs of the Puppet agent on hosts and for configuration enforcement runs of the Puppet server. Monitoring configuration management using reports in the Satellite web UI. For more information, see Monitoring Resources in Administering Red Hat Satellite . Configuring email notifications. For more information, see Configuring Email Notification Preferences in Administering Red Hat Satellite . After assigning Puppet classes or config groups, Satellite runs configuration management automatically in the configured intervals to enforce Puppet configuration on your hosts, or you can initiate it manually on demand with the Run Puppet Once feature. For more information, see Section 9.1, "Running Puppet once using SSH" . 1.8. Disabling Puppet integration with Satellite To discontinue using Puppet in your Satellite, follow this procedure. Note that the command without the --remove-all-data argument removes all Puppet-related data in Satellite database. With the --remove-all-data argument, the command additionally removes Puppet server data files, including Puppet environments. Warning If you disable Puppet with the --remove-all-data argument, you will not be able to re-enable Puppet afterwards. This is a known issue, see the Bug 2087067 . Prerequisites Puppet is enabled on Satellite. Procedure If you have used Puppet server on any Capsules, disable Puppet server on all Capsules: Disable Puppet server on Satellite Server:
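As a rough illustration of how the host parameters described in Section 1.4 and Section 1.5 map onto the managed host, the following is a minimal sketch of the [agent] section that ends up in /etc/puppetlabs/puppet/puppet.conf. The server names and the environment name are hypothetical placeholders, not values shipped with Satellite:
[agent]
server = puppet.example.com
ca_server = puppet-ca.example.com
environment = production
After the host certificate is signed, you can run the agent once in the foreground to confirm that a catalog is applied and a report is submitted to Satellite:
puppet agent --test
A successful run ends with a message similar to "Applied catalog in N seconds"; a certificate error at this point usually means the signing step was skipped.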
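If you prefer Hammer to the web UI for creating the global parameters used in Section 1.4 and Section 1.5, a sketch along the following lines sets them from the command line. The parameter names are the ones documented above; the hostnames and environment are hypothetical examples, and you might still need to mark enable-puppet7 as a boolean parameter in the web UI if your Hammer version does not accept a type option:
hammer global-parameter set --name enable-puppet7 --value true
hammer global-parameter set --name puppet_server --value puppet.example.com
hammer global-parameter set --name puppet_ca_server --value puppet-ca.example.com
hammer global-parameter set --name puppet_environment --value production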
[ "satellite-installer --enable-foreman-cli-puppet --enable-foreman-plugin-puppet --enable-puppet --foreman-proxy-puppet true --foreman-proxy-puppetca true --puppet-server true", "satellite-installer --enable-puppet --foreman-proxy-puppet true --foreman-proxy-puppetca true --puppet-server true", "<%= snippet 'puppet_setup' %>", "dnf install puppet-agent", "yum install puppet-agent", ". /etc/profile.d/puppet-agent.sh", "puppet config set server satellite.example.com --section agent puppet config set environment My_Puppet_Environment --section agent", "puppet resource service puppet ensure=running enable=true", "puppet ssl bootstrap", "puppet ssl bootstrap", "satellite-maintain plugin purge-puppet --remove-all-data", "satellite-maintain plugin purge-puppet --remove-all-data" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_configurations_by_using_puppet_integration/introducing-configuration-management-by-using-puppet
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/composing_installing_and_managing_rhel_for_edge_images/proc_providing-feedback-on-red-hat-documentation_composing-installing-managing-rhel-for-edge-images
Preface
Preface Once you have deployed a Red Hat Quay registry, there are many ways you can further configure and manage that deployment. Topics covered here include: Advanced Red Hat Quay configuration Setting notifications to alert you of a new Red Hat Quay release Securing connections with SSL/TLS certificates Directing action logs storage to Elasticsearch Configuring image security scanning with Clair Scan pod images with the Container Security Operator Integrate Red Hat Quay into OpenShift Container Platform with the Quay Bridge Operator Mirroring images with repository mirroring Sharing Red Hat Quay images with a BitTorrent service Authenticating users with LDAP Enabling Quay for Prometheus and Grafana metrics Setting up geo-replication Troubleshooting Red Hat Quay For a complete list of Red Hat Quay configuration fields, see the Configure Red Hat Quay page.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/manage_red_hat_quay/pr01
Chapter 5. Rebooting the overcloud
Chapter 5. Rebooting the overcloud After a minor Red Hat OpenStack version update, reboot your overcloud. The reboot refreshes the nodes with any associated kernel, system-level, and container component updates. These updates may provide performance and security benefits. Plan downtime to perform the following reboot procedures. 5.1. Rebooting Controller and composable nodes Complete the following steps to reboot Controller nodes and standalone nodes based on composable roles, excluding Compute nodes and Ceph Storage nodes. Procedure Log in to the node that you want to reboot. Optional: If the node uses Pacemaker resources, stop the cluster: Reboot the node: Wait until the node boots. Check the services. For example: If the node uses Pacemaker services, check that the node has rejoined the cluster: If the node uses Systemd services, check that all services are enabled: If the node uses containerized services, check that all containers on the node are active: 5.2. Rebooting a Ceph Storage (OSD) cluster Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes. Procedure Log in to a Ceph MON or Controller node and disable Ceph Storage cluster rebalancing temporarily: Select the first Ceph Storage node that you want to reboot and log in to the node. Reboot the node: Wait until the node boots. Log in to the node and check the cluster status: Check that the pgmap reports all pgs as normal ( active+clean ). Log out of the node, reboot the node, and check its status. Repeat this process until you have rebooted all Ceph storage nodes. When complete, log into a Ceph MON or Controller node and re-enable cluster rebalancing: Perform a final status check to verify that the cluster reports HEALTH_OK : 5.3. Rebooting Compute nodes Complete the following steps to reboot Compute nodes. To ensure minimal downtime of instances in your Red Hat OpenStack Platform environment, this procedure also includes instructions about migrating instances from the Compute node that you want to reboot. This involves the following workflow: Decide whether to migrate instances to another Compute node before rebooting the node. Select and disable the Compute node you want to reboot so that it does not provision new instances. Migrate the instances to another Compute node. Reboot the empty Compute node. Enable the empty Compute node. Prerequisites Before you reboot the Compute node, you must decide whether to migrate instances to another Compute node while the node is rebooting. If for some reason you cannot or do not want to migrate the instances, you can set the following core template parameters to control the state of the instances after the Compute node reboots: NovaResumeGuestsStateOnHostBoot Determines whether to return instances to the same state on the Compute node after reboot. When set to False , the instances will remain down and you must start them manually. Default value is: False NovaResumeGuestsShutdownTimeout Number of seconds to wait for an instance to shut down before rebooting. It is not recommended to set this value to 0 . Default value is: 300 For more information about overcloud parameters and their usage, see Overcloud Parameters . Procedure Log in to the undercloud as the stack user. List all Compute nodes and their UUIDs: Identify the UUID of the Compute node that you want to reboot.
From the undercloud, select a Compute node. Disable the node: List all instances on the Compute node: If you decide not to migrate instances, skip to this step . If you decide to migrate the instances to another Compute node, use one of the following commands: Migrate the instance to a different host: Let nova-scheduler automatically select the target host: Live migrate all instances at once: Note The nova command might cause some deprecation warnings, which are safe to ignore. Wait until migration completes. Confirm that the migration was successful: Continue to migrate instances until none remain on the chosen Compute node. Log in to the Compute node and reboot the node: Wait until the node boots. Re-enable the Compute node: Check that the Compute node is enabled: 5.4. Rebooting HCI Compute nodes The following procedure reboots Compute hyperconverged infrastructure (HCI) nodes. Procedure Log in to a Ceph MON or Controller node and disable Ceph Storage cluster rebalancing temporarily: Log in to the undercloud as the stack user. List all Compute nodes and their UUIDs: Identify the UUID of the Compute node you aim to reboot. From the undercloud, select a Compute node and disable it: List all instances on the Compute node: Use one of the following commands to migrate your instances: Migrate the instance to a specific host of your choice: Let nova-scheduler automatically select the target host: Live migrate all instances at once: Note The nova command might cause some deprecation warnings, which are safe to ignore. Wait until the migration completes. Confirm that the migration was successful: Continue migrating instances until none remain on the chosen Compute node. Log in to a Ceph MON or a Controller node and check the cluster status: Check that the pgmap reports all pgs as normal ( active+clean ). Reboot the Compute HCI node: Wait until the node boots. Enable the Compute node again: Verify that the Compute node is enabled: Log out of the node, reboot the node, and check its status. Repeat this process until you have rebooted all Ceph storage nodes. When complete, log in to a Ceph MON or Controller node and enable cluster rebalancing again: Perform a final status check to verify the cluster reports HEALTH_OK :
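As a concrete sketch of the NovaResumeGuestsStateOnHostBoot and NovaResumeGuestsShutdownTimeout parameters described in Section 5.3, you can set them in a custom environment file and pass that file to your overcloud deployment. The file name is a hypothetical example:
# nova-reboot-behavior.yaml
parameter_defaults:
  NovaResumeGuestsStateOnHostBoot: true
  NovaResumeGuestsShutdownTimeout: 300
Include the file with the -e option the next time you run your usual openstack overcloud deploy command; with these values, instances are shut down gracefully for up to 300 seconds and resumed automatically after the Compute node reboots.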
[ "[heat-admin@overcloud-controller-0 ~]USD sudo pcs cluster stop", "[heat-admin@overcloud-controller-0 ~]USD sudo reboot", "[heat-admin@overcloud-controller-0 ~]USD sudo pcs status", "[heat-admin@overcloud-controller-0 ~]USD sudo systemctl status", "[heat-admin@overcloud-controller-0 ~]USD sudo podman ps", "sudo podman exec -it ceph-mon-controller-0 ceph osd set noout sudo podman exec -it ceph-mon-controller-0 ceph osd set norebalance", "sudo reboot", "sudo podman exec -it ceph-mon-controller-0 ceph status", "sudo podman exec -it ceph-mon-controller-0 ceph osd unset noout sudo podman exec -it ceph-mon-controller-0 ceph osd unset norebalance", "sudo podman exec -it ceph-mon-controller-0 ceph status", "source ~/stackrc (undercloud) USD openstack server list --name compute", "source ~/overcloudrc (overcloud) USD openstack compute service list (overcloud) USD openstack compute service set [hostname] nova-compute --disable", "(overcloud) USD openstack server list --host [hostname] --all-projects", "(overcloud) USD openstack server migrate [instance-id] --live [target-host]--wait", "(overcloud) USD nova live-migration [instance-id]", "nova host-evacuate-live [hostname]", "(overcloud) USD openstack server list --host [hostname] --all-projects", "[heat-admin@overcloud-compute-0 ~]USD sudo reboot", "source ~/overcloudrc (overcloud) USD openstack compute service set [hostname] nova-compute --enable", "(overcloud) USD openstack compute service list", "sudo podman exec -it ceph-mon-controller-0 ceph osd set noout sudo podman exec -it ceph-mon-controller-0 ceph osd set norebalance", "source ~/stackrc (undercloud) USD openstack server list --name compute", "source ~/overcloudrc (overcloud) USD openstack compute service list (overcloud) USD openstack compute service set [hostname] nova-compute --disable", "(overcloud) USD openstack server list --host [hostname] --all-projects", "(overcloud) USD openstack server migrate [instance-id] --live [target-host]--wait", "(overcloud) USD nova live-migration [instance-id]", "nova host-evacuate-live [hostname]", "(overcloud) USD openstack server list --host [hostname] --all-projects", "sudo podman exec USDCEPH_MON_CONTAINER ceph -s", "sudo reboot", "source ~/overcloudrc (overcloud) USD openstack compute service set [hostname] nova-compute --enable", "(overcloud) USD openstack compute service list", "sudo podman exec USDCEPH_MON_CONTAINER ceph osd unset noout sudo podman exec USDCEPH_MON_CONTAINER ceph osd unset norebalance", "sudo podman exec USDCEPH_MON_CONTAINER ceph status" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/keeping_red_hat_openstack_platform_updated/rebooting-the-overcloud
Preface
Preface The Red Hat build of Cryostat is a container-native implementation of JDK Flight Recorder (JFR) that you can use to securely monitor the Java Virtual Machine (JVM) performance in workloads that run on an OpenShift Container Platform cluster. You can use Cryostat 2.4 to start, stop, retrieve, archive, import, and export JFR data for JVMs inside your containerized applications by using a web console or an HTTP API. Depending on your use case, you can store and analyze your recordings directly on your Red Hat OpenShift cluster by using the built-in tools that Cryostat provides or you can export recordings to an external monitoring application to perform a more in-depth analysis of your recorded data. Important Red Hat build of Cryostat is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/configuring_sidecar_containers_on_cryostat/preface-cryostat
Chapter 3. Bare metal configuration
Chapter 3. Bare metal configuration When deploying OpenShift Container Platform on bare metal hosts, there are times when you need to make changes to the host either before or after provisioning. This can include inspecting the host's hardware, firmware, and firmware details. It can also include formatting disks or changing modifiable firmware settings. 3.1. About the Bare Metal Operator Use the Bare Metal Operator (BMO) to provision, manage, and inspect bare-metal hosts in your cluster. The BMO uses three resources to complete these tasks: BareMetalHost HostFirmwareSettings FirmwareSchema The BMO maintains an inventory of the physical hosts in the cluster by mapping each bare-metal host to an instance of the BareMetalHost custom resource definition. Each BareMetalHost resource features hardware, software, and firmware details. The BMO continually inspects the bare-metal hosts in the cluster to ensure each BareMetalHost resource accurately details the components of the corresponding host. The BMO also uses the HostFirmwareSettings resource and the FirmwareSchema resource to detail firmware specifications for the bare-metal host. The BMO interfaces with bare-metal hosts in the cluster by using the Ironic API service. The Ironic service uses the Baseboard Management Controller (BMC) on the host to interface with the machine. Some common tasks you can complete by using the BMO include the following: Provision bare-metal hosts to the cluster with a specific image Format a host's disk contents before provisioning or after deprovisioning Turn on or off a host Change firmware settings View the host's hardware details 3.1.1. Bare Metal Operator architecture The Bare Metal Operator (BMO) uses three resources to provision, manage, and inspect bare-metal hosts in your cluster. The following diagram illustrates the architecture of these resources: BareMetalHost The BareMetalHost resource defines a physical host and its properties. When you provision a bare-metal host to the cluster, you must define a BareMetalHost resource for that host. For ongoing management of the host, you can inspect the information in the BareMetalHost or update this information. The BareMetalHost resource features provisioning information such as the following: Deployment specifications such as the operating system boot image or the custom RAM disk Provisioning state Baseboard Management Controller (BMC) address Desired power state The BareMetalHost resource features hardware information such as the following: Number of CPUs MAC address of a NIC Size of the host's storage device Current power state HostFirmwareSettings You can use the HostFirmwareSettings resource to retrieve and manage the firmware settings for a host. When a host moves to the Available state, the Ironic service reads the host's firmware settings and creates the HostFirmwareSettings resource. There is a one-to-one mapping between the BareMetalHost resource and the HostFirmwareSettings resource. You can use the HostFirmwareSettings resource to inspect the firmware specifications for a host or to update a host's firmware specifications. Note You must adhere to the schema specific to the vendor firmware when you edit the spec field of the HostFirmwareSettings resource. This schema is defined in the read-only FirmwareSchema resource. FirmwareSchema Firmware settings vary among hardware vendors and host models. A FirmwareSchema resource is a read-only resource that contains the types and limits for each firmware setting on each host model. 
The data comes directly from the BMC by using the Ironic service. The FirmwareSchema resource enables you to identify valid values you can specify in the spec field of the HostFirmwareSettings resource. A FirmwareSchema resource can apply to many BareMetalHost resources if the schema is the same. Additional resources Metal3 API service for provisioning bare-metal hosts Ironic API service for managing bare-metal infrastructure 3.2. About the BareMetalHost resource Metal 3 introduces the concept of the BareMetalHost resource, which defines a physical host and its properties. The BareMetalHost resource contains two sections: The BareMetalHost spec The BareMetalHost status 3.2.1. The BareMetalHost spec The spec section of the BareMetalHost resource defines the desired state of the host. Table 3.1. BareMetalHost spec Parameters Description automatedCleaningMode An interface to enable or disable automated cleaning during provisioning and de-provisioning. When set to disabled , it skips automated cleaning. When set to metadata , automated cleaning is enabled. The default setting is metadata . The bmc configuration setting contains the connection information for the baseboard management controller (BMC) on the host. The fields are: address : The URL for communicating with the host's BMC controller. credentialsName : A reference to a secret containing the username and password for the BMC. disableCertificateVerification : A boolean to skip certificate validation when set to true . bootMACAddress The MAC address of the NIC used for provisioning the host. bootMode The boot mode of the host. It defaults to UEFI , but it can also be set to legacy for BIOS boot, or UEFISecureBoot . consumerRef A reference to another resource that is using the host. It could be empty if another resource is not currently using the host. For example, a Machine resource might use the host when the machine-api is using the host. description A human-provided string to help identify the host. externallyProvisioned A boolean indicating whether the host provisioning and deprovisioning are managed externally. When set: Power status can still be managed using the online field. Hardware inventory will be monitored, but no provisioning or deprovisioning operations are performed on the host. firmware Contains information about the BIOS configuration of bare metal hosts. Currently, firmware is only supported by iRMC, iDRAC, iLO4 and iLO5 BMCs. The sub fields are: simultaneousMultithreadingEnabled : Allows a single physical processor core to appear as several logical processors. Valid settings are true or false . sriovEnabled : SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. Valid settings are true or false . virtualizationEnabled : Supports the virtualization of platform hardware. Valid settings are true or false . The image configuration setting holds the details for the image to be deployed on the host. Ironic requires the image fields. However, when the externallyProvisioned configuration setting is set to true and the external management doesn't require power control, the fields can be empty. The fields are: url : The URL of an image to deploy to the host. checksum : The actual checksum or a URL to a file containing the checksum for the image at image.url . checksumType : You can specify checksum algorithms. Currently image.checksumType only supports md5 , sha256 , and sha512 . The default checksum type is md5 . 
format : This is the disk format of the image. It can be one of raw , qcow2 , vdi , vmdk , live-iso or be left unset. Setting it to raw enables raw image streaming in the Ironic agent for that image. Setting it to live-iso enables iso images to live boot without deploying to disk, and it ignores the checksum fields. networkData A reference to the secret containing the network configuration data and its namespace, so that it can be attached to the host before the host boots to set up the network. online A boolean indicating whether the host should be powered on ( true ) or off ( false ). Changing this value will trigger a change in the power state of the physical host. (Optional) Contains the information about the RAID configuration for bare metal hosts. If not specified, it retains the current configuration. Note OpenShift Container Platform 4.13 supports hardware RAID for BMCs using the iRMC protocol only. OpenShift Container Platform 4.13 does not support software RAID. See the following configuration settings: hardwareRAIDVolumes : Contains the list of logical drives for hardware RAID, and defines the desired volume configuration in the hardware RAID. If you don't specify rootDeviceHints , the first volume is the root volume. The sub-fields are: level : The RAID level for the logical drive. The following levels are supported: 0 , 1 , 2 , 5 , 6 , 1+0 , 5+0 , 6+0 . name : The name of the volume as a string. It should be unique within the server. If not specified, the volume name will be auto-generated. numberOfPhysicalDisks : The number of physical drives as an integer to use for the logical drive. Defaults to the minimum number of disk drives required for the particular RAID level. physicalDisks : The list of names of physical disk drives as a string. This is an optional field. If specified, the controller field must be specified too. controller : (Optional) The name of the RAID controller as a string to use in the hardware RAID volume. rotational : If set to true , it will only select rotational disk drives. If set to false , it will only select solid-state and NVMe drives. If not set, it selects any drive types, which is the default behavior. sizeGibibytes : The size of the logical drive as an integer to create in GiB. If unspecified or set to 0 , it will use the maximum capacity of physical drive for the logical drive. softwareRAIDVolumes : OpenShift Container Platform 4.13 does not support software RAID. The following information is for reference only. This configuration contains the list of logical disks for software RAID. If you don't specify rootDeviceHints , the first volume is the root volume. If you set HardwareRAIDVolumes , this item will be invalid. Software RAIDs will always be deleted. The number of created software RAID devices must be 1 or 2 . If there is only one software RAID device, it must be RAID-1 . If there are two RAID devices, the first device must be RAID-1 , while the RAID level for the second device can be 0 , 1 , or 1+0 . The first RAID device will be the deployment device. Therefore, enforcing RAID-1 reduces the risk of a non-booting node in case of a device failure. The softwareRAIDVolume field defines the desired configuration of the volume in the software RAID. The sub-fields are: level : The RAID level for the logical drive. The following levels are supported: 0 , 1 , 1+0 . physicalDisks : A list of device hints. The number of items should be greater than or equal to 2 .
sizeGibibytes : The size of the logical disk drive as an integer to be created in GiB. If unspecified or set to 0 , it will use the maximum capacity of physical drive for logical drive. You can set the hardwareRAIDVolume as an empty slice to clear the hardware RAID configuration. For example: If you receive an error message indicating that the driver does not support RAID, set the raid , hardwareRAIDVolumes or softwareRAIDVolumes to nil. You might need to ensure the host has a RAID controller. The rootDeviceHints parameter enables provisioning of the RHCOS image to a particular device. It examines the devices in the order it discovers them, and compares the discovered values with the hint values. It uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints to get selected. The fields are: deviceName : A string containing a Linux device name like /dev/vda . The hint must match the actual value exactly. hctl : A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model : A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor : A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber : A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes : An integer representing the minimum size of the device in gigabytes. wwn : A string containing the unique storage identifier. The hint must match the actual value exactly. wwnWithExtension : A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. wwnVendorExtension : A string containing the unique vendor storage identifier. The hint must match the actual value exactly. rotational : A boolean indicating whether the device should be a rotating disk (true) or not (false). 3.2.2. The BareMetalHost status The BareMetalHost status represents the host's current state, and includes tested credentials, current hardware details, and other information. Table 3.2. BareMetalHost status Parameters Description goodCredentials A reference to the secret and its namespace holding the last set of baseboard management controller (BMC) credentials the system was able to validate as working. errorMessage Details of the last error reported by the provisioning backend, if any. errorType Indicates the class of problem that has caused the host to enter an error state. The error types are: provisioned registration error : Occurs when the controller is unable to re-register an already provisioned host. registration error : Occurs when the controller is unable to connect to the host's baseboard management controller. inspection error : Occurs when an attempt to obtain hardware details from the host fails. preparation error : Occurs when cleaning fails. provisioning error : Occurs when the controller fails to provision or deprovision the host. power management error : Occurs when the controller is unable to modify the power state of the host. detach error : Occurs when the controller is unable to detach the host from the provisioner. The hardware.cpu field contains details of the CPU(s) in the system. The fields include: arch : The architecture of the CPU. model : The CPU model as a string. clockMegahertz : The speed in MHz of the CPU. flags : The list of CPU flags.
For example, 'mmx','sse','sse2','vmx' etc. count : The number of CPUs available in the system. Contains BIOS firmware information. For example, the hardware vendor and version. The hardware.nics field contains a list of network interfaces for the host. The fields include: ip : The IP address of the NIC, if one was assigned when the discovery agent ran. name : A string identifying the network device. For example, nic-1 . mac : The MAC address of the NIC. speedGbps : The speed of the device in Gbps. vlans : A list holding all the VLANs available for this NIC. vlanId : The untagged VLAN ID. pxe : Whether the NIC is able to boot using PXE. The host's amount of memory in Mebibytes (MiB). The hardware.storage field contains a list of storage devices available to the host. The fields include: name : A string identifying the storage device. For example, disk 1 (boot) . rotational : Indicates whether the disk is rotational, and returns either true or false . sizeBytes : The size of the storage device. serialNumber : The device's serial number. Contains information about the host's manufacturer , the productName , and the serialNumber . lastUpdated The timestamp of the last time the status of the host was updated. operationalStatus The status of the server. The status is one of the following: OK : Indicates all the details for the host are known, correctly configured, working, and manageable. discovered : Implies some of the host's details are either not working correctly or missing. For example, the BMC address is known but the login credentials are not. error : Indicates the system found some sort of irrecoverable error. Refer to the errorMessage field in the status section for more details. delayed : Indicates that provisioning is delayed to limit simultaneous provisioning of multiple hosts. detached : Indicates the host is marked unmanaged . poweredOn Boolean indicating whether the host is powered on. The provisioning field contains values related to deploying an image to the host. The sub-fields include: state : The current state of any ongoing provisioning operation. The states include: <empty string> : There is no provisioning happening at the moment. unmanaged : There is insufficient information available to register the host. registering : The agent is checking the host's BMC details. match profile : The agent is comparing the discovered hardware details on the host against known profiles. available : The host is available for provisioning. This state was previously known as ready . preparing : The existing configuration will be removed, and the new configuration will be set on the host. provisioning : The provisioner is writing an image to the host's storage. provisioned : The provisioner wrote an image to the host's storage. externally provisioned : Metal 3 does not manage the image on the host. deprovisioning : The provisioner is wiping the image from the host's storage. inspecting : The agent is collecting hardware details for the host. deleting : The agent is deleting the host from the cluster. id : The unique identifier for the service in the underlying provisioning tool. image : The image most recently provisioned to the host. raid : The list of hardware or software RAID volumes recently set. firmware : The BIOS configuration for the bare metal server. rootDeviceHints : The root device selection instructions used for the most recent provisioning operation.
triedCredentials A reference to the secret and its namespace holding the last set of BMC credentials that were sent to the provisioning backend. 3.3. Getting the BareMetalHost resource The BareMetalHost resource contains the properties of a physical host. You must get the BareMetalHost resource for a physical host to review its properties. Procedure Get the list of BareMetalHost resources: USD oc get bmh -n openshift-machine-api -o yaml Note You can use baremetalhost as the long form of bmh with oc get command. Get the list of hosts: USD oc get bmh -n openshift-machine-api Get the BareMetalHost resource for a specific host: USD oc get bmh <host_name> -n openshift-machine-api -o yaml Where <host_name> is the name of the host. Example output apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: creationTimestamp: "2022-06-16T10:48:33Z" finalizers: - baremetalhost.metal3.io generation: 2 name: openshift-worker-0 namespace: openshift-machine-api resourceVersion: "30099" uid: 1513ae9b-e092-409d-be1b-ad08edeb1271 spec: automatedCleaningMode: metadata bmc: address: redfish://10.46.61.19:443/redfish/v1/Systems/1 credentialsName: openshift-worker-0-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:c7:f7:b0 bootMode: UEFI consumerRef: apiVersion: machine.openshift.io/v1beta1 kind: Machine name: ocp-edge-958fk-worker-0-nrfcg namespace: openshift-machine-api customDeploy: method: install_coreos online: true rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> userData: name: worker-user-data-managed namespace: openshift-machine-api status: errorCount: 0 errorMessage: "" goodCredentials: credentials: name: openshift-worker-0-bmc-secret namespace: openshift-machine-api credentialsVersion: "16120" hardware: cpu: arch: x86_64 clockMegahertz: 2300 count: 64 flags: - 3dnowprefetch - abm - acpi - adx - aes model: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz firmware: bios: date: 10/26/2020 vendor: HPE version: U30 hostname: openshift-worker-0 nics: - mac: 48:df:37:c7:f7:b3 model: 0x8086 0x1572 name: ens1f3 ramMebibytes: 262144 storage: - hctl: "0:0:0:0" model: VK000960GWTTB name: /dev/disk/by-id/scsi-<serial_number> sizeBytes: 960197124096 type: SSD vendor: ATA systemVendor: manufacturer: HPE productName: ProLiant DL380 Gen10 (868703-B21) serialNumber: CZ200606M3 lastUpdated: "2022-06-16T11:41:42Z" operationalStatus: OK poweredOn: true provisioning: ID: 217baa14-cfcf-4196-b764-744e184a3413 bootMode: UEFI customDeploy: method: install_coreos image: url: "" raid: hardwareRAIDVolumes: null softwareRAIDVolumes: [] rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> state: provisioned triedCredentials: credentials: name: openshift-worker-0-bmc-secret namespace: openshift-machine-api credentialsVersion: "16120" 3.4. About the HostFirmwareSettings resource You can use the HostFirmwareSettings resource to retrieve and manage the BIOS settings for a host. When a host moves to the Available state, Ironic reads the host's BIOS settings and creates the HostFirmwareSettings resource. The resource contains the complete BIOS configuration returned from the baseboard management controller (BMC). Whereas, the firmware field in the BareMetalHost resource returns three vendor-independent fields, the HostFirmwareSettings resource typically comprises many BIOS settings of vendor-specific fields per host. The HostFirmwareSettings resource contains two sections: The HostFirmwareSettings spec. The HostFirmwareSettings status. 3.4.1. 
The HostFirmwareSettings spec The spec section of the HostFirmwareSettings resource defines the desired state of the host's BIOS, and it is empty by default. Ironic uses the settings in the spec.settings section to update the baseboard management controller (BMC) when the host is in the Preparing state. Use the FirmwareSchema resource to ensure that you do not send invalid name/value pairs to hosts. See "About the FirmwareSchema resource" for additional details. Example spec: settings: ProcTurboMode: Disabled 1 1 In the foregoing example, the spec.settings section contains a name/value pair that will set the ProcTurboMode BIOS setting to Disabled . Note Integer parameters listed in the status section appear as strings. For example, "1" . When setting integers in the spec.settings section, the values should be set as integers without quotes. For example, 1 . 3.4.2. The HostFirmwareSettings status The status represents the current state of the host's BIOS. Table 3.3. HostFirmwareSettings Parameters Description The conditions field contains a list of state changes. The sub-fields include: lastTransitionTime : The last time the state changed. message : A description of the state change. observedGeneration : The current generation of the status . If metadata.generation and this field are not the same, the status.conditions might be out of date. reason : The reason for the state change. status : The status of the state change. The status can be True , False or Unknown . type : The type of state change. The types are Valid and ChangeDetected . The FirmwareSchema for the firmware settings. The fields include: name : The name or unique identifier referencing the schema. namespace : The namespace where the schema is stored. lastUpdated : The last time the resource was updated. The settings field contains a list of name/value pairs of a host's current BIOS settings. 3.5. Getting the HostFirmwareSettings resource The HostFirmwareSettings resource contains the vendor-specific BIOS properties of a physical host. You must get the HostFirmwareSettings resource for a physical host to review its BIOS properties. Procedure Get the detailed list of HostFirmwareSettings resources: USD oc get hfs -n openshift-machine-api -o yaml Note You can use hostfirmwaresettings as the long form of hfs with the oc get command. Get the list of HostFirmwareSettings resources: USD oc get hfs -n openshift-machine-api Get the HostFirmwareSettings resource for a particular host USD oc get hfs <host_name> -n openshift-machine-api -o yaml Where <host_name> is the name of the host. 3.6. Editing the HostFirmwareSettings resource You can edit the HostFirmwareSettings of provisioned hosts. Important You can only edit hosts when they are in the provisioned state, excluding read-only values. You cannot edit hosts in the externally provisioned state. Procedure Get the list of HostFirmwareSettings resources: USD oc get hfs -n openshift-machine-api Edit a host's HostFirmwareSettings resource: USD oc edit hfs <host_name> -n openshift-machine-api Where <host_name> is the name of a provisioned host. The HostFirmwareSettings resource will open in the default editor for your terminal. Add name/value pairs to the spec.settings section: Example spec: settings: name: value 1 1 Use the FirmwareSchema resource to identify the available settings for the host. You cannot set values that are read-only. Save the changes and exit the editor. 
Get the host's machine name: USD oc get bmh <host_name> -n openshift-machine name Where <host_name> is the name of the host. The machine name appears under the CONSUMER field. Annotate the machine to delete it from the machineset: USD oc annotate machine <machine_name> machine.openshift.io/delete-machine=true -n openshift-machine-api Where <machine_name> is the name of the machine to delete. Get a list of nodes and count the number of worker nodes: USD oc get nodes Get the machineset: USD oc get machinesets -n openshift-machine-api Scale the machineset: USD oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n-1> Where <machineset_name> is the name of the machineset and <n-1> is the decremented number of worker nodes. When the host enters the Available state, scale up the machineset to make the HostFirmwareSettings resource changes take effect: USD oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n> Where <machineset_name> is the name of the machineset and <n> is the number of worker nodes. 3.7. Verifying the HostFirmware Settings resource is valid When the user edits the spec.settings section to make a change to the HostFirmwareSetting (HFS) resource, the Bare Metal Operator (BMO) validates the change against the FimwareSchema resource, which is a read-only resource. If the setting is invalid, the BMO will set the Type value of the status.Condition setting to False and also generate an event and store it in the HFS resource. Use the following procedure to verify that the resource is valid. Procedure Get a list of HostFirmwareSetting resources: USD oc get hfs -n openshift-machine-api Verify that the HostFirmwareSettings resource for a particular host is valid: USD oc describe hfs <host_name> -n openshift-machine-api Where <host_name> is the name of the host. Example output Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ValidationFailed 2m49s metal3-hostfirmwaresettings-controller Invalid BIOS setting: Setting ProcTurboMode is invalid, unknown enumeration value - Foo Important If the response returns ValidationFailed , there is an error in the resource configuration and you must update the values to conform to the FirmwareSchema resource. 3.8. About the FirmwareSchema resource BIOS settings vary among hardware vendors and host models. A FirmwareSchema resource is a read-only resource that contains the types and limits for each BIOS setting on each host model. The data comes directly from the BMC through Ironic. The FirmwareSchema enables you to identify valid values you can specify in the spec field of the HostFirmwareSettings resource. The FirmwareSchema resource has a unique identifier derived from its settings and limits. Identical host models use the same FirmwareSchema identifier. It is likely that multiple instances of HostFirmwareSettings use the same FirmwareSchema . Table 3.4. FirmwareSchema specification Parameters Description The spec is a simple map consisting of the BIOS setting name and the limits of the setting. The fields include: attribute_type : The type of setting. The supported types are: Enumeration Integer String Boolean allowable_values : A list of allowable values when the attribute_type is Enumeration . lower_bound : The lowest allowed value when attribute_type is Integer . upper_bound : The highest allowed value when attribute_type is Integer . min_length : The shortest string length that the value can have when attribute_type is String . 
max_length : The longest string length that the value can have when attribute_type is String . read_only : The setting is read only and cannot be modified. unique : The setting is specific to this host. 3.9. Getting the FirmwareSchema resource Each host model from each vendor has different BIOS settings. When editing the HostFirmwareSettings resource's spec section, the name/value pairs you set must conform to that host's firmware schema. To ensure you are setting valid name/value pairs, get the FirmwareSchema for the host and review it. Procedure To get a list of FirmwareSchema resource instances, execute the following: USD oc get firmwareschema -n openshift-machine-api To get a particular FirmwareSchema instance, execute: USD oc get firmwareschema <instance_name> -n openshift-machine-api -o yaml Where <instance_name> is the name of the schema instance stated in the HostFirmwareSettings resource (see Table 3).
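To tie the BareMetalHost spec fields from Section 3.2 together, the following is a minimal sketch of a BMC credentials Secret and a BareMetalHost resource that enrolls a single worker. All names, addresses, credentials, and hints are hypothetical placeholders; check the field descriptions above and your hardware documentation before using anything like this:
apiVersion: v1
kind: Secret
metadata:
  name: worker-1-bmc-secret
  namespace: openshift-machine-api
type: Opaque
stringData:
  username: admin
  password: changeme
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-1
  namespace: openshift-machine-api
spec:
  online: true
  bootMode: UEFI
  bootMACAddress: 00:11:22:33:44:55
  bmc:
    address: redfish://192.0.2.10:443/redfish/v1/Systems/1
    credentialsName: worker-1-bmc-secret
    disableCertificateVerification: true
  rootDeviceHints:
    deviceName: /dev/sda
Apply it with oc apply -f and watch the provisioning state with oc get bmh -n openshift-machine-api until the host reaches the available state described in Section 3.2.2.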
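As a non-interactive alternative to the oc edit step in Section 3.6, you can stage a single firmware setting change with a patch. This is a sketch only: the ProcTurboMode name and Disabled value are taken from the example above and must exist in the host's FirmwareSchema resource (Section 3.8), and read-only settings cannot be changed:
oc patch hostfirmwaresettings <host_name> -n openshift-machine-api \
  --type merge -p '{"spec":{"settings":{"ProcTurboMode":"Disabled"}}}'
The change remains in spec.settings until the host passes through the Preparing state again, which is why Section 3.6 scales the machine set down and back up after editing the resource.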
[ "bmc: address: credentialsName: disableCertificateVerification:", "image: url: checksum: checksumType: format:", "raid: hardwareRAIDVolumes: softwareRAIDVolumes:", "spec: raid: hardwareRAIDVolume: []", "rootDeviceHints: deviceName: hctl: model: vendor: serialNumber: minSizeGigabytes: wwn: wwnWithExtension: wwnVendorExtension: rotational:", "hardware: cpu arch: model: clockMegahertz: flags: count:", "hardware: firmware:", "hardware: nics: - ip: name: mac: speedGbps: vlans: vlanId: pxe:", "hardware: ramMebibytes:", "hardware: storage: - name: rotational: sizeBytes: serialNumber:", "hardware: systemVendor: manufacturer: productName: serialNumber:", "provisioning: state: id: image: raid: firmware: rootDeviceHints:", "oc get bmh -n openshift-machine-api -o yaml", "oc get bmh -n openshift-machine-api", "oc get bmh <host_name> -n openshift-machine-api -o yaml", "apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: creationTimestamp: \"2022-06-16T10:48:33Z\" finalizers: - baremetalhost.metal3.io generation: 2 name: openshift-worker-0 namespace: openshift-machine-api resourceVersion: \"30099\" uid: 1513ae9b-e092-409d-be1b-ad08edeb1271 spec: automatedCleaningMode: metadata bmc: address: redfish://10.46.61.19:443/redfish/v1/Systems/1 credentialsName: openshift-worker-0-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:c7:f7:b0 bootMode: UEFI consumerRef: apiVersion: machine.openshift.io/v1beta1 kind: Machine name: ocp-edge-958fk-worker-0-nrfcg namespace: openshift-machine-api customDeploy: method: install_coreos online: true rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> userData: name: worker-user-data-managed namespace: openshift-machine-api status: errorCount: 0 errorMessage: \"\" goodCredentials: credentials: name: openshift-worker-0-bmc-secret namespace: openshift-machine-api credentialsVersion: \"16120\" hardware: cpu: arch: x86_64 clockMegahertz: 2300 count: 64 flags: - 3dnowprefetch - abm - acpi - adx - aes model: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz firmware: bios: date: 10/26/2020 vendor: HPE version: U30 hostname: openshift-worker-0 nics: - mac: 48:df:37:c7:f7:b3 model: 0x8086 0x1572 name: ens1f3 ramMebibytes: 262144 storage: - hctl: \"0:0:0:0\" model: VK000960GWTTB name: /dev/disk/by-id/scsi-<serial_number> sizeBytes: 960197124096 type: SSD vendor: ATA systemVendor: manufacturer: HPE productName: ProLiant DL380 Gen10 (868703-B21) serialNumber: CZ200606M3 lastUpdated: \"2022-06-16T11:41:42Z\" operationalStatus: OK poweredOn: true provisioning: ID: 217baa14-cfcf-4196-b764-744e184a3413 bootMode: UEFI customDeploy: method: install_coreos image: url: \"\" raid: hardwareRAIDVolumes: null softwareRAIDVolumes: [] rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> state: provisioned triedCredentials: credentials: name: openshift-worker-0-bmc-secret namespace: openshift-machine-api credentialsVersion: \"16120\"", "spec: settings: ProcTurboMode: Disabled 1", "status: conditions: - lastTransitionTime: message: observedGeneration: reason: status: type:", "status: schema: name: namespace: lastUpdated:", "status: settings:", "oc get hfs -n openshift-machine-api -o yaml", "oc get hfs -n openshift-machine-api", "oc get hfs <host_name> -n openshift-machine-api -o yaml", "oc get hfs -n openshift-machine-api", "oc edit hfs <host_name> -n openshift-machine-api", "spec: settings: name: value 1", "oc get bmh <host_name> -n openshift-machine name", "oc annotate machine <machine_name> machine.openshift.io/delete-machine=true -n 
openshift-machine-api", "oc get nodes", "oc get machinesets -n openshift-machine-api", "oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n-1>", "oc scale machineset <machineset_name> -n openshift-machine-api --replicas=<n>", "oc get hfs -n openshift-machine-api", "oc describe hfs <host_name> -n openshift-machine-api", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ValidationFailed 2m49s metal3-hostfirmwaresettings-controller Invalid BIOS setting: Setting ProcTurboMode is invalid, unknown enumeration value - Foo", "<BIOS_setting_name> attribute_type: allowable_values: lower_bound: upper_bound: min_length: max_length: read_only: unique:", "oc get firmwareschema -n openshift-machine-api", "oc get firmwareschema <instance_name> -n openshift-machine-api -o yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/post-installation_configuration/post-install-bare-metal-configuration
Chapter 1. Creating a cluster on AWS
Chapter 1. Creating a cluster on AWS You can deploy OpenShift Dedicated on Amazon Web Services (AWS) by using your own AWS account through the Customer Cloud Subscription (CCS) model or by using an AWS infrastructure account that is owned by Red Hat. 1.1. Prerequisites You reviewed the introduction to OpenShift Dedicated and the documentation on architecture concepts . You reviewed the OpenShift Dedicated cloud deployment options . 1.2. Creating a cluster on AWS By using the Customer Cloud Subscription (CCS) billing model, you can create an OpenShift Dedicated cluster in an existing Amazon Web Services (AWS) account that you own. You can also select the Red Hat cloud account infrastructure type to deploy OpenShift Dedicated in a cloud provider account that is owned by Red Hat. Complete the following prerequisites to use the CCS model to deploy and manage OpenShift Dedicated into your AWS account. Prerequisites You have configured your AWS account for use with OpenShift Dedicated. You have not deployed any services in your AWS account. You have configured the AWS account quotas and limits that are required to support the desired cluster size. You have an osdCcsAdmin AWS Identity and Access Management (IAM) user with the AdministratorAccess policy attached. You have set up a service control policy (SCP) in your AWS organization. For more information, see Minimum required service control policy (SCP) . Consider having Business Support or higher from AWS. If you are configuring a cluster-wide proxy, you have verified that the proxy is accessible from the VPC that the cluster is being installed into. The proxy must also be accessible from the private subnets of the VPC. Procedure Log in to OpenShift Cluster Manager . On the Overview page, select Create cluster in the Red Hat OpenShift Dedicated card. Under Billing model , configure the subscription type and infrastructure type: Select a subscription type. For information about OpenShift Dedicated subscription options, see Cluster subscriptions and registration in the OpenShift Cluster Manager documentation. Note The subscription types that are available to you depend on your OpenShift Dedicated subscriptions and resource quotas. For more information, contact your sales representative or Red Hat support. Select the Customer Cloud Subscription infrastructure type to deploy OpenShift Dedicated in an existing cloud provider account that you own or select Red Hat cloud account infrastructure type to deploy OpenShift Dedicated in a cloud provider account that is owned by Red Hat. Click . Select Run on Amazon Web Services . If you are provisioning your cluster in an AWS account, complete the following substeps: Review and complete the listed Prerequisites . Select the checkbox to acknowledge that you have read and completed all of the prerequisites. Provide your AWS account details: Enter your AWS account ID . Enter your AWS access key ID and AWS secret access key for your AWS IAM user account. Note Revoking these credentials in AWS results in a loss of access to any cluster created with these credentials. Optional: You can select Bypass AWS service control policy (SCP) checks to disable the SCP checks. Note Some AWS SCPs can cause the installation to fail, even if you have the required permissions. Disabling the SCP checks allows an installation to proceed. The SCP is still enforced even if the checks are bypassed. Click to validate your cloud provider account and go to the Cluster details page. 
On the Cluster details page, provide a name for your cluster and specify the cluster details: Add a Cluster name . Optional: Cluster creation generates a domain prefix as a subdomain for your provisioned cluster on openshiftapps.com . If the cluster name is less than or equal to 15 characters, that name is used for the domain prefix. If the cluster name is longer than 15 characters, the domain prefix is randomly generated to a 15 character string. To customize the subdomain, select the Create customize domain prefix checkbox, and enter your domain prefix name in the Domain prefix field. The domain prefix cannot be longer than 15 characters, must be unique within your organization, and cannot be changed after cluster creation. Select a cluster version from the Version drop-down menu. Select a cloud provider region from the Region drop-down menu. Select a Single zone or Multi-zone configuration. Leave Enable user workload monitoring selected to monitor your own projects in isolation from Red Hat Site Reliability Engineer (SRE) platform metrics. This option is enabled by default. Optional: Expand Advanced Encryption to make changes to encryption settings. Accept the default setting Use default KMS Keys to use your default AWS KMS key, or select Use Custom KMS keys to use a custom KMS key. With Use Custom KMS keys selected, enter the AWS Key Management Service (KMS) custom key Amazon Resource Name (ARN) ARN in the Key ARN field. The key is used for encrypting all control plane, infrastructure, worker node root volumes, and persistent volumes in your cluster. Optional: Select Enable FIPS cryptography if you require your cluster to be FIPS validated. Note If Enable FIPS cryptography is selected, Enable additional etcd encryption is enabled by default and cannot be disabled. You can select Enable additional etcd encryption without selecting Enable FIPS cryptography . Optional: Select Enable additional etcd encryption if you require etcd key value encryption. With this option, the etcd key values are encrypted, but the keys are not. This option is in addition to the control plane storage encryption that encrypts the etcd volumes in OpenShift Dedicated clusters by default. Note By enabling etcd encryption for the key values in etcd, you will incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Consider enabling etcd encryption only if you specifically require it for your use case. Click . On the Default machine pool page, select a Compute node instance type from the drop-down menu. Optional: Select the Enable autoscaling checkbox to enable autoscaling. Click Edit cluster autoscaling settings to make changes to the autoscaling settings. Once you have made your desired changes, click Close . Select a minimum and maximum node count. Node counts can be selected by engaging the available plus and minus signs or inputting the desired node count into the number input field. Select a Compute node count from the drop-down menu. Note After your cluster is created, you can change the number of compute nodes in your cluster, but you cannot change the compute node instance type in a machine pool. The number and types of nodes available to you depend on your OpenShift Dedicated subscription. Choose your preference for the Instance Metadata Service (IMDS) type, either using both IMDSv1 and IMDSv2 types or requiring your EC2 instances to use only IMDSv2. 
You can access instance metadata from a running instance in two ways: Instance Metadata Service Version 1 (IMDSv1) - a request/response method Instance Metadata Service Version 2 (IMDSv2) - a session-oriented method Important The Instance Metadata Service settings cannot be changed after your cluster is created. Note IMDSv2 uses session-oriented requests. With session-oriented requests, you create a session token that defines the session duration, which can range from a minimum of one second to a maximum of six hours. During the specified duration, you can use the same session token for subsequent requests. After the specified duration expires, you must create a new session token to use for future requests. For more information regarding IMDS, see Instance metadata and user data in the AWS documentation. Optional: Expand Edit node labels to add labels to your nodes. Click Add label to add more node labels and select . On the Network configuration page, select Public or Private to use either public or private API endpoints and application routes for your cluster. Important If you are using private API endpoints, you cannot access your cluster until you update the network settings in your cloud provider account. Optional: To install the cluster in an existing AWS Virtual Private Cloud (VPC): Select Install into an existing VPC . If you are installing into an existing VPC and opted to use private API endpoints, you can select Use a PrivateLink . This option enables connections to the cluster by Red Hat Site Reliability Engineering (SRE) using only AWS PrivateLink endpoints. Note The Use a PrivateLink option cannot be changed after a cluster is created. If you are installing into an existing VPC and you want to enable an HTTP or HTTPS proxy for your cluster, select Configure a cluster-wide proxy . If you opted to install the cluster in an existing AWS VPC, provide your Virtual Private Cloud (VPC) subnet settings and select . You must have created the Cloud network address translation (NAT) and a Cloud router. See the "Additional resources" section for information about Cloud NATs and Google VPCs. Note You must ensure that your VPC is configured with a public and a private subnet for each availability zone that you want the cluster installed into. If you opted to use PrivateLink, only private subnets are required. Optional: Expand Additional security groups and select additional custom security groups to apply to nodes in the machine pools that are created by default. You must have already created the security groups and associated them with the VPC that you selected for this cluster. You cannot add or edit security groups to the default machine pools after you create the cluster. By default, the security groups you specify are added for all node types. Clear the Apply the same security groups to all node types checkbox to apply different security groups for each node type. For more information, see the requirements for Security groups under Additional resources . Accept the default application ingress settings, or to create your own custom settings, select Custom Settings . Optional: Provide route selector. Optional: Provide excluded namespaces. Select a namespace ownership policy. Select a wildcard policy. For more information about custom application ingress settings, click the information icon provided for each setting. 
If you opted to configure a cluster-wide proxy, provide your proxy configuration details on the Cluster-wide proxy page: Enter a value in at least one of the following fields: Specify a valid HTTP proxy URL . Specify a valid HTTPS proxy URL . In the Additional trust bundle field, provide a PEM-encoded X.509 certificate bundle. The bundle is added to the trusted certificate store for the cluster nodes. An additional trust bundle file is required if you use a TLS-inspecting proxy unless the identity certificate for the proxy is signed by an authority from the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. This requirement applies regardless of whether the proxy is transparent or requires explicit configuration using the http-proxy and https-proxy arguments. Click Next . For more information about configuring a proxy with OpenShift Dedicated, see Configuring a cluster-wide proxy . In the CIDR ranges dialog, configure custom classless inter-domain routing (CIDR) ranges or use the defaults that are provided. Note If you are installing into a VPC, the Machine CIDR range must match the VPC subnets. Important CIDR configurations cannot be changed later. Confirm your selections with your network administrator before proceeding. On the Cluster update strategy page, configure your update preferences: Choose a cluster update method: Select Individual updates if you want to schedule each update individually. This is the default option. Select Recurring updates to update your cluster on your preferred day and start time, when updates are available. Note You can review the end-of-life dates in the update lifecycle documentation for OpenShift Dedicated. For more information, see OpenShift Dedicated update life cycle . If you opted for recurring updates, select a preferred day of the week and upgrade start time in UTC from the drop-down menus. Optional: You can set a grace period for Node draining during cluster upgrades. A 1-hour grace period is set by default. Click Next . Note If critical security concerns that significantly impact the security or stability of a cluster occur, Red Hat Site Reliability Engineering (SRE) might schedule automatic updates to the latest z-stream version that is not impacted. The updates are applied within 48 hours after customer notifications are provided. For a description of the critical impact security rating, see Understanding Red Hat security ratings . Review the summary of your selections and click Create cluster to start the cluster installation. The installation takes approximately 30-40 minutes to complete. Optional: On the Overview tab, you can enable the delete protection feature by selecting Enable , which is located directly under Delete Protection: Disabled . This will prevent your cluster from being deleted. To disable delete protection, select Disable . By default, clusters are created with the delete protection feature disabled. Verification You can monitor the progress of the installation in the Overview page for your cluster. You can view the installation logs on the same page. Your cluster is ready when the Status in the Details section of the page is listed as Ready . 1.3. Additional resources For information about configuring a proxy with OpenShift Dedicated, see Configuring a cluster-wide proxy . For details about the AWS service control policies required for CCS deployments, see Minimum required service control policy (SCP) . 
For information about persistent storage for OpenShift Dedicated, see the Storage section in the OpenShift Dedicated service definition. For information about load balancers for OpenShift Dedicated, see the Load balancers section in the OpenShift Dedicated service definition. For more information about etcd encryption, see the etcd encryption service definition . For information about the end-of-life dates for OpenShift Dedicated versions, see the OpenShift Dedicated update life cycle . For information about the requirements for custom additional security groups, see Additional custom security groups . For information about configuring identity providers, see Configuring identity providers . For information about revoking cluster privileges, see Revoking privileges and access to an OpenShift Dedicated cluster .
null
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/openshift_dedicated_clusters_on_aws/osd-creating-a-cluster-on-aws
Part IV. Deprecated Functionality
Part IV. Deprecated Functionality This part provides an overview of functionality that has been deprecated in all minor releases of Red Hat Enterprise Linux 7 up to Red Hat Enterprise Linux 7.2. Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 7. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. For the most recent list of deprecated functionality within a particular major release, refer to the latest version of release documentation. Deprecated hardware components are not recommended for new deployments on the current or future major releases. Hardware driver updates are limited to security and critical fixes only. Red Hat recommends replacing this hardware as soon as reasonably feasible. A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from a product. Product documentation then identifies more recent packages that offer functionality similar to, identical to, or more advanced than the functionality of the deprecated package, and provides further recommendations.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/part-red_hat_enterprise_linux-7.2_release_notes-deprecated_functionality
3.2.2. Direct Routing and iptables
3.2.2. Direct Routing and iptables You may also work around the ARP issue using the direct routing method by creating iptables firewall rules. To configure direct routing using iptables , you must add rules that create a transparent proxy so that a real server will service packets sent to the VIP address, even though the VIP address does not exist on the system. The iptables method is simpler to configure than the arptables_jf method. This method also circumvents the LVS ARP issue entirely, because the virtual IP address(es) only exist on the active LVS director. However, there are performance issues using the iptables method compared to arptables_jf , as there is overhead in forwarding/masquerading every packet. You also cannot reuse ports using the iptables method. For example, it is not possible to run two separate Apache HTTP Server services bound to port 80, because both must bind to INADDR_ANY instead of the virtual IP addresses. To configure direct routing using the iptables method, perform the following steps: On each real server, run the following command for every VIP, port, and protocol (TCP or UDP) combination intended to be serviced for the real server: iptables -t nat -A PREROUTING -p <tcp|udp> -d <vip> --dport <port> -j REDIRECT This command will cause the real servers to process packets destined for the VIP and port that they are given. Save the configuration on each real server: The commands above cause the system to reload the iptables configuration on bootup - before the network is started.
[ "service iptables save chkconfig --level 2345 iptables on" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s2-lvs-direct-iptables-vsa
Chapter 3. Consumer configuration properties
Chapter 3. Consumer configuration properties key.deserializer Type: class Importance: high Deserializer class for key that implements the org.apache.kafka.common.serialization.Deserializer interface. value.deserializer Type: class Importance: high Deserializer class for value that implements the org.apache.kafka.common.serialization.Deserializer interface. bootstrap.servers Type: list Default: "" Valid Values: non-null string Importance: high A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping-this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,... . Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down). fetch.min.bytes Type: int Default: 1 Valid Values: [0,... ] Importance: high The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as that many byte(s) of data is available or the fetch request times out waiting for data to arrive. Setting this to a larger value will cause the server to wait for larger amounts of data to accumulate which can improve server throughput a bit at the cost of some additional latency. group.id Type: string Default: null Importance: high A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using subscribe(topic) or the Kafka-based offset management strategy. group.protocol Type: string Default: classic Valid Values: (case insensitive) [CONSUMER, CLASSIC] Importance: high The group protocol consumer should use. We currently support "classic" or "consumer". If "consumer" is specified, then the consumer group protocol will be used. Otherwise, the classic group protocol will be used. heartbeat.interval.ms Type: int Default: 3000 (3 seconds) Importance: high The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms , but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances. max.partition.fetch.bytes Type: int Default: 1048576 (1 mebibyte) Valid Values: [0,... ] Importance: high The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). See fetch.max.bytes for limiting the consumer request size. 
session.timeout.ms Type: int Default: 45000 (45 seconds) Importance: high The timeout used to detect client failures when using Kafka's group management facility. The client sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this client from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms . ssl.key.password Type: password Default: null Importance: high The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. ssl.keystore.certificate.chain Type: password Default: null Importance: high Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates. ssl.keystore.key Type: password Default: null Importance: high Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'. ssl.keystore.location Type: string Default: null Importance: high The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: high The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format. ssl.truststore.certificates Type: password Default: null Importance: high Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates. ssl.truststore.location Type: string Default: null Importance: high The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: high The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format. allow.auto.create.topics Type: boolean Default: true Importance: medium Allow automatic topic creation on the broker when subscribing to or assigning a topic. A topic being subscribed to will be automatically created only if the broker allows for it using auto.create.topics.enable broker configuration. This configuration must be set to false when using brokers older than 0.11.0. auto.offset.reset Type: string Default: latest Valid Values: [latest, earliest, none] Importance: medium What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted): earliest: automatically reset the offset to the earliest offset latest: automatically reset the offset to the latest offset none: throw exception to the consumer if no offset is found for the consumer's group anything else: throw exception to the consumer. Note that altering partition numbers while setting this config to latest may cause message delivery loss since producers could start to send messages to newly added partitions (i.e. no initial offsets exist yet) before consumers reset their offsets. 
client.dns.lookup Type: string Default: use_all_dns_ips Valid Values: [use_all_dns_ips, resolve_canonical_bootstrap_servers_only] Importance: medium Controls how the client uses DNS lookups. If set to use_all_dns_ips , connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to resolve_canonical_bootstrap_servers_only , resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as use_all_dns_ips . connections.max.idle.ms Type: long Default: 540000 (9 minutes) Importance: medium Close idle connections after the number of milliseconds specified by this config. default.api.timeout.ms Type: int Default: 60000 (1 minute) Valid Values: [0,... ] Importance: medium Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operations that do not specify a timeout parameter. enable.auto.commit Type: boolean Default: true Importance: medium If true the consumer's offset will be periodically committed in the background. exclude.internal.topics Type: boolean Default: true Importance: medium Whether internal topics matching a subscribed pattern should be excluded from the subscription. It is always possible to explicitly subscribe to an internal topic. fetch.max.bytes Type: int Default: 52428800 (50 mebibytes) Valid Values: [0,... ] Importance: medium The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel. group.instance.id Type: string Default: null Valid Values: non-empty string Importance: medium A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability (e.g. process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior. group.remote.assignor Type: string Default: null Importance: medium The server-side assignor to use. If no assignor is specified, the group coordinator will pick one. This configuration is applied only if group.protocol is set to "consumer". isolation.level Type: string Default: read_uncommitted Valid Values: [read_committed, read_uncommitted] Importance: medium Controls how to read messages written transactionally. If set to read_committed , consumer.poll() will only return transactional messages which have been committed. If set to read_uncommitted (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. Messages will always be returned in offset order. 
Hence, in read_committed mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, read_committed consumers will not be able to read up to the high watermark when there are in flight transactions. max.poll.interval.ms Type: int Default: 300000 (5 minutes) Valid Values: [1,... ] Importance: medium The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. For consumers using a non-null group.instance.id which reach this timeout, partitions will not be immediately reassigned. Instead, the consumer will stop sending heartbeats and partitions will be reassigned after expiration of session.timeout.ms . This mirrors the behavior of a static consumer which has shutdown. max.poll.records Type: int Default: 500 Valid Values: [1,... ] Importance: medium The maximum number of records returned in a single call to poll(). Note, that max.poll.records does not impact the underlying fetching behavior. The consumer will cache the records from each fetch request and returns them incrementally from each poll. partition.assignment.strategy Type: list Default: class org.apache.kafka.clients.consumer.RangeAssignor,class org.apache.kafka.clients.consumer.CooperativeStickyAssignor Valid Values: non-null string Importance: medium A list of class names or class types, ordered by preference, of supported partition assignment strategies that the client will use to distribute partition ownership amongst consumer instances when group management is used. Available options are: org.apache.kafka.clients.consumer.RangeAssignor : Assigns partitions on a per-topic basis. org.apache.kafka.clients.consumer.RoundRobinAssignor : Assigns partitions to consumers in a round-robin fashion. org.apache.kafka.clients.consumer.StickyAssignor : Guarantees an assignment that is maximally balanced while preserving as many existing partition assignments as possible. org.apache.kafka.clients.consumer.CooperativeStickyAssignor : Follows the same StickyAssignor logic, but allows for cooperative rebalancing. The default assignor is [RangeAssignor, CooperativeStickyAssignor], which will use the RangeAssignor by default, but allows upgrading to the CooperativeStickyAssignor with just a single rolling bounce that removes the RangeAssignor from the list. Implementing the org.apache.kafka.clients.consumer.ConsumerPartitionAssignor interface allows you to plug in a custom assignment strategy. receive.buffer.bytes Type: int Default: 65536 (64 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. request.timeout.ms Type: int Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: medium The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. 
sasl.client.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.jaas.config Type: password Default: null Importance: medium JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*; . For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.service.name Type: string Default: null Importance: medium The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.login.callback.handler.class Type: class Default: null Importance: medium The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. sasl.login.class Type: class Default: null Importance: medium The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.mechanism Type: string Default: GSSAPI Importance: medium SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism. sasl.oauthbearer.jwks.endpoint.url Type: string Default: null Importance: medium The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.token.endpoint.url Type: string Default: null Importance: medium The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization. 
security.protocol Type: string Default: PLAINTEXT Valid Values: (case insensitive) [SASL_SSL, PLAINTEXT, SSL, SASL_PLAINTEXT] Importance: medium Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. send.buffer.bytes Type: int Default: 131072 (128 kibibytes) Valid Values: [-1,... ] Importance: medium The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. socket.connection.setup.timeout.max.ms Type: long Default: 30000 (30 seconds) Importance: medium The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value. socket.connection.setup.timeout.ms Type: long Default: 10000 (10 seconds) Importance: medium The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the socket.connection.setup.timeout.max.ms value. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.keystore.type Type: string Default: JKS Importance: medium The file format of the key store file. This is optional for client. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. ssl.protocol Type: string Default: TLSv1.3 Importance: medium The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.truststore.type Type: string Default: JKS Importance: medium The file format of the trust store file. The values currently supported by the default ssl.engine.factory.class are [JKS, PKCS12, PEM]. auto.commit.interval.ms Type: int Default: 5000 (5 seconds) Valid Values: [0,... ] Importance: low The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if enable.auto.commit is set to true . auto.include.jmx.reporter Type: boolean Default: true Importance: low Deprecated. 
Whether to automatically include JmxReporter even if it's not listed in metric.reporters . This configuration will be removed in Kafka 4.0, users should instead include org.apache.kafka.common.metrics.JmxReporter in metric.reporters in order to enable the JmxReporter. check.crcs Type: boolean Default: true Importance: low Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance. client.id Type: string Default: "" Importance: low An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging. client.rack Type: string Default: "" Importance: low A rack identifier for this client. This can be any string value which indicates where this client is physically located. It corresponds with the broker config 'broker.rack'. enable.metrics.push Type: boolean Default: true Importance: low Whether to enable pushing of client metrics to the cluster, if the cluster has a client metrics subscription which matches this client. fetch.max.wait.ms Type: int Default: 500 Valid Values: [0,... ] Importance: low The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes. This config is used only for local log fetch. To tune the remote fetch maximum wait time, please refer to 'remote.fetch.max.wait.ms' broker config. interceptor.classes Type: list Default: "" Valid Values: non-null string Importance: low A list of classes to use as interceptors. Implementing the org.apache.kafka.clients.consumer.ConsumerInterceptor interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors. metadata.max.age.ms Type: long Default: 300000 (5 minutes) Valid Values: [0,... ] Importance: low The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions. metadata.recovery.strategy Type: string Default: none Valid Values: (case insensitive) [REBOOTSTRAP, NONE] Importance: low Controls how the client recovers when none of the brokers known to it is available. If set to none , the client fails. If set to rebootstrap , the client repeats the bootstrap process using bootstrap.servers . Rebootstrapping is useful when a client communicates with brokers so infrequently that the set of brokers may change entirely before the client refreshes metadata. Metadata recovery is triggered when all last-known brokers appear unavailable simultaneously. Brokers appear unavailable when disconnected and no current retry attempt is in-progress. Consider increasing reconnect.backoff.ms and reconnect.backoff.max.ms and decreasing socket.connection.setup.timeout.ms and socket.connection.setup.timeout.max.ms for the client. metric.reporters Type: list Default: "" Valid Values: non-null string Importance: low A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... 
] Importance: low The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Valid Values: [INFO, DEBUG, TRACE] Importance: low The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [0,... ] Importance: low The window of time a metrics sample is computed over. reconnect.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms. reconnect.backoff.ms Type: long Default: 50 Valid Values: [0,... ] Importance: low The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker. This value is the initial backoff value and will increase exponentially for each consecutive connection failure, up to the reconnect.backoff.max.ms value. retry.backoff.max.ms Type: long Default: 1000 (1 second) Valid Values: [0,... ] Importance: low The maximum amount of time in milliseconds to wait when retrying a request to the broker that has repeatedly failed. If provided, the backoff per client will increase exponentially for each failed request, up to this maximum. To prevent all clients from being synchronized upon retry, a randomized jitter with a factor of 0.2 will be applied to the backoff, resulting in the backoff falling within a range between 20% below and 20% above the computed value. If retry.backoff.ms is set to be higher than retry.backoff.max.ms , then retry.backoff.max.ms will be used as a constant backoff from the beginning without any exponential increase. retry.backoff.ms Type: long Default: 100 Valid Values: [0,... ] Importance: low The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. This value is the initial backoff value and will increase exponentially for each failed request, up to the retry.backoff.max.ms value. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: low Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: low Login thread sleep time between refresh attempts. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: low Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: low Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.connect.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER. sasl.login.read.timeout.ms Type: int Default: null Importance: low The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. sasl.login.refresh.buffer.seconds Type: short Default: 300 Valid Values: [0,... 
,3600] Importance: low The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Valid Values: [0,... ,900] Importance: low The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Valid Values: [0.5,... ,1.0] Importance: low Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Valid Values: [0.0,... ,0.25] Importance: low The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.oauthbearer.clock.skew.seconds Type: int Default: 30 Importance: low The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. sasl.oauthbearer.expected.audience Type: list Default: null Importance: low The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. 
If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.expected.issuer Type: string Default: null Importance: low The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.jwks.endpoint.refresh.ms Type: long Default: 3600000 (1 hour) Importance: low The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT. sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.jwks.endpoint.retry.backoff.ms Type: long Default: 100 Importance: low The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.scope.claim.name Type: string Default: scope Importance: low The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. sasl.oauthbearer.sub.claim.name Type: string Default: sub Importance: low The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. security.providers Type: string Default: null Importance: low A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface. ssl.cipher.suites Type: list Default: null Importance: low A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low The endpoint identification algorithm to validate server hostname using server certificate. ssl.engine.factory.class Type: class Default: null Importance: low The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. 
Alternatively, setting this to org.apache.kafka.common.security.ssl.CommonNameLoggingSslEngineFactory will log the common name of expired SSL certificates used by clients to authenticate at any of the brokers with log level INFO. Note that this will cause a tiny delay during establishment of new connections from mTLS clients to brokers due to the extra code for examining the certificate chain provided by the client. Note further that the implementation uses a custom truststore based on the standard Java truststore and thus might be considered a security risk due to not being as mature as the standard one. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: low The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.secure.random.implementation Type: string Default: null Importance: low The SecureRandom PRNG implementation to use for SSL cryptography operations. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: low The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.
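As a practical illustration of how the properties in this chapter fit together, the following minimal Java sketch configures and runs a consumer with a handful of the options described above. The broker addresses, group id, and topic name are hypothetical placeholders; any property that is not set explicitly falls back to the defaults listed in the tables.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ExampleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // bootstrap.servers, group.id, and the deserializers cover the minimum for group-managed consumption.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");  // hypothetical hosts
        props.put("group.id", "example-group");                       // hypothetical group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Optional tuning properties described in this chapter.
        props.put("auto.offset.reset", "earliest");  // start from the earliest offset when no committed offset exists
        props.put("enable.auto.commit", "false");    // commit offsets explicitly instead of in the background
        props.put("max.poll.records", "200");        // cap the number of records returned by a single poll()

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("example-topic"));  // hypothetical topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d key=%s value=%s%n", record.offset(), record.key(), record.value());
            }
            consumer.commitSync();  // manual commit because enable.auto.commit is false
        }
    }
}

Because enable.auto.commit is set to false in this sketch, offsets are committed explicitly with commitSync(); leaving the property at its default of true would instead commit offsets in the background every auto.commit.interval.ms milliseconds.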
[ "Further, when in `read_committed` the seekToEnd method will return the LSO ." ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/kafka_configuration_properties/consumer-configuration-properties-str
Chapter 4. Control Group Application Examples
Chapter 4. Control Group Application Examples This chapter provides application examples that take advantage of the cgroup functionality. 4.1. Prioritizing Database I/O Running each instance of a database server inside its own dedicated virtual guest allows you to allocate resources per database based on their priority. Consider the following example: a system is running two database servers inside two KVM guests. One of the databases is a high priority database and the other one a low priority database. When both database servers are run simultaneously, the I/O throughput is decreased to accommodate requests from both databases equally; Figure 4.1, "I/O throughput without resource allocation" indicates this scenario - once the low priority database is started (around time 45), I/O throughput is the same for both database servers. Figure 4.1. I/O throughput without resource allocation To prioritize the high priority database server, it can be assigned to a cgroup with a high number of reserved I/O operations, whereas the low priority database server can be assigned to a cgroup with a low number of reserved I/O operations. To achieve this, follow the steps in Procedure 4.1, "I/O throughput prioritization" , all of which are performed on the host system. Procedure 4.1. I/O throughput prioritization Attach the blkio subsystem to the /cgroup/blkio cgroup: Create a high and low priority cgroup: Acquire the PIDs of the processes that represent both virtual guests (in which the database servers are running) and move them to their specific cgroup. In our example, VM_high represents a virtual guest running a high priority database server, and VM_low represents a virtual guest running a low priority database server. For example: Set a ratio of 10:1 for the high_prio and low_prio cgroups. Processes in those cgroups (that is, processes running the virtual guests that have been added to those cgroups in the previous step) will immediately use only the resources made available to them. In our example, the low priority cgroup permits the low priority database server to use only about 10% of the I/O operations, whereas the high priority cgroup permits the high priority database server to use about 90% of the I/O operations. Figure 4.2, "I/O throughput with resource allocation" illustrates the outcome of limiting the low priority database and prioritizing the high priority database. As soon as the database servers are moved to their appropriate cgroups (around time 75), I/O throughput is divided among both servers with the ratio of 10:1. Figure 4.2. I/O throughput with resource allocation Alternatively, block device I/O throttling can be used for the low priority database to limit its number of read and write operations. For more information on the blkio subsystem, refer to Section 3.1, "blkio" .
[ "~]# mkdir /cgroup/blkio ~]# mount -t cgroup -o blkio blkio /cgroup/blkio", "~]# mkdir /cgroup/blkio/high_prio ~]# mkdir /cgroup/blkio/low_prio", "~]# ps -eLf | grep qemu | grep VM_high | awk '{print USD4}' | while read pid; do echo USDpid >> /cgroup/blkio/high_prio/tasks; done ~]# ps -eLf | grep qemu | grep VM_low | awk '{print USD4}' | while read pid; do echo USDpid >> /cgroup/blkio/low_prio/tasks; done", "~]# echo 1000 > /cgroup/blkio/high_prio/blkio.weight ~]# echo 100 > /cgroup/blkio/low_prio/blkio.weight" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/control-group-application-examples
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/integrating_the_overcloud_with_an_existing_red_hat_ceph_storage_cluster/making-open-source-more-inclusive
Release Notes for .NET 6.0 containers
Release Notes for .NET 6.0 containers .NET 6.0 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/net/6.0/html/release_notes_for_.net_6.0_containers/index
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_spring_boot_starter/using_your_subscription
Chapter 142. XJ
Chapter 142. XJ Since Camel 3.0 Only producer is supported The XJ component allows you to convert XML and JSON documents directly back and forth without the need for intermediate Java objects. You can even specify an XSLT stylesheet to convert directly to the target JSON / XML (domain) model. 142.1. Dependencies When using xj with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xj-starter</artifactId> </dependency> 142.2. URI format Note The XJ component extends the XSLT component and therefore it supports all options provided by the XSLT component as well. At a minimum, see the XSLT component documentation for how to configure the xsl template. The transformDirection option is mandatory and must be either XML2JSON or JSON2XML. The templateName parameter allows you to use the identity transform by specifying the name identity . 142.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 142.3.1. Configuring component options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 142.3.2. Configuring endpoint options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type-safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders , which allows you to not hardcode urls, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, and give more flexibility and reuse. The following two sections list all the options, firstly for the component followed by the endpoint. 142.4. Component Options The XJ component supports 11 options, which are listed below. Name Description Default Type contentCache (producer) Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean saxonConfiguration (advanced) To use a custom Saxon configuration. Configuration saxonConfigurationProperties (advanced) To set custom Saxon configuration properties. Map saxonExtensionFunctions (advanced) Allows you to use a custom net.sf.saxon.lib.ExtensionFunctionDefinition. You would need to add camel-saxon to the classpath. The function is looked up in the registry, where you can use commas to separate multiple values to lookup. String secureProcessing (advanced) Feature for XML secure processing (see javax.xml.XMLConstants). This is enabled by default. However, when using Saxon Professional you may need to turn this off to allow Saxon to be able to use Java extension functions. true boolean transformerFactoryClass (advanced) To use a custom XSLT transformer factory, specified as a FQN class name. String transformerFactoryConfigurationStrategy (advanced) A configuration strategy to apply on freshly created instances of TransformerFactory. TransformerFactoryConfigurationStrategy uriResolver (advanced) To use a custom UriResolver. Should not be used together with the option 'uriResolverFactory'. URIResolver uriResolverFactory (advanced) To use a custom UriResolver which depends on a dynamic endpoint resource URI. Should not be used together with the option 'uriResolver'. XsltUriResolverFactory 142.5. Endpoint Options The XJ endpoint is configured using URI syntax: with the following path and query parameters: 142.5.1. Path Parameters (1 parameters) Name Description Default Type resourceUri (producer) Required Path to the template. The following is supported by the default URIResolver. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod. String 142.5.2. Query Parameters (19 parameters) Name Description Default Type allowStAX (producer) Whether to allow using StAX as the javax.xml.transform.Source. You can enable this if the XSLT library supports StAX such as the Saxon library (camel-saxon). The Xalan library (default in JVM) does not support StAXSource. true boolean contentCache (producer) Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true boolean deleteOutputFile (producer) If you have output=file then this option dictates whether or not the output file should be deleted when the Exchange is done processing. 
For example suppose the output file is a temporary file, then it can be a good idea to delete it after use. false boolean failOnNullBody (producer) Whether or not to throw an exception if the input body is null. true boolean output (producer) Option to specify which output type to use. Possible values are: string, bytes, DOM, file. The first three options are all in memory based, where as file is streamed directly to a java.io.File. For file you must specify the filename in the IN header with the key XsltConstants.XSLT_FILE_NAME which is also CamelXsltFileName. Also any paths leading to the filename must be created beforehand, otherwise an exception is thrown at runtime. Enum values: string bytes DOM file string XsltOutput transformDirection (producer) Required Transform direction. Either XML2JSON or JSON2XML. Enum values: XML2JSON JSON2XML TransformDirection transformerCacheSize (producer) The number of javax.xml.transform.Transformer object that are cached for reuse to avoid calls to Template.newTransformer(). 0 int lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean entityResolver (advanced) To use a custom org.xml.sax.EntityResolver with javax.xml.transform.sax.SAXSource. EntityResolver errorListener (advanced) Allows to configure to use a custom javax.xml.transform.ErrorListener. Beware when doing this then the default error listener which captures any errors or fatal errors and store information on the Exchange as properties is not in use. So only use this option for special use-cases. ErrorListener resultHandlerFactory (advanced) Allows you to use a custom org.apache.camel.builder.xml.ResultHandlerFactory which is capable of using custom org.apache.camel.builder.xml.ResultHandler types. ResultHandlerFactory saxonConfiguration (advanced) To use a custom Saxon configuration. Configuration saxonExtensionFunctions (advanced) Allows you to use a custom net.sf.saxon.lib.ExtensionFunctionDefinition. You would need to add camel-saxon to the classpath. The function is looked up in the registry, where you can comma to separate multiple values to lookup. String secureProcessing (advanced) Feature for XML secure processing (see javax.xml.XMLConstants). This is enabled by default. However, when using Saxon Professional you may need to turn this off to allow Saxon to be able to use Java extension functions. true boolean transformerFactory (advanced) To use a custom XSLT transformer factory. TransformerFactory transformerFactoryClass (advanced) To use a custom XSLT transformer factory, specified as a FQN class name. String transformerFactoryConfigurationStrategy (advanced) A configuration strategy to apply on freshly created instances of TransformerFactory. TransformerFactoryConfigurationStrategy uriResolver (advanced) To use a custom javax.xml.transform.URIResolver. URIResolver xsltMessageLogger (advanced) A consumer to messages generated during XSLT transformations. XsltMessageLogger 142.6. 
Message Headers The XJ component supports 1 message header(s), which is/are listed below: Name Description Default Type CamelXsltFileName (producer) Constant: XSLT_FILE_NAME The XSLT file name. String 142.7. Using XJ endpoints 142.7.1. Converting JSON to XML The following route does an "identity" transform of the message because no xslt stylesheet is given. In the context of xml to xml transformations, "Identity" transform means that the output document is just a copy of the input document. In case of XJ it means it transforms the json document to an equivalent xml representation. from("direct:start"). to("xj:identity?transformDirection=JSON2XML"); Sample: The input: { "firstname": "camel", "lastname": "apache", "personalnumber": 42, "active": true, "ranking": 3.1415926, "roles": [ "a", { "x": null } ], "state": { "needsWater": true } } will output <?xml version="1.0" encoding="UTF-8"?> <object xmlns:xj="http://camel.apache.org/component/xj" xj:type="object"> <object xj:name="firstname" xj:type="string">camel</object> <object xj:name="lastname" xj:type="string">apache</object> <object xj:name="personalnumber" xj:type="int">42</object> <object xj:name="active" xj:type="boolean">true</object> <object xj:name="ranking" xj:type="float">3.1415926</object> <object xj:name="roles" xj:type="array"> <object xj:type="string">a</object> <object xj:type="object"> <object xj:name="x" xj:type="null">null</object> </object> </object> <object xj:name="state" xj:type="object"> <object xj:name="needsWater" xj:type="boolean">true</object> </object> </object> As can be seen in the output above, XJ writes some metadata in the resulting xml that can be used in further processing: XJ metadata nodes are always in the http://camel.apache.org/component/xj namespace. JSON key names are placed in the xj:name attribute. The parsed JSON type can be found in the xj:type attribute. The above example already contains all possible types. Generated XML elements are always named "object". Now we can apply a stylesheet, for example: <?xml version="1.0" encoding="UTF-8" ?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xj="http://camel.apache.org/component/xj" exclude-result-prefixes="xj"> <xsl:output omit-xml-declaration="no" encoding="UTF-8" method="xml" indent="yes"/> <xsl:template match="/"> <person> <xsl:apply-templates select="//object"/> </person> </xsl:template> <xsl:template match="object[@xj:type != 'object' and @xj:type != 'array' and string-length(@xj:name) > 0]"> <xsl:variable name="name" select="@xj:name"/> <xsl:element name="{USDname}"> <xsl:value-of select="text()"/> </xsl:element> </xsl:template> <xsl:template match="@*|node()"/> </xsl:stylesheet> to the above sample by specifying the template on the endpoint: from("direct:start"). to("xj:com/example/json2xml.xsl?transformDirection=JSON2XML"); and get the following output: <?xml version="1.0" encoding="UTF-8"?> <person> <firstname>camel</firstname> <lastname>apache</lastname> <personalnumber>42</personalnumber> <active>true</active> <ranking>3.1415926</ranking> <x>null</x> <needsWater>true</needsWater> </person> 142.7.2. Converting XML to JSON Based on the explanations above an "identity" transform will be performed when no stylesheet is given: from("direct:start"). 
to("xj:identity?transformDirection=XML2JSON"); Given the sample input <?xml version="1.0" encoding="UTF-8"?> <person> <firstname>camel</firstname> <lastname>apache</lastname> <personalnumber>42</personalnumber> <active>true</active> <ranking>3.1415926</ranking> <roles> <entry>a</entry> <entry> <x>null</x> </entry> </roles> <state> <needsWater>true</needsWater> </state> </person> will result in { "firstname": "camel", "lastname": "apache", "personalnumber": "42", "active": "true", "ranking": "3.1415926", "roles": [ "a", { "x": "null" } ], "state": { "needsWater": "true" } } You may have noted that the input xml and output json is very similar to the examples above when converting from json to xml altough nothing special is done here. We only transformed an arbitrary XML document to json. XJ uses the following rules by default: The XML root element can be named somehow, it will always end in a json root object declaration '\{}' The json key name is the name of the xml element If there is an name clash as in "<roles>" above where two "<entry>" elements exists a json array will be generated. XML elements with text-only-child-nodes will result in the usual key/string-value pair. Mixed content elements results in key/child-object pair as seen in "<state>" above. Now we can apply again a stylesheet, for example: <?xml version="1.0" encoding="UTF-8" ?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xj="http://camel.apache.org/component/xj" exclude-result-prefixes="xj"> <xsl:output omit-xml-declaration="no" encoding="UTF-8" method="xml" indent="yes"/> <xsl:template match="/"> <xsl:apply-templates/> </xsl:template> <xsl:template match="personalnumber"> <xsl:element name="{local-name()}"> <xsl:attribute name="xj:type"> <xsl:value-of select="'int'"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match="active|needsWater"> <xsl:element name="{local-name()}"> <xsl:attribute name="xj:type"> <xsl:value-of select="'boolean'"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match="ranking"> <xsl:element name="{local-name()}"> <xsl:attribute name="xj:type"> <xsl:value-of select="'float'"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match="roles"> <xsl:element name="{local-name()}"> <xsl:attribute name="xj:type"> <xsl:value-of select="'array'"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match="*[normalize-space(text()) = 'null']"> <xsl:element name="{local-name()}"> <xsl:attribute name="xj:type"> <xsl:value-of select="'null'"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*|node()"/> </xsl:copy> </xsl:template> </xsl:stylesheet> to the sample above by specifying the template on the endpoint: from("direct:start"). to("xj:com/example/xml2json.xsl?transformDirection=XML2JSON"); and get the following output: { "firstname": "camel", "lastname": "apache", "personalnumber": 42, "active": true, "ranking": 3.1415926, "roles": [ "a", { "x": null } ], "state": { "needsWater": true } } Note, this transformation resulted in exactly the same json document as we used as input to the json2xml convertion. 
The following XML document is what is passed to XJ after the XSL transformation: <?xml version="1.0" encoding="UTF-8"?> <person> <firstname>camel</firstname> <lastname>apache</lastname> <personalnumber xmlns:xj="http://camel.apache.org/component/xj" xj:type="int">42</personalnumber> <active xmlns:xj="http://camel.apache.org/component/xj" xj:type="boolean">true</active> <ranking xmlns:xj="http://camel.apache.org/component/xj" xj:type="float">3.1415926</ranking> <roles xmlns:xj="http://camel.apache.org/component/xj" xj:type="array"> <entry>a</entry> <entry> <x xj:type="null">null</x> </entry> </roles> <state> <needsWater xmlns:xj="http://camel.apache.org/component/xj" xj:type="boolean">true</needsWater> </state> </person> In the stylesheet we just provided the minimal required type hints to get the same result. The supported type hints are exactly the same as those XJ writes to an XML document when converting from JSON to XML. This means that we can feed the result document from the JSON-to-XML transformation sample above back in: <?xml version="1.0" encoding="UTF-8"?> <object xmlns:xj="http://camel.apache.org/component/xj" xj:type="object"> <object xj:name="firstname" xj:type="string">camel</object> <object xj:name="lastname" xj:type="string">apache</object> <object xj:name="personalnumber" xj:type="int">42</object> <object xj:name="active" xj:type="boolean">true</object> <object xj:name="ranking" xj:type="float">3.1415926</object> <object xj:name="roles" xj:type="array"> <object xj:type="string">a</object> <object xj:type="object"> <object xj:name="x" xj:type="null">null</object> </object> </object> <object xj:name="state" xj:type="object"> <object xj:name="needsWater" xj:type="boolean">true</object> </object> </object> and get the same output again: { "firstname": "camel", "lastname": "apache", "personalnumber": 42, "active": true, "ranking": 3.1415926, "roles": [ "a", { "x": null } ], "state": { "needsWater": true } } As seen in the example above: * xj:type lets you specify exactly the desired output type * xj:name lets you override the JSON key name. This is required when you want to generate key names that contain characters that are not allowed in XML element names. 142.7.2.1. Available type hints @xj:type= Description object Generate a json object array Generate a json array string Generate a json string int Generate a json number without fractional part float Generate a json number with fractional part boolean Generate a json boolean null Generate an empty value, using the word null 142.8. Spring Boot Auto-Configuration The component supports 12 options, which are listed below. Name Description Default Type camel.component.xj.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.xj.content-cache Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true Boolean camel.component.xj.enabled Whether to enable auto configuration of the xj component. This is enabled by default.
Boolean camel.component.xj.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.xj.saxon-configuration To use a custom Saxon configuration. The option is a net.sf.saxon.Configuration type. Configuration camel.component.xj.saxon-configuration-properties To set custom Saxon configuration properties. Map camel.component.xj.saxon-extension-functions Allows you to use a custom net.sf.saxon.lib.ExtensionFunctionDefinition. You would need to add camel-saxon to the classpath. The function is looked up in the registry, where you can use commas to separate multiple values to lookup. String camel.component.xj.secure-processing Feature for XML secure processing (see javax.xml.XMLConstants). This is enabled by default. However, when using Saxon Professional you may need to turn this off to allow Saxon to be able to use Java extension functions. true Boolean camel.component.xj.transformer-factory-class To use a custom XSLT transformer factory, specified as a FQN class name. String camel.component.xj.transformer-factory-configuration-strategy A configuration strategy to apply on freshly created instances of TransformerFactory. The option is a org.apache.camel.component.xslt.TransformerFactoryConfigurationStrategy type. TransformerFactoryConfigurationStrategy camel.component.xj.uri-resolver To use a custom UriResolver. Should not be used together with the option 'uriResolverFactory'. The option is a javax.xml.transform.URIResolver type. URIResolver camel.component.xj.uri-resolver-factory To use a custom UriResolver which depends on a dynamic endpoint resource URI. Should not be used together with the option 'uriResolver'. The option is a org.apache.camel.component.xslt.XsltUriResolverFactory type. XsltUriResolverFactory
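For reference, here is a minimal application.properties sketch showing how a few of the auto-configuration options listed above could be set in a Camel Spring Boot application. The property names come from the table above; the values are purely illustrative and not recommendations:
# Reload the stylesheet on every exchange (useful during development)
camel.component.xj.content-cache=false
# Keep XML secure processing enabled (this is already the default)
camel.component.xj.secure-processing=true
# Fail fast at startup instead of deferring producer creation to the first message
camel.component.xj.lazy-start-producer=false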
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xj-starter</artifactId> </dependency>", "xj:templateName?transformDirection=XML2JSON|JSON2XML[&options]", "xj:resourceUri", "from(\"direct:start\"). to(\"xj:identity?transformDirection=JSON2XML\");", "{ \"firstname\": \"camel\", \"lastname\": \"apache\", \"personalnumber\": 42, \"active\": true, \"ranking\": 3.1415926, \"roles\": [ \"a\", { \"x\": null } ], \"state\": { \"needsWater\": true } }", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <object xmlns:xj=\"http://camel.apache.org/component/xj\" xj:type=\"object\"> <object xj:name=\"firstname\" xj:type=\"string\">camel</object> <object xj:name=\"lastname\" xj:type=\"string\">apache</object> <object xj:name=\"personalnumber\" xj:type=\"int\">42</object> <object xj:name=\"active\" xj:type=\"boolean\">true</object> <object xj:name=\"ranking\" xj:type=\"float\">3.1415926</object> <object xj:name=\"roles\" xj:type=\"array\"> <object xj:type=\"string\">a</object> <object xj:type=\"object\"> <object xj:name=\"x\" xj:type=\"null\">null</object> </object> </object> <object xj:name=\"state\" xj:type=\"object\"> <object xj:name=\"needsWater\" xj:type=\"boolean\">true</object> </object> </object>", "<?xml version=\"1.0\" encoding=\"UTF-8\" ?> <xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\" xmlns:xj=\"http://camel.apache.org/component/xj\" exclude-result-prefixes=\"xj\"> <xsl:output omit-xml-declaration=\"no\" encoding=\"UTF-8\" method=\"xml\" indent=\"yes\"/> <xsl:template match=\"/\"> <person> <xsl:apply-templates select=\"//object\"/> </person> </xsl:template> <xsl:template match=\"object[@xj:type != 'object' and @xj:type != 'array' and string-length(@xj:name) > 0]\"> <xsl:variable name=\"name\" select=\"@xj:name\"/> <xsl:element name=\"{USDname}\"> <xsl:value-of select=\"text()\"/> </xsl:element> </xsl:template> <xsl:template match=\"@*|node()\"/> </xsl:stylesheet>", "from(\"direct:start\"). to(\"xj:com/example/json2xml.xsl?transformDirection=JSON2XML\");", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <person> <firstname>camel</firstname> <lastname>apache</lastname> <personalnumber>42</personalnumber> <active>true</active> <ranking>3.1415926</ranking> <x>null</x> <needsWater>true</needsWater> </person>", "from(\"direct:start\"). 
to(\"xj:identity?transformDirection=XML2JSON\");", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <person> <firstname>camel</firstname> <lastname>apache</lastname> <personalnumber>42</personalnumber> <active>true</active> <ranking>3.1415926</ranking> <roles> <entry>a</entry> <entry> <x>null</x> </entry> </roles> <state> <needsWater>true</needsWater> </state> </person>", "{ \"firstname\": \"camel\", \"lastname\": \"apache\", \"personalnumber\": \"42\", \"active\": \"true\", \"ranking\": \"3.1415926\", \"roles\": [ \"a\", { \"x\": \"null\" } ], \"state\": { \"needsWater\": \"true\" } }", "<?xml version=\"1.0\" encoding=\"UTF-8\" ?> <xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\" xmlns:xj=\"http://camel.apache.org/component/xj\" exclude-result-prefixes=\"xj\"> <xsl:output omit-xml-declaration=\"no\" encoding=\"UTF-8\" method=\"xml\" indent=\"yes\"/> <xsl:template match=\"/\"> <xsl:apply-templates/> </xsl:template> <xsl:template match=\"personalnumber\"> <xsl:element name=\"{local-name()}\"> <xsl:attribute name=\"xj:type\"> <xsl:value-of select=\"'int'\"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match=\"active|needsWater\"> <xsl:element name=\"{local-name()}\"> <xsl:attribute name=\"xj:type\"> <xsl:value-of select=\"'boolean'\"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match=\"ranking\"> <xsl:element name=\"{local-name()}\"> <xsl:attribute name=\"xj:type\"> <xsl:value-of select=\"'float'\"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match=\"roles\"> <xsl:element name=\"{local-name()}\"> <xsl:attribute name=\"xj:type\"> <xsl:value-of select=\"'array'\"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match=\"*[normalize-space(text()) = 'null']\"> <xsl:element name=\"{local-name()}\"> <xsl:attribute name=\"xj:type\"> <xsl:value-of select=\"'null'\"/> </xsl:attribute> <xsl:apply-templates/> </xsl:element> </xsl:template> <xsl:template match=\"@*|node()\"> <xsl:copy> <xsl:apply-templates select=\"@*|node()\"/> </xsl:copy> </xsl:template> </xsl:stylesheet>", "from(\"direct:start\"). 
to(\"xj:com/example/xml2json.xsl?transformDirection=XML2JSON\");", "{ \"firstname\": \"camel\", \"lastname\": \"apache\", \"personalnumber\": 42, \"active\": true, \"ranking\": 3.1415926, \"roles\": [ \"a\", { \"x\": null } ], \"state\": { \"needsWater\": true } }", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <person> <firstname>camel</firstname> <lastname>apache</lastname> <personalnumber xmlns:xj=\"http://camel.apache.org/component/xj\" xj:type=\"int\">42</personalnumber> <active xmlns:xj=\"http://camel.apache.org/component/xj\" xj:type=\"boolean\">true</active> <ranking xmlns:xj=\"http://camel.apache.org/component/xj\" xj:type=\"float\">3.1415926</ranking> <roles xmlns:xj=\"http://camel.apache.org/component/xj\" xj:type=\"array\"> <entry>a</entry> <entry> <x xj:type=\"null\">null</x> </entry> </roles> <state> <needsWater xmlns:xj=\"http://camel.apache.org/component/xj\" xj:type=\"boolean\">true</needsWater> </state> </person>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <object xmlns:xj=\"http://camel.apache.org/component/xj\" xj:type=\"object\"> <object xj:name=\"firstname\" xj:type=\"string\">camel</object> <object xj:name=\"lastname\" xj:type=\"string\">apache</object> <object xj:name=\"personalnumber\" xj:type=\"int\">42</object> <object xj:name=\"active\" xj:type=\"boolean\">true</object> <object xj:name=\"ranking\" xj:type=\"float\">3.1415926</object> <object xj:name=\"roles\" xj:type=\"array\"> <object xj:type=\"string\">a</object> <object xj:type=\"object\"> <object xj:name=\"x\" xj:type=\"null\">null</object> </object> </object> <object xj:name=\"state\" xj:type=\"object\"> <object xj:name=\"needsWater\" xj:type=\"boolean\">true</object> </object> </object>", "{ \"firstname\": \"camel\", \"lastname\": \"apache\", \"personalnumber\": 42, \"active\": true, \"ranking\": 3.1415926, \"roles\": [ \"a\", { \"x\": null } ], \"state\": { \"needsWater\": true } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-xj-component-starter
Observability
Observability Red Hat OpenShift Service Mesh 3.0.0tp1 Observability and Service Mesh Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_mesh/3.0.0tp1/html/observability/index
Chapter 1. Extension APIs
Chapter 1. Extension APIs 1.1. APIService [apiregistration.k8s.io/v1] Description APIService represents a server for a particular GroupVersion. Name must be "version.group". Type object 1.2. CustomResourceDefinition [apiextensions.k8s.io/v1] Description CustomResourceDefinition represents a resource that should be exposed on the API server. Its name MUST be in the format <.spec.name>.<.spec.group>. Type object 1.3. MutatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description MutatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects, and may change, the object. Type object 1.4. ValidatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description ValidatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects an object without changing it. Type object
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/extension_apis/extension-apis
Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads
Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads Red Hat OpenShift Data Foundation 4.14 The OpenShift Data Foundation Disaster Recovery capabilities for Metropolitan and Regional regions, which also include Disaster Recovery with stretch cluster, are now Generally Available. Red Hat Storage Documentation Team Abstract The intent of this solution guide is to detail the steps necessary to deploy OpenShift Data Foundation for disaster recovery with Advanced Cluster Management and stretch cluster to achieve a highly available storage infrastructure.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/index
Tooling Guide
Tooling Guide Red Hat build of Apache Camel 4.0 Tooling Guide provided by Red Hat Red Hat build of Apache Camel Documentation Team [email protected] Red Hat build of Apache Camel Support Team https://access.redhat.com/support
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/tooling_guide/index
Chapter 4. Using Samba for Active Directory Integration
Chapter 4. Using Samba for Active Directory Integration Samba implements the Server Message Block (SMB) protocol in Red Hat Enterprise Linux. The SMB protocol is used to access resources on a server, such as file shares and shared printers. You can use Samba to authenticate Active Directory (AD) domain users to a Domain Controller (DC). Additionally, you can use Samba to share printers and local directories to other SMB clients in the network. 4.1. Using winbindd to Authenticate Domain Users Samba's winbindd service provides an interface for the Name Service Switch (NSS) and enables domain users to authenticate to AD when logging into the local system. Using winbindd provides the benefit that you can enhance the configuration to share directories and printers without installing additional software. For further detail, see the section about Samba in the Red Hat System Administrator's Guide . 4.1.1. Joining an AD Domain If you want to join an AD domain and use the Winbind service, use the realm join --client-software=winbind domain_name command. The realm utility automatically updates the configuration files, such as those for Samba, Kerberos, and PAM. For further details and examples, see the Setting up Samba as a Domain Member section in the Red Hat System Administrator's Guide .
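As a brief illustration of the commands involved (the domain name ad.example.com and the AD\administrator user below are placeholders for your own environment, not values from this guide):
# Join the AD domain, using winbind as the client software
realm join --client-software=winbind ad.example.com
# Verify that domain users can be resolved through NSS and winbind
getent passwd 'AD\administrator'
id 'AD\administrator'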
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/winbind
Chapter 13. Host Status in Satellite
Chapter 13. Host Status in Satellite In Satellite, each host has a global status that indicates which hosts need attention. Each host also has sub-statuses that represent status of a particular feature. With any change of a sub-status, the global status is recalculated and the result is determined by statuses of all sub-statuses. 13.1. Host Global Status Overview The global status represents the overall status of a particular host. The status can have one of three possible values: OK , Warning , or Error . You can find global status on the Hosts Overview page. The status displays a small icon to host name and has a color that corresponds with the status. Hovering over the icon renders a tooltip with sub-status information to quickly find out more details. To view the global status for a host, in the Satellite web UI, navigate to Hosts > All Hosts . OK No errors were reported by any sub-status. This status is highlighted with the color green. Warning While no error was detected, some sub-status raised a warning. For example, there are no configuration management reports for the host even though the host is configured to send reports. It is a good practice to investigate any warnings to ensure that your deployment remains healthy. This status is highlighted with the color yellow. Error Some sub-status reports a failure. For example, a run contains some failed resources. This status is highlighted with the color red. Search syntax If you want to search for hosts according to their status, use the syntax for searching in Satellite that is outlined in the https://access.redhat.com/documentation/en-us/red_hat_satellite/6.11/html-single/administering_red_hat_satellite/index#Searching_and_Bookmarking_admin chapter of the Administering Satellite guide, and then build your searches out using the following status-related examples: To search for hosts that have an OK status: To search for all hosts that deserve attention: 13.2. Host Sub-status Overview A sub-status monitors only part of a host's capabilities. Currently, Satellite ships only with Build and Configuration sub-statuses. There can be more sub-statuses depending on which plugins you add to your Satellite. The build sub-status is relevant for managed hosts and when Satellite runs in unattended mode. The configuration sub-status is only relevant if Satellite uses a configuration management system like Ansible, Puppet, or Salt. To view the sub-status for a host, in the Satellite web UI, navigate to Hosts > All Hosts and click the host whose full status you want to inspect. You can also view substatus information in the hover help for each host. In the Properties table of the host details' page, you can view both the global host status and all sub-statuses. Each sub-status can define its own set of possible values that are mapped to the three global status values. The Build sub-status has two possible values - pending and built that are both mapped to global OK value. The Configuration status has more possible values that map to the global status as follows: sub-statuses that map to the global OK status Active During the last run, some resources were applied. Pending During the last run, some resources would be applied but your configuration management integration was configured to run in noop mode. No changes During the last run, nothing changed. No reports This can be both a Warning or OK sub-status. 
This occurs when there are no reports. If the host uses, for example, an associated configuration management proxy, or the always_show_configuration_status setting is set to true , it maps to Warning . Sub-status that maps to the global Error status Error This indicates an error during configuration, for example, a run failed to install a package. sub-statuses that map to the global Warning status Out of sync A configuration report was not received within the expected interval, based on the outofsync_interval . Reports are identified by an origin and can have different intervals based upon it. No reports When your host uses a configuration management system but Satellite does not receive reports, it maps to Warning . Otherwise it is mapped to OK. Search syntax If you want to search for hosts according to their sub-status, use the syntax for searching in Satellite that is outlined in the Searching and Bookmarking chapter of the Administering Satellite guide, and then build your searches out using the following status-related examples: You can search for hosts' configuration sub-statuses based on their last reported state. For example, to find hosts that have at least one pending resource: To find hosts that restarted some service during the last run: To find hosts that have an interesting last run that might indicate something has happened:
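You can also combine the global status with a sub-status field using the standard Satellite search operators. As a sketch that uses only the fields described above, the following query finds hosts whose global status is Warning and that still have pending resources:
global_status = warning and status.pending > 0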
[ "global_status = ok", "global_status = error or global_status = warning", "status.pending > 0", "status.restarted > 0", "status.interesting = true" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_hosts/host_status_managing-hosts
Chapter 20. Managing local storage using RHEL System Roles
Chapter 20. Managing local storage using RHEL System Roles To manage LVM and local file systems (FS) using Ansible, you can use the storage role, which is one of the RHEL System Roles available in RHEL 8. Using the storage role enables you to automate administration of file systems on disks and logical volumes on multiple machines and across all versions of RHEL starting with RHEL 7.7. For more information about RHEL System Roles and how to apply them, see Introduction to RHEL System Roles . 20.1. Introduction to the storage RHEL System Role The storage role can manage: File systems on disks which have not been partitioned Complete LVM volume groups including their logical volumes and file systems MD RAID volumes and their file systems With the storage role, you can perform the following tasks: Create a file system Remove a file system Mount a file system Unmount a file system Create LVM volume groups Remove LVM volume groups Create logical volumes Remove logical volumes Create RAID volumes Remove RAID volumes Create LVM volume groups with RAID Remove LVM volume groups with RAID Create encrypted LVM volume groups Create LVM logical volumes with RAID 20.2. Parameters that identify a storage device in the storage RHEL System Role Your storage role configuration affects only the file systems, volumes, and pools that you list in the following variables. storage_volumes List of file systems on all unpartitioned disks to be managed. storage_volumes can also include raid volumes. Partitions are currently unsupported. storage_pools List of pools to be managed. Currently the only supported pool type is LVM. With LVM, pools represent volume groups (VGs). Under each pool there is a list of volumes to be managed by the role. With LVM, each volume corresponds to a logical volume (LV) with a file system. 20.3. Example Ansible playbook to create an XFS file system on a block device This section provides an example Ansible playbook. This playbook applies the storage role to create an XFS file system on a block device using the default parameters. Warning The storage role can create a file system only on an unpartitioned, whole disk or a logical volume (LV). It cannot create the file system on a partition. Example 20.1. A playbook that creates XFS on /dev/sdb The volume name ( barefs in the example) is currently arbitrary. The storage role identifies the volume by the disk device listed under the disks: attribute. You can omit the fs_type: xfs line because XFS is the default file system in RHEL 8. To create the file system on an LV, provide the LVM setup under the disks: attribute, including the enclosing volume group. For details, see Example Ansible playbook to manage logical volumes . Do not provide the path to the LV device. Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.4. Example Ansible playbook to persistently mount a file system This section provides an example Ansible playbook. This playbook applies the storage role to immediately and persistently mount an XFS file system. Example 20.2. A playbook that mounts a file system on /dev/sdb to /mnt/data This playbook adds the file system to the /etc/fstab file, and mounts the file system immediately. If the file system on the /dev/sdb device or the mount point directory do not exist, the playbook creates them. Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.5. 
Example Ansible playbook to manage logical volumes This section provides an example Ansible playbook. This playbook applies the storage role to create an LVM logical volume in a volume group. Example 20.3. A playbook that creates a mylv logical volume in the myvg volume group The myvg volume group consists of the following disks: /dev/sda /dev/sdb /dev/sdc If the myvg volume group already exists, the playbook adds the logical volume to the volume group. If the myvg volume group does not exist, the playbook creates it. The playbook creates an Ext4 file system on the mylv logical volume, and persistently mounts the file system at /mnt . Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.6. Example Ansible playbook to enable online block discard This section provides an example Ansible playbook. This playbook applies the storage role to mount an XFS file system with online block discard enabled. Example 20.4. A playbook that enables online block discard on /mnt/data/ Additional resources Example Ansible playbook to persistently mount a file system The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.7. Example Ansible playbook to create and mount an Ext4 file system This section provides an example Ansible playbook. This playbook applies the storage role to create and mount an Ext4 file system. Example 20.5. A playbook that creates Ext4 on /dev/sdb and mounts it at /mnt/data The playbook creates the file system on the /dev/sdb disk. The playbook persistently mounts the file system at the /mnt/data directory. The label of the file system is label-name . Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.8. Example Ansible playbook to create and mount an ext3 file system This section provides an example Ansible playbook. This playbook applies the storage role to create and mount an Ext3 file system. Example 20.6. A playbook that creates Ext3 on /dev/sdb and mounts it at /mnt/data The playbook creates the file system on the /dev/sdb disk. The playbook persistently mounts the file system at the /mnt/data directory. The label of the file system is label-name . Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.9. Example Ansible playbook to resize an existing Ext4 or Ext3 file system using the storage RHEL System Role This section provides an example Ansible playbook. This playbook applies the storage role to resize an existing Ext4 or Ext3 file system on a block device. Example 20.7. A playbook that set up a single volume on a disk If the volume in the example already exists, to resize the volume, you need to run the same playbook, just with a different value for the parameter size . For example: Example 20.8. A playbook that resizes ext4 on /dev/sdb The volume name (barefs in the example) is currently arbitrary. The Storage role identifies the volume by the disk device listed under the disks: attribute. Note Using the Resizing action in other file systems can destroy the data on the device you are working on. Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.10. Example Ansible playbook to resize an existing file system on LVM using the storage RHEL System Role This section provides an example Ansible playbook. This playbook applies the storage RHEL System Role to resize an LVM logical volume with a file system. 
Warning Using the Resizing action in other file systems can destroy the data on the device you are working on. Example 20.9. A playbook that resizes existing mylv1 and myvl2 logical volumes in the myvg volume group This playbook resizes the following existing file systems: The Ext4 file system on the mylv1 volume, which is mounted at /opt/mount1 , resizes to 10 GiB. The Ext4 file system on the mylv2 volume, which is mounted at /opt/mount2 , resizes to 50 GiB. Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.11. Example Ansible playbook to create a swap volume using the storage RHEL System Role This section provides an example Ansible playbook. This playbook applies the storage role to create a swap volume, if it does not exist, or to modify the swap volume, if it already exist, on a block device using the default parameters. Example 20.10. A playbook that creates or modify an existing XFS on /dev/sdb The volume name ( swap_fs in the example) is currently arbitrary. The storage role identifies the volume by the disk device listed under the disks: attribute. Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.12. Configuring a RAID volume using the storage System Role With the storage System Role, you can configure a RAID volume on RHEL using Red Hat Ansible Automation Platform and Ansible-Core. Create an Ansible playbook with the parameters to configure a RAID volume to suit your requirements. Prerequisites The Ansible Core package is installed on the control machine. You have the rhel-system-roles package installed on the system from which you want to run the playbook. You have an inventory file detailing the systems on which you want to deploy a RAID volume using the storage System Role. Procedure Create a new playbook.yml file with the following content: --- - name: Configure the storage hosts: managed-node-01.example.com tasks: - name: Create a RAID on sdd, sde, sdf, and sdg include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_volumes: - name: data type: raid disks: [sdd, sde, sdf, sdg] raid_level: raid0 raid_chunk_size: 32 KiB mount_point: /mnt/data state: present Warning Device names might change in certain circumstances, for example, when you add a new disk to a system. Therefore, to prevent data loss, do not use specific disk names in the playbook. Optional: Verify the playbook syntax: Run the playbook: Additional resources Managing RAID The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file Preparing a control node and managed nodes to use RHEL System Roles . 20.13. Configuring an LVM pool with RAID using the storage RHEL System Role With the storage System Role, you can configure an LVM pool with RAID on RHEL using Red Hat Ansible Automation Platform. In this section you will learn how to set up an Ansible playbook with the available parameters to configure an LVM pool with RAID. Prerequisites The Ansible Core package is installed on the control machine. You have the rhel-system-roles package installed on the system from which you want to run the playbook. You have an inventory file detailing the systems on which you want to configure an LVM pool with RAID using the storage System Role. 
Procedure Create a new playbook.yml file with the following content: - hosts: all vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] raid_level: raid1 volumes: - name: my_pool size: "1 GiB" mount_point: "/mnt/app/shared" fs_type: xfs state: present roles: - name: rhel-system-roles.storage Note To create an LVM pool with RAID, you must specify the RAID type using the raid_level parameter. Optional. Verify playbook syntax. Run the playbook on your inventory file: Additional resources Managing RAID . The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.14. Example Ansible playbook to compress and deduplicate a VDO volume on LVM using the storage RHEL System Role This section provides an example Ansible playbook. This playbook applies the storage RHEL System Role to enable compression and deduplication of Logical Volumes (LVM) using Virtual Data Optimizer (VDO). Example 20.11. A playbook that creates a mylv1 LVM VDO volume in the myvg volume group In this example, the compression and deduplication pools are set to true, which specifies that the VDO is used. The following describes the usage of these parameters: The deduplication is used to deduplicate the duplicated data stored on the storage volume. The compression is used to compress the data stored on the storage volume, which results in more storage capacity. The vdo_pool_size specifies the actual size the volume takes on the device. The virtual size of VDO volume is set by the size parameter. NOTE: Because of the Storage role use of LVM VDO, only one volume per pool can use the compression and deduplication. 20.15. Creating a LUKS2 encrypted volume using the storage RHEL System Role You can use the storage role to create and configure a volume encrypted with LUKS by running an Ansible playbook. Prerequisites Access and permissions to one or more managed nodes, which are systems you want to configure with the crypto_policies System Role. An inventory file, which lists the managed nodes. Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems. On the control node, the ansible-core and rhel-system-roles packages are installed. Important RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible , ansible-playbook , connectors such as docker and podman , and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article. RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article. Procedure Create a new playbook.yml file with the following content: You can also add the other encryption parameters such as encryption_key , encryption_cipher , encryption_key_size , and encryption_luks version in the playbook.yml file. 
Optional: Verify playbook syntax: Run the playbook on your inventory file: Verification View the encryption status: Verify the created LUKS encrypted volume: View the cryptsetup parameters in the playbook.yml file, which the storage role supports: Additional resources Encrypting block devices using LUKS /usr/share/ansible/roles/rhel-system-roles.storage/README.md file 20.16. Example Ansible playbook to express pool volume sizes as percentage using the storage RHEL System Role This section provides an example Ansible playbook. This playbook applies the storage System Role to enable you to express Logical Manager Volumes (LVM) volume sizes as a percentage of the pool's total size. Example 20.12. A playbook that express volume sizes as a percentage of the pool's total size This example specifies the size of LVM volumes as a percentage of the pool size, for example: "60%". Additionally, you can also specify the size of LVM volumes as a percentage of the pool size in a human-readable size of the file system, for example, "10g" or "50 GiB". 20.17. Additional resources /usr/share/doc/rhel-system-roles/storage/ /usr/share/ansible/roles/rhel-system-roles.storage/
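As a minimal sketch of the inventory file that the ansible-playbook -i inventory.file commands in this chapter expect, the following is illustrative only; the group name storage_servers and the second host name are placeholders for your own environment:
# inventory.file
[storage_servers]
managed-node-01.example.com
managed-node-02.example.com
Playbooks in this chapter that target hosts: all apply to every host listed in such an inventory, while the RAID example targets managed-node-01.example.com directly.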
[ "--- - hosts: all vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs roles: - rhel-system-roles.storage", "--- - hosts: all vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data roles: - rhel-system-roles.storage", "- hosts: all vars: storage_pools: - name: myvg disks: - sda - sdb - sdc volumes: - name: mylv size: 2G fs_type: ext4 mount_point: /mnt/data roles: - rhel-system-roles.storage", "--- - hosts: all vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data mount_options: discard roles: - rhel-system-roles.storage", "--- - hosts: all vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: ext4 fs_label: label-name mount_point: /mnt/data roles: - rhel-system-roles.storage", "--- - hosts: all vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: ext3 fs_label: label-name mount_point: /mnt/data roles: - rhel-system-roles.storage", "--- - name: Create a disk device mounted on /opt/barefs - hosts: all vars: storage_volumes: - name: barefs type: disk disks: - /dev/sdb size: 12 GiB fs_type: ext4 mount_point: /opt/barefs roles: - rhel-system-roles.storage", "--- - name: Create a disk device mounted on /opt/barefs - hosts: all vars: storage_volumes: - name: barefs type: disk disks: - /dev/sdb size: 10 GiB fs_type: ext4 mount_point: /opt/barefs roles: - rhel-system-roles.storage", "--- - hosts: all vars: storage_pools: - name: myvg disks: - /dev/sda - /dev/sdb - /dev/sdc volumes: - name: mylv1 size: 10 GiB fs_type: ext4 mount_point: /opt/mount1 - name: mylv2 size: 50 GiB fs_type: ext4 mount_point: /opt/mount2 - name: Create LVM pool over three disks include_role: name: rhel-system-roles.storage", "--- - name: Create a disk device with swap - hosts: all vars: storage_volumes: - name: swap_fs type: disk disks: - /dev/sdb size: 15 GiB fs_type: swap roles: - rhel-system-roles.storage", "--- - name: Configure the storage hosts: managed-node-01.example.com tasks: - name: Create a RAID on sdd, sde, sdf, and sdg include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_volumes: - name: data type: raid disks: [sdd, sde, sdf, sdg] raid_level: raid0 raid_chunk_size: 32 KiB mount_point: /mnt/data state: present", "ansible-playbook --syntax-check playbook.yml", "ansible-playbook -i inventory.file /path/to/file/playbook.yml", "- hosts: all vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] raid_level: raid1 volumes: - name: my_pool size: \"1 GiB\" mount_point: \"/mnt/app/shared\" fs_type: xfs state: present roles: - name: rhel-system-roles.storage", "ansible-playbook --syntax-check playbook.yml", "ansible-playbook -i inventory.file /path/to/file/playbook.yml", "--- - name: Create LVM VDO volume under volume group 'myvg' hosts: all roles: -rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - /dev/sdb volumes: - name: mylv1 compression: true deduplication: true vdo_pool_size: 10 GiB size: 30 GiB mount_point: /mnt/app/shared", "- hosts: all vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs fs_label: label-name mount_point: /mnt/data encryption: true encryption_password: your-password roles: - rhel-system-roles.storage", "ansible-playbook --syntax-check playbook.yml", "ansible-playbook -i inventory.file /path/to/file/playbook.yml", "cryptsetup status sdb /dev/mapper/ sdb is active and is in use. 
type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/ sdb [...]", "cryptsetup luksDump /dev/ sdb Version: 2 Epoch: 6 Metadata area: 16384 [bytes] Keyslots area: 33521664 [bytes] UUID: a4c6be82-7347-4a91-a8ad-9479b72c9426 Label: (no label) Subsystem: (no subsystem) Flags: allow-discards Data segments: 0: crypt offset: 33554432 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 4096 [bytes] [...]", "cat ~/playbook.yml - hosts: all vars: storage_volumes: - name: foo type: disk disks: - nvme0n1 fs_type: xfs fs_label: label-name mount_point: /mnt/data encryption: true #encryption_password: passwdpasswd encryption_key: /home/passwd_key encryption_cipher: aes-xts-plain64 encryption_key_size: 512 encryption_luks_version: luks2 roles: - rhel-system-roles.storage", "--- - name: Express volume sizes as a percentage of the pool's total size hosts: all roles - rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - /dev/sdb volumes: - name: data size: 60% mount_point: /opt/mount/data - name: web size: 30% mount_point: /opt/mount/web - name: cache size: 10% mount_point: /opt/cache/mount" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/managing-local-storage-using-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles
Data Security and Hardening Guide
Data Security and Hardening Guide Red Hat Ceph Storage 5 Red Hat Ceph Storage Data Security and Hardening Guide Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/data_security_and_hardening_guide/index
3.4. Customizing the Schema
3.4. Customizing the Schema The standard schema can be extended if it is too limited for the directory needs. The web console in Directory Server can be used to extend the schema by easily adding attributes and object classes. It is also possible to create an LDIF file and add schema elements manually. For more information, see the Red Hat Directory Server Administration Guide . Keep the following rules in mind when customizing the Directory Server schema: Keep the schema as simple as possible. Reuse existing schema elements whenever possible. Minimize the number of mandatory attributes defined for each object class. Do not define more than one object class or attribute for the same purpose (data). Do not modify any existing definitions of attributes or object classes. Note When customizing the schema, never delete or replace the standard schema. Doing so can lead to compatibility problems with other directories or other LDAP client applications. Custom object classes and attributes are defined in the 99user.ldif file. Each individual instance maintains its own 99user.ldif file in the /etc/dirsrv/slapd- instance_name /schema/ directory. It is also possible to create custom schema files and dynamically reload the schema into the server. 3.4.1. When to Extend the Schema While the object classes and attributes supplied with the Directory Server should meet most common corporate needs, a given object class may not store specialized information about an organization. Also, the schema may need extended to support the object classes and attributes required by an LDAP-enabled application's unique data needs. 3.4.2. Getting and Assigning Object Identifiers Each LDAP object class or attribute must be assigned a unique name and object identifier (OID). When a schema is defined, the elements require a base OID which is unique to your organization. One OID is enough to meet all schema needs. Simply add another level of hierarchy to create new branches for attributes and object classes. Getting and assigning OIDs in schema involves the following steps: Obtain an OID from the Internet Assigned Numbers Authority (IANA) or a national organization. In some countries, corporations already have OIDs assigned to them. If your organization does not already have an OID, one can be obtained from IANA. For more information, go to the IANA website at http://www.iana.org/cgi-bin/enterprise.pl . Create an OID registry to track OID assignments. An OID registry is a list of the OIDs and descriptions of the OIDs used in the directory schema. This ensures that no OID is ever used for more than one purpose. Then publish the OID registry with the schema. Create branches in the OID tree to accommodate schema elements. Create at least two branches under the OID branch or the directory schema, using OID .1 for attributes and OID .2 for object classes. To define custom matching rules or controls, add new branches as needed ( OID .3 , for example). 3.4.3. Naming Attributes and Object Classes When creating names for new attributes and object classes, make the names as meaningful as possible. This makes the schema easier to use for Directory Server administrators. Avoid naming collisions between schema elements and existing schema elements by including a unique prefix on all schema elements. For example, Example Corp. might add the prefix example before each of their custom schema elements. They might add a special object class called examplePerson to identify Example Corp. employees in their directory. 3.4.4. 
Strategies for Defining New Object Classes There are two ways to create new object classes: Create many new object classes, one for each object class structure to which to add an attribute. Create a single object class that supports all of the custom attributes created for the directory. This kind of object class is created by defining it as an auxiliary object class. It may be easiest to mix the two methods. For example, suppose an administrator wants to create the attributes exampleDateOfBirth , examplePreferredOS , exampleBuildingFloor , and exampleVicePresident . A simple solution is to create several object classes that allow some subset of these attributes. One object class, examplePerson , is created and allows exampleDateOfBirth and examplePreferredOS . The parent of examplePerson is inetOrgPerson . A second object class, exampleOrganization , allows exampleBuildingFloor and exampleVicePresident . The parent of exampleOrganization is the organization object class. The new object classes appear in LDAPv3 schema format as follows: Alternatively, create a single object class that allows all of these attributes and use it with any entry which needs these attributes. The single object class appears as follows: The new exampleEntry object class is marked AUXILIARY , meaning that it can be used with any entry regardless of its structural object class. Note The OID of the new object classes in the example ( 2.16.840.1.117370 ) is based on the former Netscape OID prefix. To create custom object classes, obtain an OID as described in Section 3.4.2, "Getting and Assigning Object Identifiers" . There are several different ways to organize new object classes, depending on the organization environment. Consider the following when deciding how to implement new object classes: Multiple object classes result in more schema elements to create and maintain. Generally, the number of elements remains small and needs little maintenance. However, it may be easier to use a single object class if there are more than two or three object classes added to the schema. Multiple object classes require a more careful and rigid data design. Rigid data design forces attention to the object class structure under which every piece of data is placed, which can be either helpful or cumbersome. Single object classes simplify data design when there is data that can be applied to more than one type of object class, such as both people and asset entries. For example, a custom preferredOS attribute may be set on both a person and a group entry. A single object class can allow this attribute on both types of entries. Avoid required attributes for new object classes. Specifying require instead of allow for attributes in new object classes can make the schema inflexible. When creating a new object class, use allow rather than require as much as possible. After defining a new object class, decide what attributes it allows and requires, and from what object classes it inherits attributes. 3.4.5. Strategies for Defining New Attributes For both application compatibility and long-term maintenance, try to use standard attributes whenever possible. Search the attributes that already exist in the default directory schema and use them in association with a new object class or check out the Directory Server Schema Guide . However, if the standard schema does not contain all the information you need, then add new attributes and new object classes. 
For example, a person entry may need more attributes than the person , organizationalPerson , or inetOrgPerson object classes support by default. As an example, no attribute exists within the standard Directory Server schema to store birth dates. A new attribute, dateOfBirth , can be created and set as an allowed attribute within a new auxiliary object class, examplePerson . One important thing to remember: Never add custom attributes to, or delete attributes from, standard schema elements. If the directory requires custom attributes, add custom object classes to contain them. 3.4.6. Deleting Schema Elements Do not delete the schema elements included by default with Directory Server. Unused schema elements represent no operational or administrative overhead. Deleting parts of the standard LDAP schema can cause compatibility problems with future installations of Directory Server and other directory-enabled applications. However, unused custom schema elements can be deleted. Before removing the object class definitions from the schema, modify each entry using the object class. Removing the definition first might prevent the entries that use the object class from being modified later. Schema checks on modified entries also fail unless the unknown object class values are removed from the entry. 3.4.7. Creating Custom Schema Files Administrators can create custom schema files for the Directory Server to use, in addition to the 99user.ldif file provided with Directory Server. These schema files hold new, custom attributes and object classes that are specific to the organization. The new schema files should be located in the schema directory, /etc/dirsrv/slapd- instance_name /schema/ . All standard attributes and object classes are loaded only after custom schema elements have been loaded. Note Custom schema files should not be numerically or alphabetically higher than 99user.ldif or the server could experience problems. After creating custom schema files, there are two ways for the schema changes to be distributed among all servers: Manually copy these custom schema files to the instance's schema directory, /etc/dirsrv/slapd- instance /schema . To load the schema, restart the server or reload the schema dynamically by running the schema-reload.pl script. Modify the schema on the server with an LDAP client such as the web console or ldapmodify . If the server is replicated, then allow the replication process to copy the schema information to each of the consumer servers. With replication, all of the replicated schema elements are copied into the consumer servers' 99user.ldif file. To keep the schema in a custom schema file, like 90example_schema.ldif , the file has to be copied over to the consumer server manually. Replication does not copy schema files. If these custom schema files are not copied to all of the servers, the schema information is only replicated to the replica (consumer server) when changes are made to the schema on the supplier server using an LDAP client such as the web console or ldapmodify . When the schema definitions are replicated to a consumer server where they do not already exist, they are stored in the 99user.ldif file. The directory does not track where schema definitions are stored. Storing schema elements in the 99user.ldif file of consumers does not create a problem as long as the schema is maintained on the supplier server only. If the custom schema files are copied to each server, changes to the schema files must be copied again to each server.
If the files are not copied over again, it is possible the changes will be replicated and stored in the 99user.ldif file on the consumer. Having the changes in the 99user.ldif file may make schema management difficult, as some attributes will appear in two separate schema files on a consumer, once in the original custom schema file copied from the supplier and again in the 99user.ldif file after replication. For more information about replicating schema, see Section 7.4.4, "Schema Replication" . 3.4.8. Custom Schema Best Practices When using schema files, be sure to create schema which will be compatible and easy to manage. 3.4.8.1. Naming Schema Files When naming custom schema files, use the following naming format: Name custom schema files lower (numerically and alphabetically) than 99user.ldif . This lets Directory Server write to 99user.ldif , both through LDAP tools and the web console. The 99user.ldif file contains attributes with an X-ORIGIN value of 'user defined' ; however, the Directory Server writes all 'user defined' schema elements to the highest named file, numerically then alphabetically. If there is a schema file called 99zzz.ldif , the next time the schema is updated (either through LDAP command-line tools or the web console) all of the attributes with an X-ORIGIN value of 'user defined' are written to 99zzz.ldif . The result is two LDIF files that contain duplicate information, and some information in the 99zzz.ldif file might be erased. 3.4.8.2. Using 'user defined' as the Origin Do not use 'user defined' in the X-ORIGIN field of custom schema files (such as 60example.ldif ), because 'user defined' is used internally by the Directory Server when a schema is added over LDAP. In custom schema files, use something more descriptive, such as 'Example Corp. defined' . However, if the custom schema elements are added directly to the 99user.ldif manually, use 'user defined' as the value of X-ORIGIN . If a different X-ORIGIN value is set, the server may simply overwrite it. Using an X-ORIGIN value of 'user defined' ensures that schema definitions in the 99user.ldif file are not removed from the file by the Directory Server. The Directory Server does not remove them because it relies on an X-ORIGIN value of 'user defined' to tell it what elements should reside in the 99user.ldif file. For example: After the Directory Server loads the schema entry, it appears as follows: 3.4.8.3. Defining Attributes before Object Classes When adding new schema elements, all attributes need to be defined before they can be used in an object class. Attributes and object classes can be defined in the same schema file. 3.4.8.4. Defining Schema in a Single File Each custom attribute or object class should be defined in only one schema file. This prevents the server from overriding any definitions when it loads the most recently created schema (as the server loads the schema in numerical order first, then alphabetical order). Decide how to keep from having schema in duplicate files: Be careful with what schema elements are included in each schema file. Be careful in naming and updating the schema files. When schema elements are edited through LDAP tools, the changes are automatically written to the last file (alphabetically). Most schema changes, then, write to the default file 99user.ldif and not to the custom schema file, such as 60example.ldif . Also, the schema elements in 99user.ldif override duplicate elements in other schema files. Add all the schema definitions to the 99user.ldif file.
This is useful if you are managing the schema through the web console.
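For reference, the following is a minimal sketch of what a custom schema file such as 60example.ldif might contain when it follows these best practices; the attribute and object class names are hypothetical, the OIDs use the name-oid placeholder convention shown in the examples above, and the X-ORIGIN uses a descriptive value rather than the reserved 'user defined' . Note that the attribute is defined before the object class that allows it:
# 60example.ldif - sorts lower than 99user.ldif, so it is safe for custom elements
dn: cn=schema
attributetypes: ( exampleFaxExtension-oid NAME 'exampleFaxExtension' DESC 'Example Corp. fax extension' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 X-ORIGIN 'Example Corp. defined' )
objectclasses: ( exampleFacilityPerson-oid NAME 'exampleFacilityPerson' DESC 'Example Corp. facilities person' SUP top AUXILIARY MAY ( exampleFaxExtension ) X-ORIGIN 'Example Corp. defined' )
After copying such a file into the /etc/dirsrv/slapd- instance_name /schema/ directory, restart the instance or reload the schema dynamically with the schema-reload.pl script mentioned earlier so that the new elements become available.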
[ "objectclasses: ( 2.16.840.1.117370.999.1.2.3 NAME 'examplePerson' DESC 'Example Person Object Class' SUP inetorgPerson MAY (exampleDateOfBirth USD examplePreferredOS) ) objectclasses: ( 2.16.840.1.117370.999.1.2.4 NAME 'exampleOrganization' DESC 'Organization Object Class' SUP organization MAY (exampleBuildingFloor USD exampleVicePresident) )", "objectclasses: (2.16.840.1.117370.999.1.2.5 NAME 'exampleEntry' DESC 'Standard Entry Object Class' SUP top AUXILIARY MAY (exampleDateOfBirth USD examplePreferredOS USD exampleBuildingFloor USD exampleVicePresident) )", "attributetypes: ( dateofbirth-oid NAME 'dateofbirth' DESC 'For employee birthdays' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 X-ORIGIN 'Example defined') objectclasses: ( 2.16.840.1.117370.999.1.2.3 NAME 'examplePerson' DESC 'Example Person Object Class' SUP inetorgPerson MAY (exampleDateOfBirth USD cn) X-ORIGIN 'Example defined')", "[00-99] yourName .ldif", "attributetypes: ( exampleContact-oid NAME 'exampleContact' DESC 'Example Corporate contact' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 X-ORIGIN 'Example defined')", "attributetypes: ( exampleContact-oid NAME 'exampleContact' DESC 'Example Corporate contact' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 X-ORIGIN ('Example defined' 'user defined') )" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_the_directory_schema-customizing_the_schema
Chapter 3. Group Management
Chapter 3. Group Management 3.1. Manage Keystone Groups 3.1.1. Using the Command-line You can use Identity Service (keystone) groups to assign consistent permissions to multiple user accounts. This example creates a group and then assigns permissions to the group. As a result, members of the group will inherit the same permissions that were assigned to the group: Note The openstack group subcommands require keystone v3 . Create the group grp-Auditors : View a list of keystone groups: Grant the grp-Auditors group permission to access the demo project, while using the _member_ role: Add the existing user user1 to the grp-Auditors group: Confirm that user1 is a member of grp-Auditors : Review the effective permissions that have been assigned to user1 : 3.1.2. Using Dashboard You can use the dashboard to manage the membership of keystone groups. You will need to use the command-line to assign role permissions to a group, as covered in the previous example. 3.1.2.1. Create a Group As an admin user in the dashboard, select Identity > Groups . Click +Create Group . Enter a name and description for the group. Click Create Group . 3.1.2.2. Manage Group Membership You can use the dashboard to manage the membership of keystone groups. As an admin user in the dashboard, select Identity > Groups . Click Manage Members for the group you need to edit. Use Add users to add a user to the group. If you need to remove a user, mark its checkbox and click Remove users .
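The corresponding removal operations can also be performed from the command line. The following sketch assumes the grp-Auditors group, demo project, and user1 account used in the example above; the first command removes the user from the group, and the second revokes the group's role assignment on the project:
openstack group remove user grp-Auditors user1
openstack role remove _member_ --group grp-Auditors --project demo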
[ "openstack group create grp-Auditors +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | default | | id | 2a4856fc242142a4aa7c02d28edfdfff | | name | grp-Auditors | +-------------+----------------------------------+", "openstack group list --long +----------------------------------+--------------+-----------+-------------+ | ID | Name | Domain ID | Description | +----------------------------------+--------------+-----------+-------------+ | 2a4856fc242142a4aa7c02d28edfdfff | grp-Auditors | default | | +----------------------------------+--------------+-----------+-------------+", "openstack role add _member_ --group grp-Auditors --project demo", "openstack group add user grp-Auditors user1 user1 added to group grp-Auditors", "openstack group contains user grp-Auditors user1 user1 in group grp-Auditors", "openstack role assignment list --effective --user user1 +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | Role | User | Group | Project | Domain | Inherited | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | 9fe2ff9ee4384b1894a90878d3e92bab | 3fefe5b4f6c948e6959d1feaef4822f2 | | 0ce36252e2fb4ea8983bed2a568fa832 | | False | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/users_and_identity_management_guide/group_management
Chapter 5. ConsoleNotification [console.openshift.io/v1]
Chapter 5. ConsoleNotification [console.openshift.io/v1] Description ConsoleNotification is the extension for configuring openshift web console notifications. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleNotificationSpec is the desired console notification configuration. 5.1.1. .spec Description ConsoleNotificationSpec is the desired console notification configuration. Type object Required text Property Type Description backgroundColor string backgroundColor is the color of the background for the notification as CSS data type color. color string color is the color of the text for the notification as CSS data type color. link object link is an object that holds notification link details. location string location is the location of the notification in the console. Valid values are: "BannerTop", "BannerBottom", "BannerTopBottom". text string text is the visible text of the notification. 5.1.2. .spec.link Description link is an object that holds notification link details. Type object Required href text Property Type Description href string href is the absolute secure URL for the link (must use https) text string text is the display text for the link 5.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consolenotifications DELETE : delete collection of ConsoleNotification GET : list objects of kind ConsoleNotification POST : create a ConsoleNotification /apis/console.openshift.io/v1/consolenotifications/{name} DELETE : delete a ConsoleNotification GET : read the specified ConsoleNotification PATCH : partially update the specified ConsoleNotification PUT : replace the specified ConsoleNotification /apis/console.openshift.io/v1/consolenotifications/{name}/status GET : read status of the specified ConsoleNotification PATCH : partially update status of the specified ConsoleNotification PUT : replace status of the specified ConsoleNotification 5.2.1. /apis/console.openshift.io/v1/consolenotifications HTTP method DELETE Description delete collection of ConsoleNotification Table 5.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleNotification Table 5.2. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotificationList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleNotification Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. Body parameters Parameter Type Description body ConsoleNotification schema Table 5.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 201 - Created ConsoleNotification schema 202 - Accepted ConsoleNotification schema 401 - Unauthorized Empty 5.2.2. /apis/console.openshift.io/v1/consolenotifications/{name} Table 5.6. Global path parameters Parameter Type Description name string name of the ConsoleNotification HTTP method DELETE Description delete a ConsoleNotification Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleNotification Table 5.9. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleNotification Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleNotification Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body ConsoleNotification schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 201 - Created ConsoleNotification schema 401 - Unauthorized Empty 5.2.3. /apis/console.openshift.io/v1/consolenotifications/{name}/status Table 5.15. Global path parameters Parameter Type Description name string name of the ConsoleNotification HTTP method GET Description read status of the specified ConsoleNotification Table 5.16. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ConsoleNotification Table 5.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.18. 
HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ConsoleNotification Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body ConsoleNotification schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 201 - Created ConsoleNotification schema 401 - Unauthorized Empty
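To illustrate the spec described above, the following is a minimal manifest sketch; the resource name, text, colors, and link values are placeholders and can be adjusted as needed. The resource can be created with oc apply -f <file> on a cluster that exposes this API:
apiVersion: console.openshift.io/v1
kind: ConsoleNotification
metadata:
  name: example-banner
spec:
  text: Scheduled maintenance this weekend
  location: BannerTop
  color: '#ffffff'
  backgroundColor: '#0088ce'
  link:
    href: 'https://status.example.com'
    text: More details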
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/console_apis/consolenotification-console-openshift-io-v1
30.2.3. Configuring for Power Systems Servers
30.2.3. Configuring for Power Systems Servers If tftp-server is not yet installed, run yum install tftp-server . In the tftp-server config file at /etc/xinetd.d/tftp , change the disabled parameter from yes to no . Configure your DHCP server to use the boot images packaged with yaboot . (If you do not have a DHCP server installed, refer to the DHCP Servers chapter in the Red Hat Enterprise Linux Deployment Guide .) A sample configuration in /etc/dhcp/dhcpd.conf might look like: You now need the yaboot binary file from the yaboot package in the ISO image file. To access it, run the following commands as root: Extract the package: Create a yaboot directory within tftpboot and copy the yaboot binary file into it: Add a config file named yaboot.conf to this directory. A sample config file might look like: For instructions on how to specify the installation source, refer to Section 7.1.3, "Additional Boot Options" Copy the boot images from the extracted ISO into your tftp root directory: Clean up by removing the yaboot-unpack directory and unmounting the ISO: Boot the client system, and select the network device as your boot device when prompted.
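As a sketch of the first two steps, the relevant line in /etc/xinetd.d/tftp is changed to:
disable = no
and xinetd is then restarted so the change takes effect:
service xinetd restart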
[ "host bonn { filename \"yaboot\"; next-server 10.32.5.1; hardware ethernet 00:0e:91:51:6a:26; fixed-address 10.32.5.144; }", "mkdir /publicly_available_directory/yaboot-unpack mount -t iso9660 / path_to_image/name_of_image .iso / mount_point -o loop,ro cp -pr / mount_point /Packages/yaboot- version .ppc.rpm / publicly_available_directory /yaboot-unpack", "cd / publicly_available_directory /yaboot-unpack rpm2cpio yaboot- version .ppc.rpm | cpio -dimv", "mkdir /var/lib/tftpboot/yaboot cp publicly_available_directory /yaboot-unpack/usr/lib/yaboot/yaboot /var/lib/tftpboot/yaboot", "init-message = \"\\nWelcome to the Red Hat Enterprise Linux 6 installer!\\n\\n\" timeout=60 default=rhel6 image=/rhel6/vmlinuz-RHEL6 label=linux alias=rhel6 initrd=/rhel6/initrd-RHEL6.img append=\"repo=http://10.32.5.1/mnt/archive/redhat/released/RHEL-6/6.x/Server/ppc64/os/\" read-only", "cp /mount_point/images/ppc/ppc64/vmlinuz /var/lib/tftpboot/yaboot/rhel6/vmlinuz-RHEL6 cp /mount_point/images/ppc/ppc64/initrd.img /var/lib/tftpboot/yaboot/rhel6/initrd-RHEL6.img", "rm -rf / publicly_available_directory /yaboot-unpack umount / mount_point" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-netboot-pxe-config-ppc
25.14. Configuring iSCSI Offload and Interface Binding
25.14. Configuring iSCSI Offload and Interface Binding This chapter describes how to set up iSCSI interfaces in order to bind a session to a NIC port when using software iSCSI. It also describes how to set up interfaces for use with network devices that support offloading. The network subsystem can be configured to determine the path/NIC that iSCSI interfaces should use for binding. For example, if portals and NICs are set up on different subnets, then it is not necessary to manually configure iSCSI interfaces for binding. Before attempting to configure an iSCSI interface for binding, run the following command first: If ping fails, then you will not be able to bind a session to a NIC. If this is the case, check the network settings first. 25.14.1. Viewing Available iface Configurations iSCSI offload and interface binding is supported for the following iSCSI initiator implementations: Software iSCSI This stack allocates an iSCSI host instance (that is, scsi_host ) per session, with a single connection per session. As a result, /sys/class_scsi_host and /proc/scsi will report a scsi_host for each connection/session you are logged into. Offload iSCSI This stack allocates a scsi_host for each PCI device. As such, each port on a host bus adapter will show up as a different PCI device, with a different scsi_host per HBA port. To manage both types of initiator implementations, iscsiadm uses the iface structure. With this structure, an iface configuration must be entered in /var/lib/iscsi/ifaces for each HBA port, software iSCSI, or network device ( eth X ) used to bind sessions. To view available iface configurations, run iscsiadm -m iface . This will display iface information in the following format: Refer to the following table for an explanation of each value/setting. Table 25.2. iface Settings Setting Description iface_name iface configuration name. transport_name Name of driver hardware_address MAC address ip_address IP address to use for this port net_iface_name Name used for the vlan or alias binding of a software iSCSI session. For iSCSI offloads, net_iface_name will be <empty> because this value is not persistent across reboots. initiator_name This setting is used to override a default name for the initiator, which is defined in /etc/iscsi/initiatorname.iscsi Example 25.6. Sample Output of the iscsiadm -m iface Command The following is a sample output of the iscsiadm -m iface command: For software iSCSI, each iface configuration must have a unique name (with less than 65 characters). The iface_name for network devices that support offloading appears in the format transport_name . hardware_name . Example 25.7. iscsiadm -m iface Output with a Chelsio Network Card For example, the sample output of iscsiadm -m iface on a system using a Chelsio network card might appear as: It is also possible to display the settings of a specific iface configuration in a more friendly way. To do so, use the option -I iface_name . This will display the settings in the following format: Example 25.8. Using iface Settings with a Chelsio Converged Network Adapter Using the example, the iface settings of the same Chelsio converged network adapter (i.e. iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07 ) would appear as: 25.14.2. Configuring an iface for Software iSCSI As mentioned earlier, an iface configuration is required for each network object that will be used to bind a session. 
To create an iface configuration for software iSCSI, run the following command: This will create a new empty iface configuration with a specified iface_name . If an existing iface configuration already has the same iface_name , then it will be overwritten with a new, empty one. To configure a specific setting of an iface configuration, use the following command: Example 25.9. Set MAC Address of iface0 For example, to set the MAC address ( hardware_address ) of iface0 to 00:0F:1F:92:6B:BF , run: Warning Do not use default or iser as iface names. Both strings are special values used by iscsiadm for backward compatibility. Any manually-created iface configurations named default or iser will disable backwards compatibility. 25.14.3. Configuring an iface for iSCSI Offload By default, iscsiadm creates an iface configuration for each port. To view available iface configurations, use the same command for doing so in software iSCSI: iscsiadm -m iface . Before using the iface of a network card for iSCSI offload, first set the iface.ipaddress value of the offload interface to the initiator IP address that the interface should use: For devices that use the be2iscsi driver, the IP address is configured in the BIOS setup screen. For all other devices, to configure the IP address of the iface , use: Example 25.10. Set the iface IP Address of a Chelsio Card For example, to set the iface IP address to 20.15.0.66 when using a card with the iface name of cxgb3i.00:07:43:05:97:07 , use: 25.14.4. Binding/Unbinding an iface to a Portal Whenever iscsiadm is used to scan for interconnects, it will first check the iface.transport settings of each iface configuration in /var/lib/iscsi/ifaces . The iscsiadm utility will then bind discovered portals to any iface whose iface.transport is tcp . This behavior was implemented for compatibility reasons. To override this, use the -I iface_name option to specify which portal to bind to an iface , as in: By default, the iscsiadm utility will not automatically bind any portals to iface configurations that use offloading. This is because such iface configurations will not have iface.transport set to tcp . As such, the iface configurations need to be manually bound to discovered portals. It is also possible to prevent a portal from binding to any existing iface . To do so, use default as the iface_name , as in: To remove the binding between a target and iface , use: To delete all bindings for a specific iface , use: To delete bindings for a specific portal (e.g. for Equalogic targets), use: Note If there are no iface configurations defined in /var/lib/iscsi/ifaces and the -I option is not used, iscsiadm will allow the network subsystem to decide which device a specific portal should use. [6] Refer to Section 25.15, "Scanning iSCSI Interconnects" for information on proper_target_name .
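Putting these steps together, a typical software iSCSI binding session might look like the following sketch; the MAC address, portal address, and target IQN are placeholders for values from your own environment:
iscsiadm -m iface -I iface0 --op=new
iscsiadm -m iface -I iface0 --op=update -n iface.hwaddress -v 00:0F:1F:92:6B:BF
iscsiadm -m discovery -t st -p 192.168.1.50:3260 -I iface0
iscsiadm -m node -T iqn.2015-06.com.example:target1 -p 192.168.1.50:3260 -I iface0 --login
Once the login completes, the bound session can be listed with iscsiadm -m session .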
[ "ping -I eth X target_IP", "iface_name transport_name , hardware_address , ip_address , net_ifacename , initiator_name", "iface0 qla4xxx,00:c0:dd:08:63:e8,20.15.0.7,default,iqn.2005-06.com.redhat:madmax iface1 qla4xxx,00:c0:dd:08:63:ea,20.15.0.9,default,iqn.2005-06.com.redhat:madmax", "default tcp,<empty>,<empty>,<empty>,<empty> iser iser,<empty>,<empty>,<empty>,<empty> cxgb3i.00:07:43:05:97:07 cxgb3i,00:07:43:05:97:07,<empty>,<empty>,<empty>", "iface. setting = value", "BEGIN RECORD 2.0-871 iface.iscsi_ifacename = cxgb3i.00:07:43:05:97:07 iface.net_ifacename = <empty> iface.ipaddress = <empty> iface.hwaddress = 00:07:43:05:97:07 iface.transport_name = cxgb3i iface.initiatorname = <empty> END RECORD", "iscsiadm -m iface -I iface_name --op=new", "iscsiadm -m iface -I iface_name --op=update -n iface. setting -v hw_address", "iscsiadm -m iface -I iface0 --op=update -n iface.hwaddress -v 00:0F:1F:92:6B:BF", "iscsiadm -m iface -I iface_name -o update -n iface.ipaddress -v initiator_ip_address", "iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07 -o update -n iface.ipaddress -v 20.15.0.66", "iscsiadm -m discovery -t st -p target_IP:port -I iface_name -P 1 [5]", "iscsiadm -m discovery -t st -p IP:port -I default -P 1", "iscsiadm -m node -targetname proper_target_name -I iface0 --op=delete [6]", "iscsiadm -m node -I iface_name --op=delete", "iscsiadm -m node -p IP:port -I iface_name --op=delete" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/iscsi-offload-config
Chapter 4. Connecting VM instances to physical networks
Chapter 4. Connecting VM instances to physical networks You can directly connect your VM instances to an external network using flat and VLAN provider networks. 4.1. Overview of the OpenStack Networking topology OpenStack Networking (neutron) has two categories of services distributed across a number of node types. Neutron server - This service runs the OpenStack Networking API server, which provides the API for end-users and services to interact with OpenStack Networking. This server also integrates with the underlying database to store and retrieve project network, router, and loadbalancer details, among others. Neutron agents - These are the services that perform the network functions for OpenStack Networking: neutron-dhcp-agent - manages DHCP IP addressing for project private networks. neutron-l3-agent - performs layer 3 routing between project private networks, the external network, and others. Compute node - This node hosts the hypervisor that runs the virtual machines, also known as instances. A Compute node must be wired directly to the network in order to provide external connectivity for instances. This node is typically where the l2 agents run, such as neutron-openvswitch-agent . Additional resources Section 4.2, "Placement of OpenStack Networking services" 4.2. Placement of OpenStack Networking services The OpenStack Networking services can either run together on the same physical server, or on separate dedicated servers, which are named according to their roles: Controller node - The server that runs API service. Network node - The server that runs the OpenStack Networking agents. Compute node - The hypervisor server that hosts the instances. The steps in this chapter apply to an environment that contains these three node types. If your deployment has both the Controller and Network node roles on the same physical node, then you must perform the steps from both sections on that server. This also applies for a High Availability (HA) environment, where all three nodes might be running the Controller node and Network node services with HA. As a result, you must complete the steps in sections applicable to Controller and Network nodes on all three nodes. Additional resources Section 4.1, "Overview of the OpenStack Networking topology" 4.3. Configuring flat provider networks You can use flat provider networks to connect instances directly to the external network. This is useful if you have multiple physical networks and separate physical interfaces, and intend to connect each Compute and Network node to those external networks. Prerequisites You have multiple physical networks. This example uses physical networks called physnet1 , and physnet2 , respectively. You have separate physical interfaces. This example uses separate physical interfaces, eth0 and eth1 , respectively. Procedure On the undercloud host, logged in as the stack user, create a custom YAML environment file. Example Tip The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file , which is a special type of template that provides customization for your orchestration templates. In the YAML environment file under parameter_defaults , use the NeutronBridgeMappings to specify which OVS bridges are used for accessing external networks. Example In the custom NIC configuration template for the Controller and Compute nodes, configure the bridges with interfaces attached. 
Example Run the openstack overcloud deploy command and include the templates and the environment files, including this modified custom NIC template and the new environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Verification Create an external network ( public1 ) as a flat network and associate it with the configured physical network ( physnet1 ). Configure it as a shared network (using --share ) to let other users create VM instances that connect to the external network directly. Example Create a subnet ( public_subnet ) using the openstack subnet create command. Example Create a VM instance and connect it directly to the newly-created external network. Example Additional resources Environment files in the Director Installation and Usage guide Including environment files in overcloud creation in the Director Installation and Usage guide network create in the Command Line Interface Reference subnet create in the Command Line Interface Reference server create in the Command Line Interface Reference 4.4. How does the flat provider network packet flow work? This section describes in detail how traffic flows to and from an instance with flat provider network configuration. The flow of outgoing traffic in a flat provider network The following diagram describes the packet flow for traffic leaving an instance and arriving directly at an external network. After you configure the br-ex external bridge, add the physical interface to the bridge, and spawn an instance to a Compute node, the resulting configuration of interfaces and bridges resembles the configuration in the following diagram (if using the iptables_hybrid firewall driver): Packets leave the eth0 interface of the instance and arrive at the linux bridge qbr-xx . Bridge qbr-xx is connected to br-int using veth pair qvb-xx <-> qvo-xxx . This is because the bridge is used to apply the inbound/outbound firewall rules defined by the security group. Interface qvb-xx is connected to the qbr-xx linux bridge, and qvoxx is connected to the br-int Open vSwitch (OVS) bridge. An example configuration of `qbr-xx`Linux bridge: The configuration of qvo-xx on br-int : Note Port qvo-xx is tagged with the internal VLAN tag associated with the flat provider network. In this example, the VLAN tag is 5 . When the packet reaches qvo-xx , the VLAN tag is appended to the packet header. The packet is then moved to the br-ex OVS bridge using the patch-peer int-br-ex <-> phy-br-ex . Example configuration of the patch-peer on br-int : Example configuration of the patch-peer on br-ex : When this packet reaches phy-br-ex on br-ex , an OVS flow inside br-ex strips the VLAN tag (5) and forwards it to the physical interface. In the following example, the output shows the port number of phy-br-ex as 2 . The following output shows any packet that arrives on phy-br-ex ( in_port=2 ) with a VLAN tag of 5 ( dl_vlan=5 ). In addition, an OVS flow in br-ex strips the VLAN tag and forwards the packet to the physical interface. If the physical interface is another VLAN-tagged interface, then the physical interface adds the tag to the packet. The flow of incoming traffic in a flat provider network This section contains information about the flow of incoming traffic from the external network until it arrives at the interface of the instance. Incoming traffic arrives at eth1 on the physical node. The packet passes to the br-ex bridge. 
The packet moves to br-int via the patch-peer phy-br-ex <--> int-br-ex . In the following example, int-br-ex uses port number 15 . See the entry containing 15(int-br-ex) : Observing the traffic flow on br-int When the packet arrives at int-br-ex , an OVS flow rule within the br-int bridge amends the packet to add the internal VLAN tag 5 . See the entry for actions=mod_vlan_vid:5 : The second rule manages packets that arrive on int-br-ex (in_port=15) with no VLAN tag (vlan_tci=0x0000): This rule adds VLAN tag 5 to the packet ( actions=mod_vlan_vid:5,NORMAL ) and forwards it to qvoxxx . qvoxxx accepts the packet and forwards it to qvbxx , after stripping away the VLAN tag. The packet then reaches the instance. Note VLAN tag 5 is an example VLAN that was used on a test Compute node with a flat provider network; this value was assigned automatically by neutron-openvswitch-agent . This value may be different for your own flat provider network, and can differ for the same network on two separate Compute nodes. Additional resources Section 4.5, "Troubleshooting instance-physical network connections on flat provider networks" 4.5. Troubleshooting instance-physical network connections on flat provider networks The output provided in "How does the flat provider network packet flow work?" provides sufficient debugging information for troubleshooting a flat provider network, should anything go wrong. The following steps contain further information about the troubleshooting process. Procedure Review bridge_mappings . Verify that the physical network name you use is consistent with the contents of the bridge_mapping configuration. Example In this example, the physical network name is, physnet1 . Sample output Example In this example, the contents of the bridge_mapping configuration is also, physnet1 : Sample output Review the network configuration. Confirm that the network is created as external , and uses the flat type: Example In this example, details about the network, provider-flat , is queried: Sample output Review the patch-peer. Verify that br-int and br-ex are connected using a patch-peer int-br-ex <--> phy-br-ex . Sample output Sample output Configuration of the patch-peer on br-ex : This connection is created when you restart the neutron-openvswitch-agent service, if bridge_mapping is correctly configured in /etc/neutron/plugins/ml2/openvswitch_agent.ini . Re-check the bridge_mapping setting if the connection is not created after you restart the service. Review the network flows. Run ovs-ofctl dump-flows br-ex and ovs-ofctl dump-flows br-int , and review whether the flows strip the internal VLAN IDs for outgoing packets, and add VLAN IDs for incoming packets. This flow is first added when you spawn an instance to this network on a specific Compute node. If this flow is not created after spawning the instance, verify that the network is created as flat , is external , and that the physical_network name is correct. In addition, review the bridge_mapping settings. Finally, review the ifcfg-br-ex and ifcfg-ethx configuration. Ensure that ethX is added as a port within br-ex , and that ifcfg-br-ex and ifcfg-ethx have an UP flag in the output of ip a . Sample output The following output shows eth1 is a port in br-ex : Example The following example demonstrates that eth1 is configured as an OVS port, and that the kernel knows to transfer all packets from the interface, and send them to the OVS bridge br-ex . This can be observed in the entry, master ovs-system . 
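One way to perform this final check, assuming eth1 is the physical interface in use, is with the following commands; the port listing should include eth1 , and both interfaces should report state UP in the ip a output:
ovs-vsctl list-ports br-ex
ip a show br-ex
ip a show eth1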
Additional resources Section 4.4, "How does the flat provider network packet flow work?" Configuring bridge mappings 4.6. Configuring VLAN provider networks When you connect multiple VLAN-tagged interfaces on a single NIC to multiple provider networks, these new VLAN provider networks can connect VM instances directly to external networks. Prerequisites You have a physical network, with a range of VLANs. This example uses a physical network called physnet1 , with a range of VLANs, 171-172 . Your Network nodes and Compute nodes are connected to a physical network using a physical interface. This example uses Network nodes and Compute nodes that are connected to a physical network, physnet1 , using a physical interface, eth1 . The switch ports that these interfaces connect to must be configured to trunk the required VLAN ranges. Procedure On the undercloud host, logged in as the stack user, create a custom YAML environment file. Example Tip The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called templates to install and configure your environment. You can customize aspects of the overcloud with a custom environment file , which is a special type of template that provides customization for your orchestration templates. In the YAML environment file under parameter_defaults , use NeutronTypeDrivers to specify your network type drivers. Example Configure the NeutronNetworkVLANRanges setting to reflect the physical network and VLAN ranges in use: Example Create an external network bridge ( br-ex ), and associate a port ( eth1 ) with it. This example configures eth1 to use br-ex : Example Run the openstack overcloud deploy command and include the core templates and the environment files, including this new environment file. Important The order of the environment files is important because the parameters and resources defined in subsequent environment files take precedence. Example Verification Create the external networks as type vlan , and associate them with the configured physical_network . Run the following example command to create two networks: one for VLAN 171, and another for VLAN 172: Example Create a number of subnets and configure them to use the external network. You can use either openstack subnet create or the dashboard to create these subnets. Ensure that the external subnet details you have received from your network administrator are correctly associated with each VLAN. In this example, VLAN 171 uses subnet 10.65.217.0/24 and VLAN 172 uses 10.65.218.0/24 : Example Additional resources Custom network interface templates in the Director Installation and Usage guide Environment files in the Director Installation and Usage guide Including environment files in overcloud creation in the Director Installation and Usage guide network create in the Command Line Interface Reference subnet create in the Command Line Interface Reference 4.7. How does the VLAN provider network packet flow work? This section describes in detail how traffic flows to and from an instance with VLAN provider network configuration. The flow of outgoing traffic in a VLAN provider network The following diagram describes the packet flow for traffic leaving an instance and arriving directly to a VLAN provider external network. This example uses two instances attached to the two VLAN networks (171 and 172). 
After you configure br-ex , add a physical interface to it, and spawn an instance to a Compute node, the resulting configuration of interfaces and bridges resembles the configuration in the following diagram: Packets leaving the eth0 interface of the instance arrive at the linux bridge qbr-xx connected to the instance. qbr-xx is connected to br-int using veth pair qvbxx <-> qvoxxx . qvbxx is connected to the linux bridge qbr-xx and qvoxx is connected to the Open vSwitch bridge br-int . Example configuration of qbr-xx on the Linux bridge. This example features two instances and two corresponding linux bridges: The configuration of qvoxx on br-int : qvoxx is tagged with the internal VLAN tag associated with the VLAN provider network. In this example, the internal VLAN tag 2 is associated with the VLAN provider network provider-171 and VLAN tag 3 is associated with VLAN provider network provider-172 . When the packet reaches qvoxx , this VLAN tag is added to the packet header. The packet is then moved to the br-ex OVS bridge using patch-peer int-br-ex <-> phy-br-ex . Example patch-peer on br-int : Example configuration of the patch peer on br-ex : When this packet reaches phy-br-ex on br-ex , an OVS flow inside br-ex replaces the internal VLAN tag with the actual VLAN tag associated with the VLAN provider network. The output of the following command shows that the port number of phy-br-ex is 4 : The following command shows any packet that arrives on phy-br-ex ( in_port=4 ) which has VLAN tag 2 ( dl_vlan=2 ). Open vSwitch replaces the VLAN tag with 171 ( actions=mod_vlan_vid:171,NORMAL ) and forwards the packet to the physical interface. The command also shows any packet that arrives on phy-br-ex ( in_port=4 ) which has VLAN tag 3 ( dl_vlan=3 ). Open vSwitch replaces the VLAN tag with 172 ( actions=mod_vlan_vid:172,NORMAL ) and forwards the packet to the physical interface. The neutron-openvswitch-agent adds these rules. This packet is then forwarded to physical interface eth1 . The flow of incoming traffic in a VLAN provider network The following example flow was tested on a Compute node using VLAN tag 2 for provider network provider-171 and VLAN tag 3 for provider network provider-172. The flow uses port 18 on the integration bridge br-int. Your VLAN provider network may require a different configuration. Also, the configuration requirement for a network may differ between two different Compute nodes. The output of the following command shows int-br-ex with port number 18: The output of the following command shows the flow rules on br-int. Incoming flow example This example demonstrates the following br-int OVS flow: A packet with VLAN tag 172 from the external network reaches the br-ex bridge via eth1 on the physical node. The packet moves to br-int via the patch-peer phy-br-ex <-> int-br-ex . The packet matches the flow's criteria ( in_port=18,dl_vlan=172 ). The flow actions ( actions=mod_vlan_vid:3,NORMAL ) replace the VLAN tag 172 with internal VLAN tag 3 and forward the packet to the instance with normal Layer 2 processing. Additional resources Section 4.4, "How does the flat provider network packet flow work?" 4.8. Troubleshooting instance-physical network connections on VLAN provider networks Refer to the packet flow described in "How does the VLAN provider network packet flow work?" when troubleshooting connectivity in a VLAN provider network.
In addition, review the following configuration options: Procedure Verify that physical network name used in the bridge_mapping configuration matches the physical network name. Example Sample output Example Sample output In this sample output, the physical network name, physnet1 , matches the name used in the bridge_mapping configuration: Confirm that the network was created as external , is type vlan , and uses the correct segmentation_id value: Example Sample output Review the patch-peer. Verify that br-int and br-ex are connected using a patch-peer int-br-ex <--> phy-br-ex . This connection is created while restarting neutron-openvswitch-agent , provided that the bridge_mapping is correctly configured in /etc/neutron/plugins/ml2/openvswitch_agent.ini . Recheck the bridge_mapping setting if this is not created even after restarting the service. Review the network flows. To review the flow of outgoing packets, run ovs-ofctl dump-flows br-ex and ovs-ofctl dump-flows br-int , and verify that the flows map the internal VLAN IDs to the external VLAN ID ( segmentation_id ). For incoming packets, map the external VLAN ID to the internal VLAN ID. This flow is added by the neutron OVS agent when you spawn an instance to this network for the first time. If this flow is not created after spawning the instance, ensure that the network is created as vlan , is external , and that the physical_network name is correct. In addition, re-check the bridge_mapping settings. Finally, re-check the ifcfg-br-ex and ifcfg-ethx configuration. Ensure that br-ex includes port ethX , and that both ifcfg-br-ex and ifcfg-ethx have an UP flag in the output of the ip a command. Example In this sample output, eth1 is a port in br-ex : Example Sample output In this sample output, eth1 has been added as a port, and that the kernel is configured to move all packets from the interface to the OVS bridge br-ex . This is demonstrated by the entry, master ovs-system . Additional resources Section 4.7, "How does the VLAN provider network packet flow work?" 4.9. Enabling multicast snooping for provider networks in an ML2/OVS deployment To prevent flooding multicast packets to every port in a Red Hat OpenStack Platform (RHOSP) provider network, you must enable multicast snooping. In RHOSP deployments that use the Modular Layer 2 plug-in with the Open vSwitch mechanism driver (ML2/OVS), you do this by declaring the RHOSP Orchestration (heat) NeutronEnableIgmpSnooping parameter in a YAML-formatted environment file. Important You should thoroughly test and understand any multicast snooping configuration before applying it to a production environment. Misconfiguration can break multicasting or cause erratic network behavior. Prerequisites Your configuration must only use ML2/OVS provider networks. Your physical routers must also have IGMP snooping enabled. That is, the physical router must send IGMP query packets on the provider network to solicit regular IGMP reports from multicast group members to maintain the snooping cache in OVS (and for physical networking). An RHOSP Networking service security group rule must be in place to allow inbound IGMP to the VM instances (or port security disabled). In this example, a rule is created for the ping_ssh security group: Example Procedure On the undercloud host, logged in as the stack user, create a custom YAML environment file. Example Tip The Orchestration service (heat) uses a set of plans called templates to install and configure your environment. 
You can customize aspects of the overcloud with a custom environment file, which is a special type of template that provides customization for your heat templates. In the YAML environment file under parameter_defaults , set NeutronEnableIgmpSnooping to true. Important Ensure that you add a whitespace character between the colon (:) and true . Run the openstack overcloud deploy command and include the core heat templates, environment files, and this new custom environment file. Important The order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence. Example Verification Verify that the multicast snooping is enabled. Example Sample output Additional resources Neutron in Component, Plug-In, and Driver Support in Red Hat OpenStack Platform Environment files in the Director Installation and Usage guide Including environment files in overcloud creation in the Director Installation and Usage guide Networking (neutron) Parameters in the Overcloud Parameters guide Creating a security group in the Creating and Managing Instances guide 4.10. Enabling multicast in an ML2/OVN deployment To support multicast traffic, modify the deployment's security configuration to allow multicast traffic to reach the virtual machine (VM) instances in the multicast group. To prevent multicast traffic flooding, enable IGMP snooping. Important Test and understand any multicast snooping configuration before applying it to a production environment. Misconfiguration can break multicasting or cause erratic network behavior. Prerequisites An OpenStack deployment with the ML2/OVN mechanism driver. Procedure Configure security to allow multicast traffic to the appropriate VM instances. For instance, create a pair of security group rules to allow IGMP traffic from the IGMP querier to enter and exit the VM instances, and a third rule to allow multicast traffic. Example A security group mySG allows IGMP traffic to enter and exit the VM instances. Another rule allows multicast traffic to reach VM instances. As an alternative to setting security group rules, some operators choose to selectively disable port security on the network. If you choose to disable port security, consider and plan for any related security risks. Set the heat parameter NeutronEnableIgmpSnooping: True in an environment file on the undercloud node. For instance, add the following lines to ovn-extras.yaml. Example Include the environment file in the openstack overcloud deploy command with any other environment files that are relevant to your environment and deploy the overcloud. Replace <other_overcloud_environment_files> with the list of environment files that are part of your existing deployment. Verification Verify that the multicast snooping is enabled. List the northbound database Logical_Switch table. Sample output The Networking Service (neutron) igmp_snooping_enable configuration is translated into the mcast_snoop option set in the other_config column of the Logical_Switch table in the OVN Northbound Database. Note that mcast_flood_unregistered is always "false". Show the IGMP groups. Sample output Additional resources Neutron in Component, Plug-In, and Driver Support in Red Hat OpenStack Platform Environment files in the Director Installation and Usage guide Including environment files in overcloud creation in the Director Installation and Usage guide 4.11. 
Enabling Compute metadata access Instances connected as described in this chapter are directly attached to the provider external networks, and have external routers configured as their default gateway. No OpenStack Networking (neutron) routers are used. This means that neutron routers cannot be used to proxy metadata requests from instances to the nova-metadata server, which may result in failures while running cloud-init . However, this issue can be resolved by configuring the DHCP agent to proxy metadata requests. You can enable this functionality in /etc/neutron/dhcp_agent.ini . For example: 4.12. Floating IP addresses You can use the same network to allocate floating IP addresses to instances, even if the floating IPs are already associated with private networks. The addresses that you allocate as floating IPs from this network are bound to the qrouter-xxx namespace on the Network node, and perform DNAT-SNAT to the associated private IP address. In contrast, the IP addresses that you allocate for direct external network access are bound directly inside the instance, and allow the instance to communicate directly with the external network.
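To make the metadata proxy change concrete, the following is a minimal sketch of the relevant /etc/neutron/dhcp_agent.ini settings. The enable_isolated_metadata option is the one this section describes; the force_metadata option is an additional, commonly used setting included here only as an assumption, so confirm it against your RHOSP version before relying on it.

```ini
# /etc/neutron/dhcp_agent.ini (sketch)
[DEFAULT]
# Serve metadata through the DHCP namespace for networks that have no neutron router.
enable_isolated_metadata = True
# Assumption: always proxy metadata on networks managed by this DHCP agent.
force_metadata = True
```

After editing the file, restart the DHCP agent in the way appropriate for your deployment so that the new settings take effect.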
[ "vi /home/stack/templates/my-modules-environment.yaml", "parameter_defaults: NeutronBridgeMappings: 'physnet1:br-net1,physnet2:br-net2'", "- type: ovs_bridge name: br-net1 mtu: 1500 use_dhcp: false members: - type: interface name: eth0 mtu: 1500 use_dhcp: false primary: true - type: ovs_bridge name: br-net2 mtu: 1500 use_dhcp: false members: - type: interface name: eth1 mtu: 1500 use_dhcp: false primary: true", "openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-neutron-environment.yaml", "openstack network create --share --provider-network-type flat --provider-physical-network physnet1 --external public01", "openstack subnet create --no-dhcp --allocation-pool start=192.168.100.20,end=192.168.100.100 --gateway 192.168.100.1 --network public01 public_subnet", "openstack server create --image rhel --flavor my_flavor --network public01 my_instance", "brctl show qbr269d4d73-e7 8000.061943266ebb no qvb269d4d73-e7 tap269d4d73-e7", "ovs-vsctl show Bridge br-int fail_mode: secure Interface \"qvof63599ba-8f\" Port \"qvo269d4d73-e7\" tag: 5 Interface \"qvo269d4d73-e7\"", "ovs-vsctl show Bridge br-int fail_mode: secure Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex}", "Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port br-ex Interface br-ex type: internal", "ovs-ofctl show br-ex OFPT_FEATURES_REPLY (xid=0x2): dpid:00003440b5c90dc6 n_tables:254, n_buffers:256 capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE 2(phy-br-ex): addr:ba:b5:7b:ae:5c:a2 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max", "ovs-ofctl dump-flows br-ex NXST_FLOW reply (xid=0x4): cookie=0x0, duration=4703.491s, table=0, n_packets=3620, n_bytes=333744, idle_age=0, priority=1 actions=NORMAL cookie=0x0, duration=3890.038s, table=0, n_packets=13, n_bytes=1714, idle_age=3764, priority=4,in_port=2,dl_vlan=5 actions=strip_vlan,NORMAL cookie=0x0, duration=4702.644s, table=0, n_packets=10650, n_bytes=447632, idle_age=0, priority=2,in_port=2 actions=drop", "ovs-ofctl show br-int OFPT_FEATURES_REPLY (xid=0x2): dpid:00004e67212f644d n_tables:254, n_buffers:256 capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE 15(int-br-ex): addr:12:4e:44:a9:50:f4 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max", "ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=5351.536s, table=0, n_packets=12118, n_bytes=510456, idle_age=0, priority=1 actions=NORMAL cookie=0x0, duration=4537.553s, table=0, n_packets=3489, n_bytes=321696, idle_age=0, priority=3,in_port=15,vlan_tci=0x0000 actions=mod_vlan_vid:5,NORMAL cookie=0x0, duration=5350.365s, table=0, n_packets=628, n_bytes=57892, idle_age=4538, priority=2,in_port=15 actions=drop cookie=0x0, duration=5351.432s, table=23, n_packets=0, n_bytes=0, idle_age=5351, priority=0 actions=drop", "openstack network show provider-flat", "| provider:physical_network | physnet1", "grep bridge_mapping /etc/neutron/plugins/ml2/openvswitch_agent.ini", "bridge_mappings = physnet1:br-ex", "openstack network show provider-flat", "| provider:network_type | flat | | router:external | True |", "ovs-vsctl show", "Bridge br-int fail_mode: secure Port 
int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex}", "Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port br-ex Interface br-ex type: internal", "Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port \"eth1\" Interface \"eth1\"", "ip a 5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000", "vi /home/stack/templates/my-modules-environment.yaml", "parameter_defaults: NeutronTypeDrivers: vxlan,flat,vlan", "parameter_defaults: NeutronTypeDrivers: 'vxlan,flat,vlan' NeutronNetworkVLANRanges: 'physnet1:171:172'", "parameter_defaults: NeutronTypeDrivers: 'vxlan,flat,vlan' NeutronNetworkVLANRanges: 'physnet1:171:172' NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-int'", "openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-neutron-environment.yaml", "openstack network create --provider-network-type vlan --provider-physical-network physnet1 --provider-segment 171 provider-vlan171 openstack network create --provider-network-type vlan --provider-physical-network physnet1 --provider-segment 172 provider-vlan172", "openstack subnet create --network provider-171 --subnet-range 10.65.217.0/24 --dhcp --gateway 10.65.217.254 subnet-provider-171 openstack subnet create --network provider-172 --subnet-range 10.65.218.0/24 --dhcp --gateway 10.65.218.254 subnet-provider-172", "brctl show bridge name bridge id STP enabled interfaces qbr84878b78-63 8000.e6b3df9451e0 no qvb84878b78-63 tap84878b78-63 qbr86257b61-5d 8000.3a3c888eeae6 no qvb86257b61-5d tap86257b61-5d", "options: {peer=phy-br-ex} Port \"qvo86257b61-5d\" tag: 3 Interface \"qvo86257b61-5d\" Port \"qvo84878b78-63\" tag: 2 Interface \"qvo84878b78-63\"", "Bridge br-int fail_mode: secure Port int-br-ex Interface int-br-ex type: patch options: {peer=phy-br-ex}", "Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port br-ex Interface br-ex type: internal", "ovs-ofctl show br-ex 4(phy-br-ex): addr:32:e7:a1:6b:90:3e config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max", "ovs-ofctl dump-flows br-ex NXST_FLOW reply (xid=0x4): NXST_FLOW reply (xid=0x4): cookie=0x0, duration=6527.527s, table=0, n_packets=29211, n_bytes=2725576, idle_age=0, priority=1 actions=NORMAL cookie=0x0, duration=2939.172s, table=0, n_packets=117, n_bytes=8296, idle_age=58, priority=4,in_port=4,dl_vlan=3 actions=mod_vlan_vid:172,NORMAL cookie=0x0, duration=6111.389s, table=0, n_packets=145, n_bytes=9368, idle_age=98, priority=4,in_port=4,dl_vlan=2 actions=mod_vlan_vid:171,NORMAL cookie=0x0, duration=6526.675s, table=0, n_packets=82, n_bytes=6700, idle_age=2462, priority=2,in_port=4 actions=drop", "ovs-ofctl show br-int 18(int-br-ex): addr:fe:b7:cb:03:c5:c1 config: 0 state: 0 speed: 0 Mbps now, 0 Mbps max", "ovs-ofctl dump-flows br-int NXST_FLOW reply (xid=0x4): cookie=0x0, duration=6770.572s, table=0, n_packets=1239, n_bytes=127795, idle_age=106, priority=1 actions=NORMAL cookie=0x0, duration=3181.679s, table=0, n_packets=2605, n_bytes=246456, idle_age=0, priority=3,in_port=18,dl_vlan=172 actions=mod_vlan_vid:3,NORMAL cookie=0x0, duration=6353.898s, table=0, n_packets=5077, n_bytes=482582, idle_age=0, priority=3,in_port=18,dl_vlan=171 actions=mod_vlan_vid:2,NORMAL cookie=0x0, duration=6769.391s, table=0, n_packets=22301, n_bytes=2013101, idle_age=0, priority=2,in_port=18 actions=drop cookie=0x0, duration=6770.463s, table=23, n_packets=0, 
n_bytes=0, idle_age=6770, priority=0 actions=drop", "cookie=0x0, duration=3181.679s, table=0, n_packets=2605, n_bytes=246456, idle_age=0, priority=3,in_port=18,dl_vlan=172 actions=mod_vlan_vid:3,NORMAL", "openstack network show provider-vlan171", "| provider:physical_network | physnet1", "grep bridge_mapping /etc/neutron/plugins/ml2/openvswitch_agent.ini", "bridge_mappings = physnet1:br-ex", "openstack network show provider-vlan171", "| provider:network_type | vlan | | provider:physical_network | physnet1 | | provider:segmentation_id | 171 |", "ovs-vsctl show", "ovs-vsctl show", "Bridge br-ex Port phy-br-ex Interface phy-br-ex type: patch options: {peer=int-br-ex} Port \"eth1\" Interface \"eth1\"", "ip a", "5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP qlen 1000", "openstack security group rule create --protocol igmp --ingress ping_ssh", "vi /home/stack/templates/my-ovs-environment.yaml", "parameter_defaults: NeutronEnableIgmpSnooping: true", "openstack overcloud deploy --templates -e [your-environment-files] -e /usr/share/openstack-tripleo-heat-templates/environments/services/my-ovs-environment.yaml", "sudo ovs-vsctl list bridge br-int", "mcast_snooping_enable: true other_config: {mac-table-size=\"50000\", mcast-snooping-disable-flood-unregistered=True}", "openstack security group rule create --protocol igmp --ingress mySG openstack security group rule create --protocol igmp --egress mySG", "openstack security group rule create --protocol udp mySG", "parameter_defaults: NeutronEnableIgmpSnooping: True", "openstack overcloud deploy --templates ... -e <other_overcloud_environment_files> -e ovn-extras.yaml ...", "ovn-nbctl list Logical_Switch", "_uuid : d6a2fbcd-aaa4-4b9e-8274-184238d66a15 other_config : {mcast_flood_unregistered=\"false\", mcast_snoop=\"true\"}", "ovn-sbctl list IGMP_group", "_uuid : 2d6cae4c-bd82-4b31-9c63-2d17cbeadc4e address : \"225.0.0.120\" chassis : 34e25681-f73f-43ac-a3a4-7da2a710ecd3 datapath : eaf0f5cc-a2c8-4c30-8def-2bc1ec9dcabc ports : [5eaf9dd5-eae5-4749-ac60-4c1451901c56, 8a69efc5-38c5-48fb-bbab-30f2bf9b8d45]", "enable_isolated_metadata = True" ]
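As a follow-up to the flow-verification steps in the VLAN troubleshooting section above, the following shell sketch narrows the ovs-ofctl output down to the VLAN-rewriting flows. The bridge names br-ex and br-int are taken from the examples in this section; treat the grep patterns as illustrative rather than exhaustive.

```bash
# Outgoing traffic: the internal VLAN tag should be rewritten to the external segmentation_id.
ovs-ofctl dump-flows br-ex | grep mod_vlan_vid

# Incoming traffic: the external VLAN ID should be rewritten to the internal tag.
ovs-ofctl dump-flows br-int | grep mod_vlan_vid

# Confirm that the int-br-ex <--> phy-br-ex patch peer exists.
ovs-vsctl show | grep -A 2 'Port int-br-ex'
```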
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/networking_guide/connect-instance_rhosp-network
Providing feedback on Red Hat build of Quarkus documentation
Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/security_architecture/proc_providing-feedback-on-red-hat-documentation_security-architecture
5.57. e2fsprogs
5.57. e2fsprogs 5.57.1. RHBA-2012:0944 - e2fsprogs bug fix update Updated e2fsprogs packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The e2fsprogs packages provide a number of utilities for creating, checking, modifying, and correcting any inconsistencies in second (ext2), third (ext3), and fourth (ext4) extended file systems. Bug Fixes BZ# 786021 Prior to this update, checksums for backup group descriptors appeared to be wrong when the "e2fsck -b" option read these group descriptors and cleared UNINIT flags to ensure that all inodes were scanned. As a consequence, warning messages were sent during the process. This update recomputes checksums after the flags are changed. Now, "e2fsck -b" completes without these checksum warnings. BZ# 795846 Prior to this update, e2fsck could discard valid inodes when using the "-E discard" option. As a consequence, the file system could become corrupted. This update modifies the underlying code so that disk regions containing valid inodes are no longer discarded. All users of e2fsprogs are advised to upgrade to these updated packages, which fix these bugs.
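For context, the following shell sketch shows how the "e2fsck -b" option mentioned in BZ#786021 is typically used. The device name is hypothetical, and the backup superblock location must be taken from your own file system, for example from the dry-run output of mke2fs shown here.

```bash
# Dry run: print where the backup superblocks are stored without creating a file system.
mke2fs -n /dev/vdb1

# Check the file system using one of the reported backup superblocks,
# which also causes the backup group descriptors to be read.
e2fsck -b 32768 /dev/vdb1
```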
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/e2fsprogs
Chapter 32. Using Ansible to automount NFS shares for IdM users
Chapter 32. Using Ansible to automount NFS shares for IdM users Automount is a way to manage, organize, and access directories across multiple systems. Automount automatically mounts a directory whenever access to it is requested. This works well within an Identity Management (IdM) domain as it allows you to share directories on clients within the domain easily. You can use Ansible to configure NFS shares to be mounted automatically for IdM users logged in to IdM clients in an IdM location. The example in this chapter uses the following scenario: nfs-server.idm.example.com is the fully-qualified domain name (FQDN) of a Network File System (NFS) server. nfs-server.idm.example.com is an IdM client located in the raleigh automount location. The NFS server exports the /exports/project directory as read-write. Any IdM user belonging to the developers group can access the contents of the exported directory as /devel/project/ on any IdM client that is located in the same raleigh automount location as the NFS server. idm-client.idm.example.com is an IdM client located in the raleigh automount location. Important If you want to use a Samba server instead of an NFS server to provide the shares for IdM clients, see the Red Hat Knowledgebase solution How do I configure kerberized CIFS mounts with Autofs in an IPA environment? . The chapter contains the following sections: Autofs and automount in IdM Setting up an NFS server with Kerberos in IdM Configuring automount locations, maps, and keys in IdM by using Ansible Using Ansible to add IdM users to a group that owns NFS shares Configuring automount on an IdM client Verifying that an IdM user can access NFS shares on an IdM client 32.1. Autofs and automount in IdM The autofs service automates the mounting of directories, as needed, by directing the automount daemon to mount directories when they are accessed. In addition, after a period of inactivity, autofs directs automount to unmount auto-mounted directories. Unlike static mounting, on-demand mounting saves system resources. Automount maps On a system that utilizes autofs , the automount configuration is stored in several different files. The primary automount configuration file is /etc/auto.master , which contains the master mapping of automount mount points, and their associated resources, on a system. This mapping is known as automount maps . The /etc/auto.master configuration file contains the master map . It can contain references to other maps. These maps can either be direct or indirect. Direct maps use absolute path names for their mount points, while indirect maps use relative path names. Automount configuration in IdM While automount typically retrieves its map data from the local /etc/auto.master and associated files, it can also retrieve map data from other sources. One common source is an LDAP server. In the context of Identity Management (IdM), this is a 389 Directory Server. If a system that uses autofs is a client in an IdM domain, the automount configuration is not stored in local configuration files. Instead, the autofs configuration, such as maps, locations, and keys, is stored as LDAP entries in the IdM directory. For example, for the idm.example.com IdM domain, the default master map is stored as follows: Additional resources Mounting file systems on demand 32.2. Setting up an NFS server with Kerberos in a Red Hat Enterprise Linux Identity Management domain If you use Red Hat Enterprise Linux Identity Management (IdM), you can join your NFS server to the IdM domain. 
This enables you to centrally manage users and groups and to use Kerberos for authentication, integrity protection, and traffic encryption. Prerequisites The NFS server is enrolled in a Red Hat Enterprise Linux Identity Management (IdM) domain. The NFS server is running and configured. Procedure Obtain a Kerberos ticket as an IdM administrator: Create an nfs/<FQDN> service principal: Retrieve the nfs service principal from IdM, and store it in the /etc/krb5.keytab file: Optional: Display the principals in the /etc/krb5.keytab file: By default, the IdM client adds the host principal to the /etc/krb5.keytab file when you join the host to the IdM domain. If the host principal is missing, use the ipa-getkeytab -s idm_server.idm.example.com -p host/nfs_server.idm.example.com -k /etc/krb5.keytab command to add it. Use the ipa-client-automount utility to configure mapping of IdM IDs: Update your /etc/exports file, and add the Kerberos security method to the client options. For example: If you want your clients to be able to select from multiple security methods, specify them separated by colons: Reload the exported file systems: 32.3. Configuring automount locations, maps, and keys in IdM by using Ansible As an Identity Management (IdM) system administrator, you can configure automount locations and maps in IdM so that IdM users in the specified locations can access shares exported by an NFS server by navigating to specific mount points on their hosts. Both the exported NFS server directory and the mount points are specified in the maps. In LDAP terms, a location is a container for such map entries. The example describes how to use Ansible to configure the raleigh location and a map that mounts the nfs-server.idm.example.com:/exports/project share on the /devel/project mount point on the IdM client as a read-write directory. Prerequisites On the control node: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure On your Ansible control node, navigate to your ~/ MyPlaybooks / directory: Copy the automount-location-present.yml Ansible playbook file located in the /usr/share/doc/ansible-freeipa/playbooks/automount/ directory: Open the automount-location-map-and-key-present.yml file for editing. Adapt the file by setting the following variables in the ipaautomountlocation task section: Set the ipaadmin_password variable to the password of the IdM admin . Set the name variable to raleigh . Ensure that the state variable is set to present . This is the modified Ansible playbook file for the current example: Continue editing the automount-location-map-and-key-present.yml file: In the tasks section, add a task to ensure the presence of an automount map: Add another task to add the mount point and NFS server information to the map: Add another task to ensure auto.devel is connected to auto.master : Save the file. Run the Ansible playbook and specify the playbook and inventory files: 32.4.
Using Ansible to add IdM users to a group that owns NFS shares As an Identity Management (IdM) system administrator, you can use Ansible to create a group of users that is able to access NFS shares, and add IdM users to this group. This example describes how to use an Ansible playbook to ensure that the idm_user account belongs to the developers group, so that idm_user can access the /exports/project NFS share. Prerequisites You have root access to the nfs-server.idm.example.com NFS server, which is an IdM client located in the raleigh automount location. On the control node: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. In ~/ MyPlaybooks / , you have created the automount-location-map-and-key-present.yml file that already contains tasks from Configuring automount locations, maps, and keys in IdM by using Ansible . Procedure On your Ansible control node, navigate to the ~/ MyPlaybooks / directory: Open the automount-location-map-and-key-present.yml file for editing. In the tasks section, add a task to ensure that the IdM developers group exists and idm_user is added to this group: Save the file. Run the Ansible playbook and specify the playbook and inventory files: On the NFS server, change the group ownership of the /exports/project directory to developers so that every IdM user in the group can access the directory: 32.5. Configuring automount on an IdM client As an Identity Management (IdM) system administrator, you can configure automount services on an IdM client so that NFS shares configured for a location to which the client has been added are accessible to an IdM user automatically when the user logs in to the client. The example describes how to configure an IdM client to use automount services that are available in the raleigh location. Prerequisites You have root access to the IdM client. You are logged in as IdM administrator. The automount location exists. The example location is raleigh . Procedure On the IdM client, enter the ipa-client-automount command and specify the location. Use the -U option to run the script unattended: Stop the autofs service, clear the SSSD cache, and start the autofs service to load the new configuration settings: 32.6. Verifying that an IdM user can access NFS shares on an IdM client As an Identity Management (IdM) system administrator, you can test if an IdM user that is a member of a specific group can access NFS shares when logged in to a specific IdM client. In the example, the following scenario is tested: An IdM user named idm_user belonging to the developers group can read and write the contents of the files in the /devel/project directory automounted on idm-client.idm.example.com , an IdM client located in the raleigh automount location. Prerequisites You have set up an NFS server with Kerberos on an IdM host . You have configured automount locations, maps, and mount points in IdM in which you configured how IdM users can access the NFS share. You have used Ansible to add IdM users to the developers group that owns the NFS shares . You have configured automount on the IdM client . 
Procedure Verify that the IdM user can access the read-write directory: Connect to the IdM client as the IdM user: Obtain the ticket-granting ticket (TGT) for the IdM user: Optional: View the group membership of the IdM user: Navigate to the /devel/project directory: List the directory contents: Add a line to the file in the directory to test the write permission: Optional: View the updated contents of the file: The output confirms that idm_user can write into the file.
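For convenience, the following is a consolidated sketch of the automount-location-map-and-key-present.yml playbook assembled from the task fragments shown in this chapter. The location, map, key, and group values match the example scenario; the vault file path and the ipaserver host group are assumptions carried over from the chapter's prerequisites, so adjust them for your environment.

```yaml
---
# Consolidated sketch of the tasks described in this chapter.
- name: Automount location, map, key, and group present
  hosts: ipaserver
  vars_files:
    - /home/user_name/MyPlaybooks/secret.yml   # assumption: vault storing ipaadmin_password
  tasks:
    - name: Ensure automount location raleigh is present
      ipaautomountlocation:
        ipaadmin_password: "{{ ipaadmin_password }}"
        name: raleigh
        state: present

    - name: Ensure map auto.devel exists in location raleigh
      ipaautomountmap:
        ipaadmin_password: "{{ ipaadmin_password }}"
        name: auto.devel
        location: raleigh
        state: present

    - name: Ensure automount key /devel/project points at the NFS export
      ipaautomountkey:
        ipaadmin_password: "{{ ipaadmin_password }}"
        location: raleigh
        mapname: auto.devel
        key: /devel/project
        info: nfs-server.idm.example.com:/exports/project
        state: present

    - name: Ensure auto.devel is connected in the master map
      ipaautomountkey:
        ipaadmin_password: "{{ ipaadmin_password }}"
        location: raleigh
        mapname: auto.map
        key: /devel
        info: auto.devel
        state: present

    - name: Ensure the developers group exists and contains idm_user
      ipagroup:
        ipaadmin_password: "{{ ipaadmin_password }}"
        name: developers
        user:
          - idm_user
        state: present
```

Run it with the same ansible-playbook invocation shown in the chapter, passing your inventory file and vault password file.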
[ "dn: automountmapname=auto.master,cn=default,cn=automount,dc=idm,dc=example,dc=com objectClass: automountMap objectClass: top automountMapName: auto.master", "kinit admin", "ipa service-add nfs/nfs_server.idm.example.com", "ipa-getkeytab -s idm_server.idm.example.com -p nfs/nfs_server.idm.example.com -k /etc/krb5.keytab", "klist -k /etc/krb5.keytab Keytab name: FILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------- 1 nfs/[email protected] 1 nfs/[email protected] 1 nfs/[email protected] 1 nfs/[email protected] 7 host/[email protected] 7 host/[email protected] 7 host/[email protected] 7 host/[email protected]", "ipa-client-automount Searching for IPA server IPA server: DNS discovery Location: default Continue to configure the system with these values? [no]: yes Configured /etc/idmapd.conf Restarting sssd, waiting for it to become available. Started autofs", "/nfs/projects/ 192.0.2.0/24(rw, sec=krb5i )", "/nfs/projects/ 192.0.2.0/24(rw, sec=krb5:krb5i:krb5p )", "exportfs -r", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/automount/automount-location-present.yml automount-location-map-and-key-present.yml", "--- - name: Automount location present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure automount location is present ipaautomountlocation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: raleigh state: present", "[...] vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: [...] - name: ensure map named auto.devel in location raleigh is created ipaautomountmap: ipaadmin_password: \"{{ ipaadmin_password }}\" name: auto.devel location: raleigh state: present", "[...] vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: [...] - name: ensure automount key /devel/project is present ipaautomountkey: ipaadmin_password: \"{{ ipaadmin_password }}\" location: raleigh mapname: auto.devel key: /devel/project info: nfs-server.idm.example.com:/exports/project state: present", "[...] vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: [...] - name: Ensure auto.devel is connected in auto.master: ipaautomountkey: ipaadmin_password: \"{{ ipaadmin_password }}\" location: raleigh mapname: auto.map key: /devel info: auto.devel state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory automount-location-map-and-key-present.yml", "cd ~/ MyPlaybooks /", "[...] vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: [...] - ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: developers user: - idm_user state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory automount-location-map-and-key-present.yml", "chgrp developers /exports/project", "ipa-client-automount --location raleigh -U", "systemctl stop autofs ; sss_cache -E ; systemctl start autofs", "ssh [email protected] Password:", "kinit idm_user", "ipa user-show idm_user User login: idm_user [...] Member of groups: developers, ipausers", "cd /devel/project", "ls rw_file", "echo \"idm_user can write into the file\" > rw_file", "cat rw_file this is a read-write file idm_user can write into the file" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_ansible_to_install_and_manage_identity_management/using-ansible-to-automount-nfs-shares-for-idm-users_using-ansible-to-install-and-manage-identity-management
Chapter 5. Migrating a non-containerized Red Hat Ceph Storage cluster to a containerized environment
Chapter 5. Migrating a non-containerized Red Hat Ceph Storage cluster to a containerized environment To manually migrate a non-containerized, bare-metal, Red Hat Ceph Storage cluster to a containerized environment, use the ceph-ansible switch-from-non-containerized-to-containerized-ceph-daemons.yml playbook. Note If the storage cluster has an RBD mirror daemon not deployed by ceph-ansible , you need to migrate the daemons prior to converting to a containerized cluster. For more details, see Migrating RBD mirroring daemons . Prerequisites A running non-containerized, bare-metal Red Hat Ceph Storage cluster. Access to the Ansible administration node. An ansible user account. Sudo access to the ansible user account. Procedure Edit the group_vars/all.yml file to include configuration for containers: Important For the ceph_docker_image_tag , use latest if your current storage cluster is on the latest version, or use the appropriate image tag. See the Red Hat Knowledgebase solution What are the Red Hat Ceph Storage releases and corresponding Ceph package versions? for more information. Navigate to the /usr/share/ceph-ansible directory: On the Ansible administration node, run the Ansible migration playbook: Syntax Example Verify that the cluster has switched to a containerized environment. On the monitor node, list all running containers: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Additional Resources See the Installing a Red Hat Ceph Storage cluster chapter in the Red Hat Ceph Storage Installation Guide for information on installation of a bare-metal storage cluster. See the Creating an Ansible user with sudo access section in the Red Hat Ceph Storage Installation Guide for providing sudo access to the ansible user. See the Configuring two-way mirroring using the command-line interface section in the Red Hat Ceph Storage Block Device Guide for more details.
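The migration playbook is run with -i INVENTORY_FILE, and the example assumes an inventory file named hosts already exists. The following is a minimal, hypothetical sketch of such an inventory; the host names are placeholders and the group names follow the usual ceph-ansible layout, so include only the groups that your cluster actually uses.

```ini
# hosts (sketch) - placeholder host names, typical ceph-ansible groups
[mons]
mon01
mon02
mon03

[mgrs]
mon01
mon02
mon03

[osds]
osd01
osd02
osd03

[clients]
client01
```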
[ "ceph_docker_image_tag: \"latest\" ceph_docker_image: rhceph/rhceph-4-rhel8 containerized_deployment: true ceph_docker_registry: registry.redhat.io", "[ansible@admin ~]USD cd /usr/share/ceph-ansible", "ansible-playbook ./infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml -i INVENTORY_FILE", "[ansible@admin ceph-ansible]USD ansible-playbook ./infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml -i hosts", "[root@mon ~]USD sudo docker ps", "[root@mon ~]USD sudo podman ps" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/operations_guide/migrating-a-non-containerized-red-hat-ceph-storage-cluster-to-a-containerized-environment_ops
Chapter 3. Usage
Chapter 3. Usage This chapter describes the necessary steps for rebuilding and using Red Hat Software Collections 3.2, and deploying applications that use Red Hat Software Collections. 3.1. Using Red Hat Software Collections 3.1.1. Running an Executable from a Software Collection To run an executable from a particular Software Collection, type the following command at a shell prompt: scl enable software_collection ... ' command ...' Or, alternatively, use the following command: scl enable software_collection ... -- command ... Replace software_collection with a space-separated list of Software Collections you want to use and command with the command you want to run. For example, to execute a Perl program stored in a file named hello.pl with the Perl interpreter from the perl526 Software Collection, type: You can execute any command using the scl utility, causing it to be run with the executables from a selected Software Collection in preference to their possible Red Hat Enterprise Linux system equivalents. For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.2 Components" . 3.1.2. Running a Shell Session with a Software Collection as Default To start a new shell session with executables from a selected Software Collection in preference to their Red Hat Enterprise Linux equivalents, type the following at a shell prompt: scl enable software_collection ... bash Replace software_collection with a space-separated list of Software Collections you want to use. For example, to start a new shell session with the python27 and rh-postgresql10 Software Collections as default, type: The list of Software Collections that are enabled in the current session is stored in the USDX_SCLS environment variable, for instance: For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.2 Components" . 3.1.3. Running a System Service from a Software Collection Running a System Service from a Software Collection in Red Hat Enterprise Linux 6 Software Collections that include system services install corresponding init scripts in the /etc/rc.d/init.d/ directory. To start such a service in the current session, type the following at a shell prompt as root : service software_collection - service_name start Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : chkconfig software_collection - service_name on For example, to start the postgresql service from the rh-postgresql96 Software Collection and enable it in runlevels 2, 3, 4, and 5, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 6, refer to the Red Hat Enterprise Linux 6 Deployment Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.2 Components" . Running a System Service from a Software Collection in Red Hat Enterprise Linux 7 In Red Hat Enterprise Linux 7, init scripts have been replaced by systemd service unit files, which end with the .service file extension and serve a similar purpose as init scripts. 
To start a service in the current session, execute the following command as root : systemctl start software_collection - service_name .service Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root : systemctl enable software_collection - service_name .service For example, to start the postgresql service from the rh-postgresql10 Software Collection and enable it at boot time, type as root : For more information on how to manage system services in Red Hat Enterprise Linux 7, refer to the Red Hat Enterprise Linux 7 System Administrator's Guide . For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.2 Components" . 3.2. Accessing a Manual Page from a Software Collection Every Software Collection contains a general manual page that describes the content of this component. Each manual page has the same name as the component and it is located in the /opt/rh directory. To read a manual page for a Software Collection, type the following command: scl enable software_collection 'man software_collection ' Replace software_collection with the particular Red Hat Software Collections component. For example, to display the manual page for rh-mariadb102 , type: 3.3. Deploying Applications That Use Red Hat Software Collections In general, you can use one of the following two approaches to deploy an application that depends on a component from Red Hat Software Collections in production: Install all required Software Collections and packages manually and then deploy your application, or Create a new Software Collection for your application and specify all required Software Collections and other packages as dependencies. For more information on how to manually install individual Red Hat Software Collections components, see Section 2.2, "Installing Red Hat Software Collections" . For further details on how to use Red Hat Software Collections, see Section 3.1, "Using Red Hat Software Collections" . For a detailed explanation of how to create a custom Software Collection or extend an existing one, read the Red Hat Software Collections Packaging Guide . 3.4. Red Hat Software Collections Container Images Container images based on Red Hat Software Collections include applications, daemons, and databases. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. For information about their usage, see Using Red Hat Software Collections 3 Container Images . For details regarding container images based on Red Hat Software Collections versions 2.4 and earlier, see Using Red Hat Software Collections 2 Container Images . 
The following container images are available with Red Hat Software Collections 3.2: rhscl/devtoolset-8-toolchain-rhel7 rhscl/devtoolset-8-perftools-rhel7 rhscl/httpd-24-rhel7 rhscl/mysql-80-rhel7 rhscl/nginx-114-rhel7 rhscl/php-72-rhel7 rhscl/varnish-6-rhel7 The following container images are based on Red Hat Software Collections 3.1: rhscl/devtoolset-7-toolchain-rhel7 rhscl/devtoolset-7-perftools-rhel7 rhscl/mongodb-36-rhel7 rhscl/perl-526-rhel7 rhscl/php-70-rhel7 rhscl/postgresql-10-rhel7 rhscl/ruby-25-rhel7 rhscl/varnish-5-rhel7 The following container images are based on Red Hat Software Collections 3.0: rhscl/mariadb-102-rhel7 rhscl/mongodb-34-rhel7 rhscl/nginx-112-rhel7 rhscl/nodejs-8-rhel7 rhscl/php-71-rhel7 rhscl/postgresql-96-rhel7 rhscl/python-36-rhel7 The following container images are based on Red Hat Software Collections 2.4: rhscl/devtoolset-6-toolchain-rhel7 (EOL) rhscl/devtoolset-6-perftools-rhel7 (EOL) rhscl/nginx-110-rhel7 rhscl/nodejs-6-rhel7 rhscl/python-27-rhel7 rhscl/ruby-24-rhel7 rhscl/ror-50-rhel7 rhscl/thermostat-16-agent-rhel7 (EOL) rhscl/thermostat-16-storage-rhel7 (EOL) The following container images are based on Red Hat Software Collections 2.3: rhscl/mysql-57-rhel7 rhscl/perl-524-rhel7 rhscl/redis-32-rhel7 rhscl/mongodb-32-rhel7 rhscl/php-56-rhel7 (EOL) rhscl/python-35-rhel7 rhscl/ruby-23-rhel7 The following container images are based on Red Hat Software Collections 2.2: rhscl/devtoolset-4-toolchain-rhel7 (EOL) rhscl/devtoolset-4-perftools-rhel7 (EOL) rhscl/mariadb-101-rhel7 rhscl/nginx-18-rhel7 (EOL) rhscl/nodejs-4-rhel7 (EOL) rhscl/postgresql-95-rhel7 rhscl/ror-42-rhel7 rhscl/thermostat-1-agent-rhel7 (EOL) rhscl/varnish-4-rhel7 (EOL) The following container images are based on Red Hat Software Collections 2.0: rhscl/mariadb-100-rhel7 (EOL) rhscl/mongodb-26-rhel7 (EOL) rhscl/mysql-56-rhel7 (EOL) rhscl/nginx-16-rhel7 (EOL) rhscl/passenger-40-rhel7 (EOL) rhscl/perl-520-rhel7 (EOL) rhscl/postgresql-94-rhel7 (EOL) rhscl/python-34-rhel7 (EOL) rhscl/ror-41-rhel7 (EOL) rhscl/ruby-22-rhel7 (EOL) rhscl/s2i-base-rhel7 Images marked as End of Life (EOL) are no longer supported.
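As an illustration of how the listed container images are typically consumed on Red Hat Enterprise Linux 7, the following sketch pulls and runs one of them. The registry path follows the common registry.access.redhat.com/rhscl/ naming, and the published port is an assumption; check the documentation for your chosen image to confirm which ports it actually exposes.

```bash
docker pull registry.access.redhat.com/rhscl/httpd-24-rhel7

# Assumption: the httpd 2.4 image serves HTTP on port 8080 inside the container.
docker run -d --name httpd24 -p 8080:8080 registry.access.redhat.com/rhscl/httpd-24-rhel7
```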
[ "~]USD scl enable rh-perl526 'perl hello.pl' Hello, World!", "~]USD scl enable python27 rh-postgresql10 bash", "~]USD echo USDX_SCLS python27 rh-postgresql10", "~]# service rh-postgresql96-postgresql start Starting rh-postgresql96-postgresql service: [ OK ] ~]# chkconfig rh-postgresql96-postgresql on", "~]# systemctl start rh-postgresql10-postgresql.service ~]# systemctl enable rh-postgresql10-postgresql.service", "~]USD scl enable rh-mariadb102 \"man rh-mariadb102\"" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.2_release_notes/chap-usage
function::execname
function::execname Name function::execname - Returns the execname of a target process (or group of processes) Synopsis Arguments None Description Returns the execname of a target process (or group of processes).
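As an illustrative, hypothetical use of execname (not part of the tapset reference itself), the following one-liner prints the executable name of any process that opens a file. It assumes the syscall.open probe point and its filename context variable are available on your kernel; on newer kernels you may need syscall.openat instead.

```bash
# Print which executable opened which file, using execname() for the process name.
stap -e 'probe syscall.open { printf("%s opened %s\n", execname(), filename) }'
```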
[ "execname:string()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-execname
Chapter 7. Installing a cluster on Azure into an existing VNet
Chapter 7. Installing a cluster on Azure into an existing VNet In OpenShift Container Platform version 4.16, you can install a cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 7.2. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.16, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 7.2.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. 
Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 7.2.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 7.1. Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x If you are using Azure Firewall to restrict the internet access, then you can configure Azure Firewall to allow the Azure APIs . A network security group rule is not needed. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Table 7.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 
10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If you configure an external NTP time server, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 7.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources About the OpenShift SDN network plugin Configuring your firewall 7.2.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 7.2.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . 
To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. 
Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 7.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines.
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. If the file was not previously detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Additional resources Installation configuration parameters for Azure 7.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.4. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration.
Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.6.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 7.1. Machine types based on 64-bit x86 architecture standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 7.6.3. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 7.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 7.6.4. 
Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 7.6.5. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. 
For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 7.6.6. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 1 10 14 20 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. 
Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 13 Specify the name of the resource group that contains the DNS zone for your base domain. 15 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 16 If you use an existing VNet, specify the name of the resource group that contains it. 17 If you use an existing VNet, specify its name. 18 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 19 If you use an existing VNet, specify the name of the subnet to host the compute machines. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.6.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 7.7. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. 
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 7.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials . 7.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 7.8.2. Configuring an Azure cluster to use short-term credentials To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 7.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created a global Microsoft Azure account for the ccoctl utility to use with the following permissions: Example 7.3. 
Required Azure permissions Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourceGroups/write Microsoft.Resources/subscriptions/resourceGroups/delete Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/delete Microsoft.Authorization/roleAssignments/write Microsoft.Authorization/roleDefinitions/read Microsoft.Authorization/roleDefinitions/write Microsoft.Authorization/roleDefinitions/delete Microsoft.Storage/storageAccounts/listkeys/action Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/blobServices/containers/delete Microsoft.Storage/storageAccounts/blobServices/containers/read Microsoft.ManagedIdentity/userAssignedIdentities/delete Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete Microsoft.Storage/register/action Microsoft.ManagedIdentity/register/action Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 7.8.2.2. Creating Azure resources with the Cloud Credential Operator utility You can use the ccoctl azure create-all command to automate the creation of Azure resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. 
This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Access to your Microsoft Azure account by using the Azure CLI. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command: USD az login Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl azure create-all \ --name=<azure_infra_name> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --region=<azure_region> \ 3 --subscription-id=<azure_subscription_id> \ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ 6 --tenant-id=<azure_tenant_id> 7 1 Specify the user-defined name for all created Azure resources used for tracking. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Specify the Azure region in which cloud resources will be created. 4 Specify the Azure subscription ID to use. 5 Specify the directory containing the files for the component CredentialsRequest objects. 6 Specify the name of the resource group containing the cluster's base domain Azure DNS zone. 7 Specify the Azure tenant ID to use. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the azure create-all --help command. 
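Taken together, the extraction and resource-creation steps above can be chained in a single script. The following is a minimal sketch under stated assumptions, not part of the official procedure: the directory names, resource-name prefix, region, and DNS zone resource group are placeholders rather than values from this document, the extracted ccoctl binary is assumed to have been renamed to ccoctl and placed in the current directory, and the Azure subscription and tenant IDs are assumed to be exported as environment variables.

#!/usr/bin/env bash
# Hedged sketch only: chains the documented steps with placeholder values.
# AZURE_SUBSCRIPTION_ID and AZURE_TENANT_ID are assumed to be exported already.
set -euo pipefail

INSTALL_DIR=./install_dir      # directory that contains install-config.yaml
CR_DIR=./credrequests          # where the CredentialsRequest objects are extracted
CCO_OUT=./ccoctl_output        # where ccoctl writes its manifests and tls/ directory
NAME=mycluster-wi              # user-defined prefix for the created Azure resources
REGION=centralus               # Azure region for the cloud resources
DNS_RG=mydns-rg                # resource group that contains the base domain DNS zone

# Extract the CredentialsRequest objects for this cluster configuration.
RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
oc adm release extract \
  --from="$RELEASE_IMAGE" \
  --credentials-requests \
  --included \
  --install-config="$INSTALL_DIR/install-config.yaml" \
  --to="$CR_DIR"

# Let ccoctl detect the Azure credentials from the CLI session, then create the resources.
az login
./ccoctl azure create-all \
  --name="$NAME" \
  --output-dir="$CCO_OUT" \
  --region="$REGION" \
  --subscription-id="$AZURE_SUBSCRIPTION_ID" \
  --credentials-requests-dir="$CR_DIR" \
  --dnszone-resource-group-name="$DNS_RG" \
  --tenant-id="$AZURE_TENANT_ID"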
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml You can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts. 7.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you used the ccoctl utility to create a new Azure resource group instead of using an existing resource group, modify the resourceGroupName parameter in the install-config.yaml as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> 1 # ... 1 This value must match the user-defined name for Azure resources that was specified with the --name argument of the ccoctl azure create-all command. If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 7.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. 
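If you generated manifests with ccoctl as described above, a quick pre-flight check can confirm that they are in place before you start the deployment. This is an optional sketch with hypothetical paths, not a required step of the procedure.

# Optional sanity check (hypothetical paths): the credentials manifests and the
# tls/ directory copied earlier should be present in the installation directory.
ls <installation_directory>/manifests/ | grep -i credentials
ls <installation_directory>/tls/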
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 7.10. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 7.11. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
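Before moving on to those steps, you can optionally confirm that the new cluster is reachable from your workstation. This is a brief sketch rather than part of the official procedure; it assumes the kubeconfig path printed in the example output above and that the oc client from section 7.7 is on your PATH.

# Point oc at the new cluster using the kubeconfig written by the installer.
export KUBECONFIG=<installation_directory>/auth/kubeconfig

oc whoami               # expected to report system:admin
oc get nodes            # nodes should eventually report a Ready status
oc get clusterversion   # AVAILABLE should become True once the installation settles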
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4", "controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 
22", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "az login", "ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 
--dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7", "ls <path_to_ccoctl_output_dir>/manifests", "azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_azure/installing-azure-vnet
Chapter 6. Directory Server in Red Hat Enterprise Linux
Chapter 6. Directory Server in Red Hat Enterprise Linux Directory Server no longer logs false positive error messages Previously, in a Directory Server multi-master replication environment, the Failed to update RUV for unknown error message was logged multiple times when only the replica update vector (RUV) was updated without any change. This update fixes the problem and now Directory Server no longer logs the error message. (BZ#1266920) In FIPS mode, the slapd_pk11_getInternalKeySlot() function is now used to retrieve the key slot for a token The Red Hat Directory Server previously tried to retrieve the key slot from a fixed token name, when FIPS mode was enabled on the security database. However, the token name can change. If the key slot was not found, Directory Server was unable to decode the replication manager's password, and replication sessions failed. To fix the problem, the slapd_pk11_getInternalKeySlot() function now uses FIPS mode to retrieve the current key slot. As a result, replication sessions using SSL or StartTLS no longer fail in the described situation. (BZ# 1352109 ) Directory Server now supports configuring weak DH parameters The network security services (NSS) libraries, linked with the Red Hat Directory Server, require a minimum of 2048-bit Diffie-Hellman (DH) parameters. However, Java 1.6 and 1.7 support only 1024-bit DH parameters. As a consequence, clients using these Java versions were unable to connect to Directory Server using encrypted connections. This update adds the allowWeakDHParam parameter to the cn=encryption,cn=config entry. As a result, if this parameter is enabled, affected clients can now connect using weak DH parameters (see the example at the end of this chapter). (BZ# 1327065 ) The cleanAllRUV task no longer corrupts changelog back ends At the end of the cleanAllRUV task, Directory Server removes entries from the replication changelog that contain the cleaned replica ID. Previously, the task incorrectly ran against all changelog back ends instead of only the one set in the task. As a consequence, if multiple back ends contained the same replica ID, the cleanAllRUV task corrupted them. This update fixes the problem and now the cleanAllRUV task works correctly. (BZ# 1369572 ) Reindexing the retro changelog no longer fails Previously, the retrocl-plugin set a lock in read mode on the changelog back end without releasing it. This could result in a deadlock situation. For example, an index task executed by the db2index.pl script on the retro changelog back end became unresponsive when a lock in write mode was set. This update applies a patch and as a result, reindexing the retro changelog no longer fails. (BZ# 1370145 ) Directory Server no longer fails when disabling the CLEAR password storage scheme plug-in Previously, Directory Server required that the CLEAR password storage plug-in was enabled when setting userPassword attributes. As a consequence, Directory Server terminated unexpectedly when attempting to set userPassword attributes, if CLEAR was disabled. This update applies a patch and as a result, Directory Server no longer fails in the described situation. (BZ# 1371678 ) Directory Server no longer terminates unexpectedly when using server side sorting Previously, when using a matching rule and server side sorting, Directory Server incorrectly freed memory multiple times and terminated unexpectedly. This update fixes the bug, and as a result Directory Server no longer fails when using server side sorting. 
(BZ# 1371706 ) Directory Server now validates macros in ACIs Previously, the Red Hat Directory Server did not validate macros in an access control instruction (ACI). As a result, users were able to set incorrect macros in an ACI. This update improves the underlying validation code, and Directory Server rejects invalid macros and logs an error. (BZ# 1382386 ) Replication monitor now shows the correct date On the replication monitor, the year of the date was not displayed in the header when the value of the day field was less than 10. The code now uses the correct API, and the year is displayed correctly. (BZ#1410645) The memberOf fix-up task now verifies arguments Previously, if an invalid filter or basedn parameter was provided in the memberOf fix-up task, and the task failed, no information was logged. A patch has been applied and now, if a problem occurs, an error is logged and the task status is updated. As a result, the administrator is now able to identify if a task failed. (BZ# 1406835 ) Directory Server no longer terminates unexpectedly when deleting a non-existent attribute Previously, deleting a non-existent attribute from the back end configuration caused Directory Server to terminate unexpectedly. This update applies a patch to pass a NULL value to the ldbm_config_set() function if no attribute was deleted. As a result, Directory Server now rejects the operation in the described scenario. (BZ# 1403754 ) Directory Server no longer displays multiple error messages when importing fails Previously, if importing data failed, multiple Unable to flush error messages were displayed, because the connection to the database was not closed. This update applies a patch and as a result, Directory Server no longer displays multiple errors in the mentioned situation. (BZ# 1402012 ) Virtual list view-related problems have been fixed Previously, when removing a virtual list view (VLV) index, the dblayer_erase_index_file_nolock() function was not called. Thus, the physical index file and the back pointer set to the dblayer handle were not removed. Consequently, Directory Server terminated unexpectedly. This fix updates the code and the dblayer_erase_index_file_nolock() function is now called when removing a VLV index. In addition, the vlv_init() function could previously be called multiple times without unregistering VLV plug-in callbacks. As a consequence, Directory Server sometimes terminated unexpectedly. With this update, callbacks are now unregistered. As a result, Directory Server no longer terminates unexpectedly in the described situations. (BZ# 1399600 ) Directory Server no longer logs sensitive information Previously, when the Trace function calls option was enabled in the nsslapd-errorlog-level parameter, Directory Server logged all attributes into the error log file, including attributes containing sensitive information. A patch has been applied to filter out values of sensitive attributes. As a result, Directory Server no longer logs sensitive information. (BZ# 1387772 ) Group ACIs are now correctly evaluated Previously, if the number of members in a group in an access control instruction (ACI) exceeded the size limit of the result of the query, Directory Server incorrectly denied access. To fix the problem, the server size limit is no longer applied to the ACI group evaluation, and queries now operate correctly. (BZ# 1387022 )
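As an illustration of the allowWeakDHParam setting described earlier in this chapter, the following sketch shows one way the parameter might be enabled with ldapmodify. The bind DN, the LDAP URL, and the value on are assumptions made for this example rather than values taken from these notes; check the Directory Server configuration reference for the exact attribute syntax before applying it in a real deployment.

# Hypothetical example: enable weak DH parameters on the encryption configuration entry.
# The bind DN, URL, and the value "on" are assumptions; verify them for your deployment.
ldapmodify -x -D "cn=Directory Manager" -W -H ldap://localhost:389 <<EOF
dn: cn=encryption,cn=config
changetype: modify
replace: allowWeakDHParam
allowWeakDHParam: on
EOF

# Encryption settings are generally re-read only at startup, so a restart is
# typically needed (RHEL 6 style service command shown as an assumption).
service dirsrv restart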
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_technical_notes/bug_fixes_directory_server_in_red_hat_enterprise_linux
Object Gateway Guide
Object Gateway Guide Red Hat Ceph Storage 8 Deploying, configuring, and administering a Ceph Object Gateway Red Hat Ceph Storage Documentation Team
[ "ceph orch apply mon --placement=\"host1 host2 host3\"", "service_type: mon placement: hosts: - host01 - host02 - host03", "ceph orch apply -i mon.yml", "ceph orch apply rgw example --placement=\"6 host1 host2 host3\"", "service_type: rgw service_id: example placement: count: 6 hosts: - host01 - host02 - host03", "ceph orch apply -i rgw.yml", "mon_pg_warn_max_per_osd = n", "ceph osd pool create .us-west.rgw.buckets.non-ec 64 64 replicated rgw-service", "## SAS-SSD ROOT DECLARATION ## root sas-ssd { id -1 # do not change unnecessarily # weight 0.000 alg straw hash 0 # rjenkins1 item data2-sas-ssd weight 4.000 item data1-sas-ssd weight 4.000 item data0-sas-ssd weight 4.000 }", "## INDEX ROOT DECLARATION ## root index { id -2 # do not change unnecessarily # weight 0.000 alg straw hash 0 # rjenkins1 item data2-index weight 1.000 item data1-index weight 1.000 item data0-index weight 1.000 }", "host data2-sas-ssd { id -11 # do not change unnecessarily # weight 0.000 alg straw hash 0 # rjenkins1 item osd.0 weight 1.000 item osd.1 weight 1.000 item osd.2 weight 1.000 item osd.3 weight 1.000 }", "host data2-index { id -21 # do not change unnecessarily # weight 0.000 alg straw hash 0 # rjenkins1 item osd.4 weight 1.000 }", "osd_crush_update_on_start = false", "[osd.0] osd crush location = \"host=data2-sas-ssd\" [osd.1] osd crush location = \"host=data2-sas-ssd\" [osd.2] osd crush location = \"host=data2-sas-ssd\" [osd.3] osd crush location = \"host=data2-sas-ssd\" [osd.4] osd crush location = \"host=data2-index\"", "## SERVICE RULE DECLARATION ## rule rgw-service { type replicated min_size 1 max_size 10 step take sas-ssd step chooseleaf firstn 0 type rack step emit }", "## THROUGHPUT RULE DECLARATION ## rule rgw-throughput { type replicated min_size 1 max_size 10 step take sas-ssd step chooseleaf firstn 0 type host step emit }", "## INDEX RULE DECLARATION ## rule rgw-index { type replicated min_size 1 max_size 10 step take index step chooseleaf firstn 0 type rack step emit }", "rule ecpool-86 { step take default class hdd step choose indep 4 type host step choose indep 4 type osd step emit }", "rule ecpool-86 { type msr_indep step take default class hdd step choosemsr 4 type host step choosemsr 4 type osd step emit }", "rule ecpool-86 { step take default class hdd step choose indep 4 type host step choose indep 4 type osd step emit }", "rule ecpool-86 { type msr_indep step take default class hdd step choosemsr 4 type host step choosemsr 4 type osd step emit }", "[osd] osd_max_backfills = 1 osd_recovery_max_active = 1 osd_recovery_op_priority = 1", "ceph config set global osd_map_message_max 10 ceph config set osd osd_map_cache_size 20 ceph config set osd osd_map_share_max_epochs 10 ceph config set osd osd_pg_epoch_persisted_max_stale 10", "[osd] osd_scrub_begin_hour = 23 #23:01H, or 10:01PM. 
osd_scrub_end_hour = 6 #06:01H or 6:01AM.", "[osd] osd_scrub_load_threshold = 0.25", "objecter_inflight_ops = 24576", "rgw_thread_pool_size = 512", "ceph soft nofile unlimited", "USER_NAME soft nproc unlimited", "cephadm shell", "radosgw-admin realm create --rgw-realm= REALM_NAME --default", "radosgw-admin realm create --rgw-realm=test_realm --default", "radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --master --default", "radosgw-admin zonegroup create --rgw-zonegroup=default --master --default", "radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default", "radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=test_zone --master --default", "radosgw-admin period update --rgw-realm= REALM_NAME --commit", "radosgw-admin period update --rgw-realm=test_realm --commit", "ceph orch apply rgw NAME [--realm= REALM_NAME ] [--zone= ZONE_NAME ] [--zonegroup= ZONE_GROUP_NAME ] --placement=\" NUMBER_OF_DAEMONS [ HOST_NAME_1 HOST_NAME_2 ]\"", "ceph orch apply rgw test --realm=test_realm --zone=test_zone --zonegroup=default --placement=\"2 host01 host02\"", "ceph orch apply rgw SERVICE_NAME", "ceph orch apply rgw foo", "ceph orch host label add HOST_NAME_1 LABEL_NAME ceph orch host label add HOSTNAME_2 LABEL_NAME ceph orch apply rgw SERVICE_NAME --placement=\"label: LABEL_NAME count-per-host: NUMBER_OF_DAEMONS \" --port=8000", "ceph orch host label add host01 rgw # the 'rgw' label can be anything ceph orch host label add host02 rgw ceph orch apply rgw foo --placement=\"label:rgw count-per-host:2\" --port=8000", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=rgw", "cephadm shell", "cat nfs-conf.yml service_type: nfs service_id: nfs-rgw-service placement: hosts: ['host1'] spec: port: 2049", "ceph orch apply -i nfs-conf.yml", "ceph orch ls --service_name nfs.nfs-rgw-service --service_type nfs", "touch radosgw.yml", "service_type: rgw service_id: REALM_NAME . 
ZONE_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 count_per_host: NUMBER_OF_DAEMONS spec: rgw_realm: REALM_NAME rgw_zone: ZONE_NAME rgw_zonegroup: ZONE_GROUP_NAME rgw_frontend_port: FRONT_END_PORT networks: - NETWORK_CIDR # Ceph Object Gateway service binds to a specific network", "service_type: rgw service_id: default placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: default rgw_zone: default rgw_zonegroup: default rgw_frontend_port: 1234 networks: - 192.169.142.0/24", "radosgw-admin realm create --rgw-realm=test_realm --default radosgw-admin zonegroup create --rgw-zonegroup=test_zonegroup --default radosgw-admin zone create --rgw-zonegroup=test_zonegroup --rgw-zone=test_zone --default radosgw-admin period update --rgw-realm=test_realm --commit", "service_type: rgw service_id: test_realm.test_zone placement: hosts: - host01 - host02 - host03 count_per_host: 2 spec: rgw_realm: test_realm rgw_zone: test_zone rgw_zonegroup: test_zonegroup rgw_frontend_port: 1234 networks: - 192.169.142.0/24", "cephadm shell --mount radosgw.yml:/var/lib/ceph/radosgw/radosgw.yml", "ceph orch apply -i FILE_NAME .yml", "ceph orch apply -i /var/lib/ceph/radosgw/radosgw.yml", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=rgw", "radosgw-admin realm create --rgw-realm= REALM_NAME --default", "radosgw-admin realm create --rgw-realm=test_realm --default", "radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --master --default", "radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --master --default", "radosgw-admin zone create --rgw-zonegroup= PRIMARY_ZONE_GROUP_NAME --rgw-zone= PRIMARY_ZONE_NAME --endpoints=http:// RGW_PRIMARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY", "radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-1 --endpoints=http://rgw1:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ", "radosgw-admin zonegroup delete --rgw-zonegroup=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it", "radosgw-admin user create --uid= USER_NAME --display-name=\" USER_NAME \" --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --system", "radosgw-admin user create --uid=zone.user --display-name=\"Zone user\" --system", "radosgw-admin zone modify --rgw-zone= PRIMARY_ZONE_NAME --access-key= ACCESS_KEY --secret= SECRET_KEY", "radosgw-admin zone modify --rgw-zone=us-east-1 --access-key=NE48APYCAODEPLKBCZVQ--secret=u24GHQWRE3yxxNBnFBzjM4jn14mFIckQ4EKL6LoW", "radosgw-admin period update --commit", "radosgw-admin period update --commit", "systemctl list-units | grep ceph", "systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME", "systemctl start [email protected]_realm.us-east-1.host01.ahdtsw.service systemctl enable [email protected]_realm.us-east-1.host01.ahdtsw.service", "radosgw-admin realm pull --rgw-realm= PRIMARY_REALM --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY --default", "radosgw-admin realm 
pull --rgw-realm=test_realm --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ --default", "radosgw-admin period pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY", "radosgw-admin period pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ", "radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= SECONDARY_ZONE_NAME --endpoints=http:// RGW_SECONDARY_HOSTNAME : RGW_PRIMARY_PORT_NUMBER_1 --access-key= SYSTEM_ACCESS_KEY --secret= SYSTEM_SECRET_KEY --endpoints=http:// FQDN :80 [--read-only]", "radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-2 --endpoints=http://rgw2:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ", "radosgw-admin zone rm --rgw-zone=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it", "ceph config set SERVICE_NAME rgw_zone SECONDARY_ZONE_NAME", "ceph config set rgw rgw_zone us-east-2", "radosgw-admin period update --commit", "radosgw-admin period update --commit", "systemctl list-units | grep ceph", "systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME", "systemctl start [email protected]_realm.us-east-2.host04.ahdtsw.service systemctl enable [email protected]_realm.us-east-2.host04.ahdtsw.service", "ceph orch apply rgw NAME --realm= REALM_NAME --zone= PRIMARY_ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"", "ceph orch apply rgw east --realm=test_realm --zone=us-east-1 --placement=\"2 host01 host02\"", "radosgw-admin sync status", "cephadm shell", "ceph orch ls", "ceph orch rm SERVICE_NAME", "ceph orch rm rgw.test_realm.test_zone_bb", "ceph orch ps", "ceph orch ps", "cephadm shell", "ceph mgr module enable rgw", "ceph rgw realm bootstrap [--realm name REALM_NAME ] [--zonegroup-name ZONEGROUP_NAME ] [--zone-name ZONE_NAME ] [--port PORT_NUMBER ] [--placement HOSTNAME ] [--start-radosgw]", "ceph rgw realm bootstrap --realm-name myrealm --zonegroup-name myzonegroup --zone-name myzone --port 5500 --placement=\"host01 host02\" --start-radosgw Realm(s) created correctly. 
Please, use 'ceph rgw realm tokens' to get the token.", "rgw_realm: REALM_NAME rgw_zonegroup: ZONEGROUP_NAME rgw_zone: ZONE_NAME placement: hosts: - _HOSTNAME_1_ - _HOSTNAME_2_", "cat rgw.yaml rgw_realm: myrealm rgw_zonegroup: myzonegroup rgw_zone: myzone placement: hosts: - host01 - host02", "service_type: rgw placement: hosts: - _host1_ - _host2_ spec: rgw_realm: my_realm rgw_zonegroup: my_zonegroup rgw_zone: my_zone zonegroup_hostnames: - _hostname1_ - _hostname2_", "service_type: rgw placement: hosts: - _host1_ - _host2_ spec: rgw_realm: my_realm rgw_zonegroup: my_zonegroup rgw_zone: my_zone zonegroup_hostnames: - foo - bar", "cephadm shell --mount rgw.yaml:/var/lib/ceph/rgw/rgw.yaml", "ceph rgw realm bootstrap -i /var/lib/ceph/rgw/rgw.yaml", "ceph rgw realm tokens | jq [ { \"realm\": \"myrealm\", \"token\": \"ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbSIsCiAgICAicmVhbG1faWQiOiAiZDA3YzAwZWYtOTA0MS00ZjZlLTg4MDQtN2Q0MDI0MDU1NmFlIiwKICAgICJlbmRwb2ludCI6ICJodHRwOi8vdm0tMDA6NDMyMSIsCiAgICAiYWNjZXNzX2tleSI6ICI5NTY1VFZSMVFWTExFRzdVNFIxRCIsCiAgICAic2VjcmV0IjogImQ3b0FJQXZrNEdYeXpyd3Q2QVZ6bEZNQmNnRG53RVdMMHFDenE3cjUiCn1=\" } ]", "ceph orch list --daemon-type=rgw NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID rgw.myrealm.myzonegroup.ceph-saya-6-osd-host01.eburst ceph-saya-6-osd-host01 *:80 running (111m) 9m ago 111m 82.3M - 17.2.6-22.el9cp 2d5b080de0b0 2f3eaca7e88e", "radosgw-admin zonegroup get --rgw-zonegroup _zone_group_name_", "radosgw-admin zonegroup get --rgw-zonegroup my_zonegroup { \"id\": \"02a175e2-7f23-4882-8651-6fbb15d25046\", \"name\": \"my_zonegroup_ck\", \"api_name\": \"my_zonegroup_ck\", \"is_master\": true, \"endpoints\": [ \"http://vm-00:80\" ], \"hostnames\": [ \"foo\" \"bar\" ], \"hostnames_s3website\": [], \"master_zone\": \"f42fea84-a89e-4995-996e-61b7223fb0b0\", \"zones\": [ { \"id\": \"f42fea84-a89e-4995-996e-61b7223fb0b0\", \"name\": \"my_zone_ck\", \"endpoints\": [ \"http://vm-00:80\" ], \"log_meta\": false, \"log_data\": false, \"bucket_index_max_shards\": 11, \"read_only\": false, \"tier_type\": \"\", \"sync_from_all\": true, \"sync_from\": [], \"redirect_zone\": \"\", \"supported_features\": [ \"compress-encrypted\", \"resharding\" ] } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\" ] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"439e9c37-4ddc-43a3-99e9-ea1f3825bb51\", \"sync_policy\": { \"groups\": [] }, \"enabled_features\": [ \"resharding\" ] }", "cephadm shell", "ceph mgr module enable rgw", "ceph rgw realm bootstrap [--realm name REALM_NAME ] [--zonegroup-name ZONEGROUP_NAME ] [--zone-name ZONE_NAME ] [--port PORT_NUMBER ] [--placement HOSTNAME ] [--start-radosgw]", "ceph rgw realm bootstrap --realm-name myrealm --zonegroup-name myzonegroup --zone-name myzone --port 5500 --placement=\"host01 host02\" --start-radosgw Realm(s) created correctly. 
Please, use 'ceph rgw realm tokens' to get the token.", "rgw_realm: REALM_NAME rgw_zonegroup: ZONEGROUP_NAME rgw_zone: ZONE_NAME placement: hosts: - HOSTNAME_1 - HOSTNAME_2 spec: rgw_frontend_port: PORT_NUMBER zone_endpoints: http:// RGW_HOSTNAME_1 : RGW_PORT_NUMBER_1 , http:// RGW_HOSTNAME_2 : RGW_PORT_NUMBER_2", "cat rgw.yaml rgw_realm: myrealm rgw_zonegroup: myzonegroup rgw_zone: myzone placement: hosts: - host01 - host02 spec: rgw_frontend_port: 5500 zone_endpoints: http://<rgw_host1>:<rgw_port1>, http://<rgw_host2>:<rgw_port2>", "cephadm shell --mount rgw.yaml:/var/lib/ceph/rgw/rgw.yaml", "ceph rgw realm bootstrap -i /var/lib/ceph/rgw/rgw.yaml", "ceph rgw realm tokens | jq [ { \"realm\": \"myrealm\", \"token\": \"ewogICAgInJlYWxtX25hbWUiOiAibXlyZWFsbSIsCiAgICAicmVhbG1faWQiOiAiZDA3YzAwZWYtOTA0MS00ZjZlLTg4MDQtN2Q0MDI0MDU1NmFlIiwKICAgICJlbmRwb2ludCI6ICJodHRwOi8vdm0tMDA6NDMyMSIsCiAgICAiYWNjZXNzX2tleSI6ICI5NTY1VFZSMVFWTExFRzdVNFIxRCIsCiAgICAic2VjcmV0IjogImQ3b0FJQXZrNEdYeXpyd3Q2QVZ6bEZNQmNnRG53RVdMMHFDenE3cjUiCn1=\" } ]", "cat zone-spec.yaml rgw_zone: my-secondary-zone rgw_realm_token: <token> placement: hosts: - ceph-node-1 - ceph-node-2 spec: rgw_frontend_port: 5500", "cephadm shell --mount zone-spec.yaml:/var/lib/ceph/radosgw/zone-spec.yaml", "ceph mgr module enable rgw", "ceph rgw zone create -i /var/lib/ceph/radosgw/zone-spec.yaml", "radosgw-admin realm list { \"default_info\": \"d07c00ef-9041-4f6e-8804-7d40240556ae\", \"realms\": [ \"myrealm\" ] }", "bucket-name.domain-name.com", "address=/. HOSTNAME_OR_FQDN / HOST_IP_ADDRESS", "address=/.gateway-host01/192.168.122.75", "USDTTL 604800 @ IN SOA gateway-host01. root.gateway-host01. ( 2 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; @ IN NS gateway-host01. @ IN A 192.168.122.113 * IN CNAME @", "ping mybucket. 
HOSTNAME", "ping mybucket.gateway-host01", "radosgw-admin zonegroup get --rgw-zonegroup= ZONEGROUP_NAME > zonegroup.json", "radosgw-admin zonegroup get --rgw-zonegroup=us > zonegroup.json", "cp zonegroup.json zonegroup.backup.json", "cat zonegroup.json { \"id\": \"d523b624-2fa5-4412-92d5-a739245f0451\", \"name\": \"asia\", \"api_name\": \"asia\", \"is_master\": \"true\", \"endpoints\": [], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"d2a3b90f-f4f3-4d38-ac1f-6463a2b93c32\", \"zones\": [ { \"id\": \"d2a3b90f-f4f3-4d38-ac1f-6463a2b93c32\", \"name\": \"india\", \"endpoints\": [], \"log_meta\": \"false\", \"log_data\": \"false\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\", \"tier_type\": \"\", \"sync_from_all\": \"true\", \"sync_from\": [], \"redirect_zone\": \"\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\" ] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"d7e2ad25-1630-4aee-9627-84f24e13017f\", \"sync_policy\": { \"groups\": [] } }", "\"hostnames\": [\"host01\", \"host02\",\"host03\"],", "radosgw-admin zonegroup set --rgw-zonegroup= ZONEGROUP_NAME --infile=zonegroup.json", "radosgw-admin zonegroup set --rgw-zonegroup=us --infile=zonegroup.json", "radosgw-admin period update --commit", "[client.rgw.node1] rgw frontends = beast ssl_endpoint=192.168.0.100:443 ssl_certificate=<path to SSL certificate>", "touch rgw.yml", "service_type: rgw service_id: SERVICE_ID service_name: SERVICE_NAME placement: hosts: - HOST_NAME spec: ssl: true rgw_frontend_ssl_certificate: CERT_HASH", "service_type: rgw service_id: foo service_name: rgw.foo placement: hosts: - host01 spec: ssl: true rgw_frontend_ssl_certificate: | -----BEGIN RSA PRIVATE KEY----- MIIEpAIBAAKCAQEA+Cf4l9OagD6x67HhdCy4Asqw89Zz9ZuGbH50/7ltIMQpJJU0 gu9ObNtIoC0zabJ7n1jujueYgIpOqGnhRSvsGJiEkgN81NLQ9rqAVaGpadjrNLcM bpgqJCZj0vzzmtFBCtenpb5l/EccMFcAydGtGeLP33SaWiZ4Rne56GBInk6SATI/ JSKweGD1y5GiAWipBR4C74HiAW9q6hCOuSdp/2WQxWT3T1j2sjlqxkHdtInUtwOm j5Ism276IndeQ9hR3reFR8PJnKIPx73oTBQ7p9CMR1J4ucq9Ny0J12wQYT00fmJp -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- MIIEBTCCAu2gAwIBAgIUGfYFsj8HyA9Zv2l600hxzT8+gG4wDQYJKoZIhvcNAQEL BQAwgYkxCzAJBgNVBAYTAklOMQwwCgYDVQQIDANLQVIxDDAKBgNVBAcMA0JMUjEM MAoGA1UECgwDUkhUMQswCQYDVQQLDAJCVTEkMCIGA1UEAwwbY2VwaC1zc2wtcmhj czUtOGRjeHY2LW5vZGU1MR0wGwYJKoZIhvcNAQkBFg5hYmNAcmVkaGF0LmNvbTAe -----END CERTIFICATE-----", "ceph orch apply -i rgw.yml", "mkfs.ext4 nvme-drive-path", "mkfs.ext4 /dev/nvme0n1 mount /dev/nvme0n1 /mnt/nvme0n1/", "mkdir <nvme-mount-path>/cache-directory-name", "mkdir /mnt/nvme0n1/rgw_datacache", "chmod a+rwx nvme-mount-path ; chmod a+rwx rgw_d3n_l1_datacache_persistent_path", "chmod a+rwx /mnt/nvme0n1 ; chmod a+rwx /mnt/nvme0n1/rgw_datacache/", "\"extra_container_args: \"-v\" \"rgw_d3n_l1_datacache_persistent_path:rgw_d3n_l1_datacache_persistent_path\" \"", "cat rgw-spec.yml service_type: rgw service_id: rgw.test placement: hosts: host1 host2 extra_container_args: \"-v\" \"/mnt/nvme0n1/rgw_datacache/:/mnt/nvme0n1/rgw_datacache/\"", "\"extra_container_args: \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw1/:/mnt/nvme0n1/rgw_datacache/rgw1/\" \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw2/:/mnt/nvme0n1/rgw_datacache/rgw2/\" \"", "cat rgw-spec.yml service_type: rgw service_id: rgw.test placement: hosts: host1 host2 count_per_host: 2 extra_container_args: \"-v\" \"/mnt/nvme0n1/rgw_datacache/rgw1/:/mnt/nvme0n1/rgw_datacache/rgw1/\" \"-v\" 
\"/mnt/nvme0n1/rgw_datacache/rgw2/:/mnt/nvme0n1/rgw_datacache/rgw2/\"", "ceph orch apply -i rgw-spec.yml", "ceph config set <client.rgw> <CONF-OPTION> <VALUE>", "rgw_d3n_l1_datacache_persistent_path=/mnt/nvme/rgw_datacache/", "rgw_d3n_l1_datacache_size=10737418240", "fallocate -l 1G ./1G.dat s3cmd mb s3://bkt s3cmd put ./1G.dat s3://bkt", "s3cmd get s3://bkt/1G.dat /dev/shm/1G_get.dat download: 's3://bkt/1G.dat' -> './1G_get.dat' [1 of 1] 1073741824 of 1073741824 100% in 13s 73.94 MB/s done", "ls -lh /mnt/nvme/rgw_datacache rw-rr. 1 ceph ceph 1.0M Jun 2 06:18 cc7f967c-0021-43b2-9fdf-23858e868663.615391.1_shadow.ZCiCtMWeu_19wb100JIEZ-o4tv2IyA_1", "s3cmd get s3://bkt/1G.dat /dev/shm/1G_get.dat download: 's3://bkt/1G.dat' -> './1G_get.dat' [1 of 1] 1073741824 of 1073741824 100% in 6s 155.07 MB/s done", "ceph config set client.rgw debug_rgw VALUE", "ceph config set client.rgw debug_rgw 20", "ceph --admin-daemon /var/run/ceph/ceph-client.rgw. NAME .asok config set debug_rgw VALUE", "ceph --admin-daemon /var/run/ceph/ceph-client.rgw.rgw.asok config set debug_rgw 20", "ceph config set global log_to_file true ceph config set global mon_cluster_log_to_file true", "ceph config set client.rgw OPTION VALUE", "ceph config set client.rgw rgw_enable_static_website true ceph config set client.rgw rgw_enable_apis s3,s3website ceph config set client.rgw rgw_dns_name objects-zonegroup.example.com ceph config set client.rgw rgw_dns_s3website_name objects-website-zonegroup.example.com ceph config set client.rgw rgw_resolve_cname true", "objects-zonegroup.domain.com. IN A 192.0.2.10 objects-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:10 *.objects-zonegroup.domain.com. IN CNAME objects-zonegroup.domain.com. objects-website-zonegroup.domain.com. IN A 192.0.2.20 objects-website-zonegroup.domain.com. IN AAAA 2001:DB8::192:0:2:20", "*.objects-website-zonegroup.domain.com. IN CNAME objects-website-zonegroup.domain.com.", "http://bucket1.objects-website-zonegroup.domain.com", "www.example.com. IN CNAME bucket2.objects-website-zonegroup.domain.com.", "http://www.example.com", "www.example.com. IN CNAME www.example.com.objects-website-zonegroup.domain.com.", "http://www.example.com", "www.example.com. IN A 192.0.2.20 www.example.com. 
IN AAAA 2001:DB8::192:0:2:20", "http://www.example.com", "[root@host01 ~] touch ingress.yaml", "service_type: ingress 1 service_id: SERVICE_ID 2 placement: 3 hosts: - HOST1 - HOST2 - HOST3 spec: backend_service: SERVICE_ID virtual_ip: IP_ADDRESS / CIDR 4 frontend_port: INTEGER 5 monitor_port: INTEGER 6 virtual_interface_networks: 7 - IP_ADDRESS / CIDR ssl_cert: | 8", "service_type: ingress service_id: rgw.foo placement: hosts: - host01.example.com - host02.example.com - host03.example.com spec: backend_service: rgw.foo virtual_ip: 192.168.1.2/24 frontend_port: 8080 monitor_port: 1967 virtual_interface_networks: - 10.10.0.0/16 ssl_cert: | -----BEGIN CERTIFICATE----- MIIEpAIBAAKCAQEA+Cf4l9OagD6x67HhdCy4Asqw89Zz9ZuGbH50/7ltIMQpJJU0 gu9ObNtIoC0zabJ7n1jujueYgIpOqGnhRSvsGJiEkgN81NLQ9rqAVaGpadjrNLcM bpgqJCZj0vzzmtFBCtenpb5l/EccMFcAydGtGeLP33SaWiZ4Rne56GBInk6SATI/ JSKweGD1y5GiAWipBR4C74HiAW9q6hCOuSdp/2WQxWT3T1j2sjlqxkHdtInUtwOm j5Ism276IndeQ9hR3reFR8PJnKIPx73oTBQ7p9CMR1J4ucq9Ny0J12wQYT00fmJp -----END CERTIFICATE----- -----BEGIN PRIVATE KEY----- MIIEBTCCAu2gAwIBAgIUGfYFsj8HyA9Zv2l600hxzT8+gG4wDQYJKoZIhvcNAQEL BQAwgYkxCzAJBgNVBAYTAklOMQwwCgYDVQQIDANLQVIxDDAKBgNVBAcMA0JMUjEM MAoGA1UECgwDUkhUMQswCQYDVQQLDAJCVTEkMCIGA1UEAwwbY2VwaC1zc2wtcmhj czUtOGRjeHY2LW5vZGU1MR0wGwYJKoZIhvcNAQkBFg5hYmNAcmVkaGF0LmNvbTAe -----END PRIVATE KEY-----", "service_type: ingress service_id: rgw.ssl # adjust to match your existing RGW service placement: hosts: - hostname1 - hostname2 spec: backend_service: rgw.rgw.ssl.ceph13 # adjust to match your existing RGW service virtual_ip: IP_ADDRESS/CIDR # ex: 192.168.20.1/24 frontend_port: INTEGER # ex: 443 monitor_port: INTEGER # ex: 1969 use_tcp_mode_over_rgw: True", "cephadm shell --mount ingress.yaml:/var/lib/ceph/radosgw/ingress.yaml", "ceph config set mgr mgr/cephadm/container_image_haproxy HAPROXY_IMAGE_ID ceph config set mgr mgr/cephadm/container_image_keepalived KEEPALIVED_IMAGE_ID", "ceph config set mgr mgr/cephadm/container_image_haproxy registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest ceph config set mgr mgr/cephadm/container_image_keepalived registry.redhat.io/rhceph/keepalived-rhel9:latest", "ceph orch apply -i /var/lib/ceph/radosgw/ingress.yaml", "ip addr show", "wget HOST_NAME", "wget host01.example.com", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <ListAllMyBucketsResult xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <Owner> <ID>anonymous</ID> <DisplayName></DisplayName> </Owner> <Buckets> </Buckets> </ListAllMyBucketsResult>", "cephadm shell", "ceph nfs export create rgw --cluster-id NFS_CLUSTER_NAME --pseudo-path PATH_FROM_ROOT --user-id USER_ID", "ceph nfs export create rgw --cluster-id cluster1 --pseudo-path root/testnfs1/ --user-id nfsuser", "mount -t nfs IP_ADDRESS:PATH_FROM_ROOT -osync MOUNT_POINT", "mount -t nfs 10.0.209.0:/root/testnfs1 -osync /mnt/mount1", "cat ./haproxy.cfg global log 127.0.0.1 local2 chroot /var/lib/haproxy pidfile /var/run/haproxy.pid maxconn 7000 user haproxy group haproxy daemon stats socket /var/lib/haproxy/stats defaults mode http log global option httplog option dontlognull option http-server-close option forwardfor except 127.0.0.0/8 option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 30s timeout server 30s timeout http-keep-alive 10s timeout check 10s timeout client-fin 1s timeout server-fin 1s maxconn 6000 listen stats bind 0.0.0.0:1936 mode http log global maxconn 256 clitimeout 10m srvtimeout 10m contimeout 10m timeout queue 10m JTH start stats enable stats 
hide-version stats refresh 30s stats show-node ## stats auth admin:password stats uri /haproxy?stats stats admin if TRUE frontend main bind *:5000 acl url_static path_beg -i /static /images /javascript /stylesheets acl url_static path_end -i .jpg .gif .png .css .js use_backend static if url_static default_backend app maxconn 6000 backend static balance roundrobin fullconn 6000 server app8 host01:8080 check maxconn 2000 server app9 host02:8080 check maxconn 2000 server app10 host03:8080 check maxconn 2000 backend app balance roundrobin fullconn 6000 server app8 host01:8080 check maxconn 2000 server app9 host02:8080 check maxconn 2000 server app10 host03:8080 check maxconn 2000", "ceph config set osd osd_pool_default_pg_num 50 ceph config set osd osd_pool_default_pgp_num 50", "radosgw-admin realm create --rgw-realm REALM_NAME --default", "radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name NEW_ZONE_GROUP_NAME radosgw-admin zone rename --rgw-zone default --zone-new-name NEW_ZONE_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME", "radosgw-admin zonegroup modify --api-name NEW_ZONE_GROUP_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME", "radosgw-admin zonegroup modify --rgw-realm REALM_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME --endpoints http://ENDPOINT --master --default", "radosgw-admin zone modify --rgw-realm REALM_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME --rgw-zone NEW_ZONE_NAME --endpoints http://ENDPOINT --master --default", "radosgw-admin user create --uid USER_ID --display-name DISPLAY_NAME --access-key ACCESS_KEY --secret SECRET_KEY --system", "radosgw-admin period update --commit", "ceph orch ls | grep rgw", "ceph config set client.rgw.SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw.SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw.SERVICE_NAME rgw_zone PRIMARY_ZONE_NAME", "ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm test_realm ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup us ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone us-east-1", "systemctl restart ceph-radosgw@rgw.`hostname -s`", "ceph orch restart _RGW_SERVICE_NAME_", "ceph orch restart rgw.rgwsvcid.mons-1.jwgwwp", "cephadm shell", "radosgw-admin realm pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY", "radosgw-admin realm pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ", "radosgw-admin period pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY", "radosgw-admin period pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ", "radosgw-admin zone create --rgw-zonegroup=_ZONE_GROUP_NAME_ --rgw-zone=_SECONDARY_ZONE_NAME_ --endpoints=http://_RGW_SECONDARY_HOSTNAME_:_RGW_PRIMARY_PORT_NUMBER_1_ --access-key=_SYSTEM_ACCESS_KEY_ --secret=_SYSTEM_SECRET_KEY_ [--read-only]", "radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-2 --endpoints=http://rgw2:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ", "radosgw-admin zone rm --rgw-zone=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd 
pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it", "ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone SECONDARY_ZONE_NAME", "ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm test_realm ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup us ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone us-east-2", "radosgw-admin period update --commit", "radosgw-admin period update --commit", "systemctl list-units | grep ceph", "systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME", "systemctl start [email protected]_realm.us-east-2.host04.ahdtsw.service systemctl enable [email protected]_realm.us-east-2.host04.ahdtsw.service", "radosgw-admin zone create --rgw-zonegroup={ ZONE_GROUP_NAME } --rgw-zone={ ZONE_NAME } --endpoints={http:// FQDN : PORT },{http:// FQDN : PORT } --tier-type=archive", "radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east --endpoints={http://example.com:8080} --tier-type=archive", "radosgw-admin zone modify --rgw-zone archive --sync_from primary --sync_from_all false --sync-from-rm secondary radosgw-admin period update --commit", "ceph config set client.rgw rgw_max_objs_per_shard 50000", "<?xml version=\"1.0\" ?> <LifecycleConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <Rule> <ID>delete-1-days-az</ID> <Filter> <Prefix></Prefix> <ArchiveZone /> 1 </Filter> <Status>Enabled</Status> <Expiration> <Days>1</Days> </Expiration> </Rule> </LifecycleConfiguration>", "radosgw-admin lc get --bucket BUCKET_NAME", "radosgw-admin lc get --bucket test-bkt { \"prefix_map\": { \"\": { \"status\": true, \"dm_expiration\": true, \"expiration\": 0, \"noncur_expiration\": 2, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} } }, \"rule_map\": [ { \"id\": \"Rule 1\", \"rule\": { \"id\": \"Rule 1\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"\", \"date\": \"\" }, \"noncur_expiration\": { \"days\": \"2\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"\", \"obj_tags\": { \"tagset\": {} }, \"archivezone\": \"\" 1 }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": true } } ] }", "radosgw-admin bucket link --uid NEW_USER_ID --bucket BUCKET_NAME --yes-i-really-mean-it", "radosgw-admin bucket link --uid arcuser1 --bucket arc1-deleted-da473fbbaded232dc5d1e434675c1068 --yes-i-really-mean-it", "radosgw-admin zone modify --rgw-zone= ZONE_NAME --master --default", "radosgw-admin zone modify --rgw-zone= ZONE_NAME --master --default --read-only=false", "radosgw-admin period update --commit", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . 
ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin realm pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret= SECRET_KEY", "radosgw-admin zone modify --rgw-zone= ZONE_NAME --master --default", "radosgw-admin period update --commit", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin zone modify --rgw-zone= ZONE_NAME --read-only radosgw-admin zone modify --rgw-zone= ZONE_NAME --read-only", "radosgw-admin period update --commit", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin realm create --rgw-realm= REALM_NAME --default", "radosgw-admin realm create --rgw-realm=ldc1 --default", "radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_NODE_NAME :80 --rgw-realm= REALM_NAME --master --default", "radosgw-admin zonegroup create --rgw-zonegroup=ldc1zg --endpoints=http://rgw1:80 --rgw-realm=ldc1 --master --default", "radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]", "radosgw-admin zone create --rgw-zonegroup=ldc1zg --rgw-zone=ldc1z --master --default --endpoints=http://rgw.example.com", "radosgw-admin period update --commit", "ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"", "ceph orch apply rgw rgw --realm=ldc1 --zone=ldc1z --placement=\"1 host01\"", "ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME", "ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm ldc1 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup ldc1zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone ldc1z", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin realm create --rgw-realm= REALM_NAME --default", "radosgw-admin realm create --rgw-realm=ldc2 --default", "radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_NODE_NAME :80 --rgw-realm= REALM_NAME --master --default", "radosgw-admin zonegroup create --rgw-zonegroup=ldc2zg --endpoints=http://rgw2:80 --rgw-realm=ldc2 --master --default", "radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]", "radosgw-admin zone create --rgw-zonegroup=ldc2zg --rgw-zone=ldc2z --master --default --endpoints=http://rgw.example.com", "radosgw-admin period update --commit", "ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"", "ceph orch apply rgw rgw --realm=ldc2 --zone=ldc2z --placement=\"1 host01\"", "ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. 
SERVICE_NAME rgw_zone ZONE_NAME", "ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm ldc2 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup ldc2zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone ldc2z", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin realm create --rgw-realm= REPLICATED_REALM_1 --default", "radosgw-admin realm create --rgw-realm=rdc1 --default", "radosgw-admin zonegroup create --rgw-zonegroup= RGW_ZONE_GROUP --endpoints=http://_RGW_NODE_NAME :80 --rgw-realm=_RGW_REALM_NAME --master --default", "radosgw-admin zonegroup create --rgw-zonegroup=rdc1zg --endpoints=http://rgw1:80 --rgw-realm=rdc1 --master --default", "radosgw-admin zone create --rgw-zonegroup= RGW_ZONE_GROUP --rgw-zone=_MASTER_RGW_NODE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]", "radosgw-admin zone create --rgw-zonegroup=rdc1zg --rgw-zone=rdc1z --master --default --endpoints=http://rgw.example.com", "radosgw-admin user create --uid=\" SYNCHRONIZATION_USER \" --display-name=\"Synchronization User\" --system radosgw-admin zone modify --rgw-zone= RGW_ZONE --access-key= ACCESS_KEY --secret= SECRET_KEY", "radosgw-admin user create --uid=\"synchronization-user\" --display-name=\"Synchronization User\" --system radosgw-admin zone modify --rgw-zone=rdc1zg --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8", "radosgw-admin period update --commit", "ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"", "ceph orch apply rgw rgw --realm=rdc1 --zone=rdc1z --placement=\"1 host01\"", "ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME", "ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm rdc1 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup rdc1zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone rdc1z", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . 
ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin realm pull --url=https://tower-osd1.cephtips.com --access-key= ACCESS_KEY --secret-key= SECRET_KEY", "radosgw-admin realm pull --url=https://tower-osd1.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8", "radosgw-admin period pull --url=https://tower-osd1.cephtips.com --access-key= ACCESS_KEY --secret-key= SECRET_KEY", "radosgw-admin period pull --url=https://tower-osd1.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8", "radosgw-admin zone create --rgw-zone= RGW_ZONE --rgw-zonegroup= RGW_ZONE_GROUP --endpoints=https://tower-osd4.cephtips.com --access-key=_ACCESS_KEY --secret-key= SECRET_KEY", "radosgw-admin zone create --rgw-zone=rdc2z --rgw-zonegroup=rdc1zg --endpoints=https://tower-osd4.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8", "radosgw-admin period update --commit", "ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"", "ceph orch apply rgw rgw --realm=rdc1 --zone=rdc2z --placement=\"1 host04\"", "ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME", "ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm rdc1 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup rdc1zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone rdc2z", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . 
ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin sync status", "radosgw-admin sync status realm 59762f08-470c-46de-b2b1-d92c50986e67 (ldc2) zonegroup 7cf8daf8-d279-4d5c-b73e-c7fd2af65197 (ldc2zg) zone 034ae8d3-ae0c-4e35-8760-134782cb4196 (ldc2z) metadata sync no sync (zone is master)", "radosgw-admin sync status --rgw-realm RGW_REALM_NAME", "radosgw-admin sync status --rgw-realm rdc1 realm 73c7b801-3736-4a89-aaf8-e23c96e6e29d (rdc1) zonegroup d67cc9c9-690a-4076-89b8-e8127d868398 (rdc1zg) zone 67584789-375b-4d61-8f12-d1cf71998b38 (rdc2z) metadata sync syncing full sync: 0/64 shards incremental sync: 64/64 shards metadata is caught up with master data sync source: 705ff9b0-68d5-4475-9017-452107cec9a0 (rdc1z) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source realm 73c7b801-3736-4a89-aaf8-e23c96e6e29d (rdc1) zonegroup d67cc9c9-690a-4076-89b8-e8127d868398 (rdc1zg) zone 67584789-375b-4d61-8f12-d1cf71998b38 (rdc2z) metadata sync syncing full sync: 0/64 shards incremental sync: 64/64 shards metadata is caught up with master data sync source: 705ff9b0-68d5-4475-9017-452107cec9a0 (rdc1z) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source", "radosgw-admin user create --uid=\" LOCAL_USER\" --display-name=\"Local user\" --rgw-realm=_REALM_NAME --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME", "radosgw-admin user create --uid=\"local-user\" --display-name=\"Local user\" --rgw-realm=ldc1 --rgw-zonegroup=ldc1zg --rgw-zone=ldc1z", "radosgw-admin sync info --bucket=buck { \"sources\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, }, { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-west-2\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], }", "radosgw-admin sync policy get --bucket= BUCKET_NAME", "radosgw-admin sync policy get --bucket=mybucket", "radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled | allowed | forbidden", "radosgw-admin sync group create --group-id=mygroup1 --status=enabled", "radosgw-admin bucket sync run", "radosgw-admin bucket sync run", "radosgw-admin sync group modify --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled | allowed | forbidden", "radosgw-admin sync group modify --group-id=mygroup1 --status=forbidden", "radosgw-admin bucket sync run", "radosgw-admin bucket sync run", "radosgw-admin sync group get --bucket= BUCKET_NAME --group-id= GROUP_ID", "radosgw-admin sync group get --group-id=mygroup", "radosgw-admin sync group remove --bucket= BUCKET_NAME --group-id= GROUP_ID", "radosgw-admin sync group remove --group-id=mygroup", "radosgw-admin sync group flow create --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE --dest-zone= DESTINATION_ZONE", "radosgw-admin sync group flow create --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=symmetrical --zones= ZONE_NAME1 , ZONE_NAME2", "radosgw-admin sync group flow 
remove --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE --dest-zone= DESTINATION_ZONE", "radosgw-admin sync group flow remove --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=symmetrical --zones= ZONE_NAME1 , ZONE_NAME2", "radosgw-admin sync group flow remove --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=symmetrical --zones= ZONE_NAME1 , ZONE_NAME2", "radosgw-admin sync group pipe create --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' ZONE_NAME ',' ZONE_NAME2 '... --source-bucket= SOURCE_BUCKET --source-bucket-id= SOURCE_BUCKET_ID --dest-zones=' ZONE_NAME ',' ZONE_NAME2 '... --dest-bucket= DESTINATION_BUCKET --dest-bucket-id= DESTINATION_BUCKET_ID --prefix= SOURCE_PREFIX --prefix-rm --tags-add= KEY1=VALUE1 , KEY2=VALUE2 ,.. --tags-rm= KEY1=VALUE1 , KEY2=VALUE2 , ... --dest-owner= OWNER_ID --storage-class= STORAGE_CLASS --mode= USER --uid= USER_ID", "radosgw-admin sync group pipe modify --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' ZONE_NAME ',' ZONE_NAME2 '... --source-bucket= SOURCE_BUCKET1 --source-bucket-id= SOURCE_BUCKET_ID --dest-zones=' ZONE_NAME ',' ZONE_NAME2 '... --dest-bucket= DESTINATION_BUCKET1 --dest-bucket-id=_DESTINATION_BUCKET-ID", "radosgw-admin sync group pipe modify --group-id=zonegroup --pipe-id=pipe --dest-zones='primary','secondary','tertiary' --source-zones='primary','secondary','tertiary' --source-bucket=pri-bkt-1 --dest-bucket=pri-bkt-1", "radosgw-admin sync group pipe remove --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' ZONE_NAME ',' ZONE_NAME2 '... --source-bucket= SOURCE_BUCKET , --source-bucket-id= SOURCE_BUCKET_ID --dest-zones=' ZONE_NAME ',' ZONE_NAME2 '... 
--dest-bucket= DESTINATION_BUCKET --dest-bucket-id= DESTINATION_BUCKET-ID", "radosgw-admin sync group pipe remove --group-id=zonegroup --pipe-id=pipe --dest-zones='primary','secondary','tertiary' --source-zones='primary','secondary','tertiary' --source-bucket=pri-bkt-1 --dest-bucket=pri-bkt-1", "radosgw-admin sync group pipe remove --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID", "radosgw-admin sync group pipe remove -bucket-name=mybuck --group-id=zonegroup --pipe-id=pipe", "radosgw-admin sync info --bucket= BUCKET_NAME --effective-zone-name= ZONE_NAME", "radosgw-admin sync info", "radosgw-admin sync group create --group-id=group1 --status=allowed", "radosgw-admin sync group flow create --group-id=group1 --flow-id=flow-mirror --flow-type=symmetrical --zones=us-east,us-west", "radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'", "radosgw-admin sync group modify --group-id=group1 --status=enabled", "radosgw-admin period update --commit", "radosgw-admin sync info -bucket buck { \"sources\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], }", "radosgw-admin sync group create --group-id= GROUP_ID --status=allowed", "radosgw-admin sync group create --group-id=group1 --status=allowed", "radosgw-admin sync group flow create --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE_NAME --dest-zone= DESTINATION_ZONE_NAME", "radosgw-admin sync group flow create --group-id=group1 --flow-id=us-west-backup --flow-type=directional --source-zone=us-west --dest-zone=us-west-2", "radosgw-admin sync group pipe create --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' SOURCE_ZONE_NAME ' --dest-zones=' DESTINATION_ZONE_NAME '", "radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 --source-zones='us-west' --dest-zones='us-west-2'", "radosgw-admin period update --commit", "radosgw-admin sync info", "radosgw-admin sync group create --group-id= GROUP_ID --status=allowed --bucket= BUCKET_NAME", "radosgw-admin sync group create --group-id=group1 --status=allowed --bucket=buck", "radosgw-admin sync group flow create --bucket-name= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE_NAME --dest-zone= DESTINATION_ZONE_NAME", "radosgw-admin sync group flow create --bucket-name=buck --group-id=group1 --flow-id=us-west-backup --flow-type=directional --source-zone=us-west --dest-zone=us-west-2", "radosgw-admin sync group pipe create --group-id= GROUP_ID --bucket-name= BUCKET_NAME --pipe-id= PIPE_ID --source-zones=' SOURCE_ZONE_NAME ' --dest-zones=' DESTINATION_ZONE_NAME '", "radosgw-admin sync group pipe create --group-id=group1 --bucket-name=buck --pipe-id=pipe1 --source-zones='us-west' --dest-zones='us-west-2'", "radosgw-admin sync info --bucket-name= BUCKET_NAME", "radosgw-admin sync group modify --group-id=group1 --status=allowed", "radosgw-admin period update --commit", "radosgw-admin sync group create --bucket=buck --group-id=buck-default --status=enabled", "radosgw-admin sync group pipe create --bucket=buck --group-id=buck-default --pipe-id=pipe1 --source-zones='*' 
--dest-zones='*'", "radosgw-admin bucket sync info --bucket buck realm 33157555-f387-44fc-b4b4-3f9c0b32cd66 (india) zonegroup 594f1f63-de6f-4e1e-90b6-105114d7ad55 (shared) zone ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5 (primary) bucket :buck[ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1] source zone e0e75beb-4e28-45ff-8d48-9710de06dcd0 bucket :buck[ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1]", "radosgw-admin sync info --bucket buck { \"id\": \"pipe1\", \"source\": { \"zone\": \"secondary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"dest\": { \"zone\": \"primary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", \"user\": \"\" } }, { \"id\": \"pipe1\", \"source\": { \"zone\": \"primary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"dest\": { \"zone\": \"secondary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", \"user\": \"\" } }", "radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled", "radosgw-admin sync group create --bucket=buck4 --group-id=buck4-default --status=enabled", "radosgw-admin sync group pipe create --bucket-name= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones= SOURCE_ZONE_NAME --source-bucket= SOURCE_BUCKET_NAME --dest-zones= DESTINATION_ZONE_NAME", "radosgw-admin sync group pipe create --bucket=buck4 --group-id=buck4-default --pipe-id=pipe1 --source-zones='*' --source-bucket=buck5 --dest-zones='*'", "radosgw-admin sync group pipe modify --bucket=buck4 --group-id=buck4-default --pipe-id=pipe1 --source-zones=us-west --source-bucket=buck5 --dest-zones='*'", "radosgw-admin sync info --bucket-name= BUCKET_NAME", "radosgw-admin sync info --bucket=buck4 { \"sources\": [], \"dests\": [], \"hints\": { \"sources\": [], \"dests\": [ \"buck4:115b12b3-....14433.2\" ] }, \"resolved-hints-1\": { \"sources\": [], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck5\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck4:115b12b3-....14433.2\" }, }, { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck5\" }, \"dest\": { \"zone\": \"us-west-2\", \"bucket\": \"buck4:115b12b3-....14433.2\" }, } ] }, \"resolved-hints\": { \"sources\": [], \"dests\": [] }", "radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled", "radosgw-admin sync group create --bucket=buck6 --group-id=buck6-default --status=enabled", "radosgw-admin sync group pipe create --bucket-name= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones= SOURCE_ZONE_NAME --dest-zones= DESTINATION_ZONE_NAME --dest-bucket= DESTINATION_BUCKET_NAME", "radosgw-admin sync group pipe create --bucket=buck6 --group-id=buck6-default --pipe-id=pipe1 --source-zones='*' --dest-zones='*' --dest-bucket=buck5", "radosgw-admin sync group pipe modify --bucket=buck6 --group-id=buck6-default --pipe-id=pipe1 --source-zones='*' --dest-zones='us-west' --dest-bucket=buck5", "radosgw-admin sync info --bucket-name= BUCKET_NAME", "radosgw-admin sync info --bucket buck5 { \"sources\": [], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck6:c7887c5b-f6ff-4d5f-9736-aa5cdb4a15e8.20493.4\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": 
\"buck5\" }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", \"user\": \"s3cmd\" } }, ], \"hints\": { \"sources\": [], \"dests\": [ \"buck5\" ] }, \"resolved-hints-1\": { \"sources\": [], \"dests\": [] }, \"resolved-hints\": { \"sources\": [], \"dests\": [] } }", "radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled", "radosgw-admin sync group create --bucket=buck1 --group-id=buck8-default --status=enabled", "radosgw-admin sync group pipe create --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --tags-add= KEY1 = VALUE1 , KEY2 = VALUE2 --source-zones=' ZONE_NAME1 ',' ZONE_NAME2 ' --dest-zones=' ZONE_NAME1 ',' ZONE_NAME2 '", "radosgw-admin sync group pipe create --bucket=buck1 --group-id=buck1-default --pipe-id=pipe-tags --tags-add=color=blue,color=red --source-zones='*' --dest-zones='*'", "radosgw-admin sync group pipe create --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --prefix= PREFIX --source-zones=' ZONE_NAME1 ',' ZONE_NAME2 ' --dest-zones=' ZONE_NAME1 ',' ZONE_NAME2 '", "radosgw-admin sync group pipe create --bucket=buck1 --group-id=buck1-default --pipe-id=pipe-prefix --prefix=foo/ --source-zones='*' --dest-zones='*' \\", "radosgw-admin sync info --bucket= BUCKET_NAME", "radosgw-admin sync info --bucket=buck1", "radosgw-admin sync group modify --group-id buck-default --status forbidden --bucket buck { \"groups\": [ { \"id\": \"buck-default\", \"data_flow\": {}, \"pipes\": [ { \"id\": \"pipe1\", \"source\": { \"bucket\": \"*\", \"zones\": [ \"*\" ] }, \"dest\": { \"bucket\": \"*\", \"zones\": [ \"*\" ] }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", } } ], \"status\": \"forbidden\" } ] }", "radosgw-admin sync info --bucket buck { \"sources\": [], \"dests\": [], \"hints\": { \"sources\": [], \"dests\": [] }, \"resolved-hints-1\": { \"sources\": [], \"dests\": [] }, \"resolved-hints\": { \"sources\": [], \"dests\": [] } }", "radosgw-admin realm create --rgw-realm= REALM_NAME", "radosgw-admin realm create --rgw-realm=test_realm", "radosgw-admin realm default --rgw-realm= REALM_NAME", "radosgw-admin realm default --rgw-realm=test_realm1", "radosgw-admin realm default --rgw-realm=test_realm", "radosgw-admin realm delete --rgw-realm= REALM_NAME", "radosgw-admin realm delete --rgw-realm=test_realm", "radosgw-admin realm get --rgw-realm= REALM_NAME", "radosgw-admin realm get --rgw-realm=test_realm >filename.json", "{ \"id\": \"0a68d52e-a19c-4e8e-b012-a8f831cb3ebc\", \"name\": \"test_realm\", \"current_period\": \"b0c5bbef-4337-4edd-8184-5aeab2ec413b\", \"epoch\": 1 }", "radosgw-admin realm set --rgw-realm= REALM_NAME --infile= IN_FILENAME", "radosgw-admin realm set --rgw-realm=test_realm --infile=filename.json", "radosgw-admin realm list", "radosgw-admin realm list-periods", "radosgw-admin realm pull --url= URL_TO_MASTER_ZONE_GATEWAY --access-key= ACCESS_KEY --secret= SECRET_KEY", "radosgw-admin realm rename --rgw-realm= REALM_NAME --realm-new-name= NEW_REALM_NAME", "radosgw-admin realm rename --rgw-realm=test_realm --realm-new-name=test_realm2", "radosgw-admin period update --commit", "radosgw-admin period update --commit", "radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME [--rgw-realm= REALM_NAME ] [--master]", "radosgw-admin zonegroup create --rgw-zonegroup=zonegroup1 --rgw-realm=test_realm --default", "zonegroup modify --rgw-zonegroup= ZONE_GROUP_NAME", "radosgw-admin zonegroup 
modify --rgw-zonegroup=zonegroup1", "radosgw-admin zonegroup default --rgw-zonegroup= ZONE_GROUP_NAME", "radosgw-admin zonegroup default --rgw-zonegroup=zonegroup2", "radosgw-admin period update --commit", "radosgw-admin period update --commit", "radosgw-admin zonegroup default --rgw-zonegroup=us", "radosgw-admin period update --commit", "radosgw-admin zonegroup add --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME", "radosgw-admin period update --commit", "radosgw-admin zonegroup remove --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME", "radosgw-admin period update --commit", "radosgw-admin zonegroup rename --rgw-zonegroup= ZONE_GROUP_NAME --zonegroup-new-name= NEW_ZONE_GROUP_NAME", "radosgw-admin period update --commit", "radosgw-admin zonegroup delete --rgw-zonegroup= ZONE_GROUP_NAME", "radosgw-admin period update --commit", "radosgw-admin zonegroup list", "{ \"default_info\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"zonegroups\": [ \"us\" ] }", "radosgw-admin zonegroup get [--rgw-zonegroup= ZONE_GROUP_NAME ]", "{ \"id\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"name\": \"us\", \"api_name\": \"us\", \"is_master\": \"true\", \"endpoints\": [ \"http:\\/\\/rgw1:80\" ], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"zones\": [ { \"id\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"name\": \"us-east\", \"endpoints\": [ \"http:\\/\\/rgw1\" ], \"log_meta\": \"true\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" }, { \"id\": \"d1024e59-7d28-49d1-8222-af101965a939\", \"name\": \"us-west\", \"endpoints\": [ \"http:\\/\\/rgw2:80\" ], \"log_meta\": \"false\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"ae031368-8715-4e27-9a99-0c9468852cfe\" }", "radosgw-admin zonegroup set --infile zonegroup.json", "radosgw-admin period update --commit", "{ \"zonegroups\": [ { \"key\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"val\": { \"id\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"name\": \"us\", \"api_name\": \"us\", \"is_master\": \"true\", \"endpoints\": [ \"http:\\/\\/rgw1:80\" ], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"zones\": [ { \"id\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"name\": \"us-east\", \"endpoints\": [ \"http:\\/\\/rgw1\" ], \"log_meta\": \"true\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" }, { \"id\": \"d1024e59-7d28-49d1-8222-af101965a939\", \"name\": \"us-west\", \"endpoints\": [ \"http:\\/\\/rgw2:80\" ], \"log_meta\": \"false\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"ae031368-8715-4e27-9a99-0c9468852cfe\" } } ], \"master_zonegroup\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 } }", "radosgw-admin zonegroup-map set --infile zonegroupmap.json", "radosgw-admin period update --commit", "radosgw-admin zone create --rgw-zone= ZONE_NAME [--zonegroup= ZONE_GROUP_NAME ] [--endpoints= ENDPOINT_PORT [,<endpoint:port>] [--master] [--default] 
--access-key ACCESS_KEY --secret SECRET_KEY", "radosgw-admin period update --commit", "radosgw-admin period update --commit", "radosgw-admin zonegroup remove --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME", "radosgw-admin period update --commit", "radosgw-admin zone delete --rgw-zone= ZONE_NAME", "radosgw-admin period update --commit", "ceph osd pool delete DELETED_ZONE_NAME .rgw.control DELETED_ZONE_NAME .rgw.control --yes-i-really-really-mean-it ceph osd pool delete DELETED_ZONE_NAME .rgw.data.root DELETED_ZONE_NAME .rgw.data.root --yes-i-really-really-mean-it ceph osd pool delete DELETED_ZONE_NAME .rgw.log DELETED_ZONE_NAME .rgw.log --yes-i-really-really-mean-it ceph osd pool delete DELETED_ZONE_NAME .rgw.users.uid DELETED_ZONE_NAME .rgw.users.uid --yes-i-really-really-mean-it", "radosgw-admin zone modify [options] --access-key=<key> --secret/--secret-key=<key> --master --default --endpoints=<list>", "radosgw-admin period update --commit", "radosgw-admin zone list", "radosgw-admin zone get [--rgw-zone= ZONE_NAME ]", "{ \"domain_root\": \".rgw\", \"control_pool\": \".rgw.control\", \"gc_pool\": \".rgw.gc\", \"log_pool\": \".log\", \"intent_log_pool\": \".intent-log\", \"usage_log_pool\": \".usage\", \"user_keys_pool\": \".users\", \"user_email_pool\": \".users.email\", \"user_swift_pool\": \".users.swift\", \"user_uid_pool\": \".users.uid\", \"system_key\": { \"access_key\": \"\", \"secret_key\": \"\"}, \"placement_pools\": [ { \"key\": \"default-placement\", \"val\": { \"index_pool\": \".rgw.buckets.index\", \"data_pool\": \".rgw.buckets\"} } ] }", "radosgw-admin zone set --rgw-zone=test-zone --infile zone.json", "radosgw-admin period update --commit", "radosgw-admin zone rename --rgw-zone= ZONE_NAME --zone-new-name= NEW_ZONE_NAME", "radosgw-admin period update --commit", "firewall-cmd --zone=public --add-port=636/tcp firewall-cmd --zone=public --add-port=636/tcp --permanent", "certutil -d /etc/openldap/certs -A -t \"TC,,\" -n \"msad-frog-MSAD-FROG-CA\" -i /path/to/ldap.pem", "setsebool -P httpd_can_network_connect on", "chmod 644 /etc/openldap/certs/*", "ldapwhoami -H ldaps://rh-directory-server.example.com -d 9", "radosgw-admin metadata list user", "ldapsearch -x -D \"uid=ceph,ou=People,dc=example,dc=com\" -W -H ldaps://example.com -b \"ou=People,dc=example,dc=com\" -s sub 'uid=ceph'", "ceph config set client.rgw OPTION VALUE", "ceph config set client.rgw rgw_ldap_secret /etc/bindpass", "service_type: rgw service_id: rgw.1 service_name: rgw.rgw.1 placement: label: rgw extra_container_args: - -v - /etc/bindpass:/etc/bindpass", "ceph config set client.rgw OPTION VALUE", "ceph config set client.rgw rgw_ldap_uri ldaps://:636 ceph config set client.rgw rgw_ldap_binddn \"ou=poc,dc=example,dc=local\" ceph config set client.rgw rgw_ldap_searchdn \"ou=poc,dc=example,dc=local\" ceph config set client.rgw rgw_ldap_dnattr \"uid\" ceph config set client.rgw rgw_s3_auth_use_ldap true", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . 
ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "\"objectclass=inetorgperson\"", "\"(&(uid=joe)(objectclass=inetorgperson))\"", "\"(&(uid=@USERNAME@)(memberOf=cn=ceph-users,ou=groups,dc=mycompany,dc=com))\"", "export RGW_ACCESS_KEY_ID=\" USERNAME \"", "export RGW_SECRET_ACCESS_KEY=\" PASSWORD \"", "radosgw-token --encode --ttype=ldap", "radosgw-token --encode --ttype=ad", "export RGW_ACCESS_KEY_ID=\"ewogICAgIlJHV19UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAibGRhcCIsCiAgICAgICAgImlkIjogImNlcGgiLAogICAgICAgICJrZXkiOiAiODAwI0dvcmlsbGEiCiAgICB9Cn0K\"", "cat .aws/credentials [default] aws_access_key_id = ewogICaGbnjlwe9UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAiYWQiLAogICAgICAgICJpZCI6ICJjZXBoIiwKICAgICAgICAia2V5IjogInBhc3M0Q2VwaCIKICAgIH0KfQo= aws_secret_access_key =", "aws s3 ls --endpoint http://host03 2023-12-11 17:08:50 mybucket 2023-12-24 14:55:44 mybucket2", "radosgw-admin user info --uid dir1 { \"user_id\": \"dir1\", \"display_name\": \"dir1\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"subusers\": [], \"keys\": [], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"default_storage_class\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"ldap\", \"mfa_ids\": [] }", "radosgw-admin metadata list user", "ldapsearch -x -D \"uid=ceph,ou=People,dc=example,dc=com\" -W -H ldaps://example.com -b \"ou=People,dc=example,dc=com\" -s sub 'uid=ceph'", "ceph config set client.rgw OPTION VALUE", "ceph config set client.rgw rgw_ldap_secret /etc/bindpass", "service_type: rgw service_id: rgw.1 service_name: rgw.rgw.1 placement: label: rgw extra_container_args: - -v - /etc/bindpass:/etc/bindpass", "ceph config set client.rgw OPTION VALUE", "ceph config set client.rgw rgw_ldap_uri ldaps://_FQDN_:636 ceph config set client.rgw rgw_ldap_binddn \"_BINDDN_\" ceph config set client.rgw rgw_ldap_searchdn \"_SEARCHDN_\" ceph config set client.rgw rgw_ldap_dnattr \"cn\" ceph config set client.rgw rgw_s3_auth_use_ldap true", "rgw_ldap_binddn \"uid=ceph,cn=users,cn=accounts,dc=example,dc=com\"", "rgw_ldap_searchdn \"cn=users,cn=accounts,dc=example,dc=com\"", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . 
ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "export RGW_ACCESS_KEY_ID=\" USERNAME \"", "export RGW_SECRET_ACCESS_KEY=\" PASSWORD \"", "radosgw-token --encode --ttype=ldap", "radosgw-token --encode --ttype=ad", "export RGW_ACCESS_KEY_ID=\"ewogICAgIlJHV19UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAibGRhcCIsCiAgICAgICAgImlkIjogImNlcGgiLAogICAgICAgICJrZXkiOiAiODAwI0dvcmlsbGEiCiAgICB9Cn0K\"", "cat .aws/credentials [default] aws_access_key_id = ewogICaGbnjlwe9UT0tFTiI6IHsKICAgICAgICAidmVyc2lvbiI6IDEsCiAgICAgICAgInR5cGUiOiAiYWQiLAogICAgICAgICJpZCI6ICJjZXBoIiwKICAgICAgICAia2V5IjogInBhc3M0Q2VwaCIKICAgIH0KfQo= aws_secret_access_key =", "aws s3 ls --endpoint http://host03 2023-12-11 17:08:50 mybucket 2023-12-24 14:55:44 mybucket2", "radosgw-admin user info --uid dir1 { \"user_id\": \"dir1\", \"display_name\": \"dir1\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"subusers\": [], \"keys\": [], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"default_storage_class\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"ldap\", \"mfa_ids\": [] }", "openstack service create --name=swift --description=\"Swift Service\" object-store", "openstack endpoint create --region REGION_NAME swift admin \" URL \" openstack endpoint create --region REGION_NAME swift public \" URL \" openstack endpoint create --region REGION_NAME swift internal \" URL \"", "openstack endpoint create --region us-west swift admin \"http://radosgw.example.com:8080/swift/v1\" openstack endpoint create --region us-west swift public \"http://radosgw.example.com:8080/swift/v1\" openstack endpoint create --region us-west swift internal \"http://radosgw.example.com:8080/swift/v1\"", "openstack endpoint list --service=swift", "openstack endpoint show ENDPOINT_ID", "mkdir /var/ceph/nss openssl x509 -in /etc/keystone/ssl/certs/ca.pem -pubkey | certutil -d /var/ceph/nss -A -n ca -t \"TCu,Cu,Tuw\" openssl x509 -in /etc/keystone/ssl/certs/signing_cert.pem -pubkey | certutil -A -d /var/ceph/nss -n signing_cert -t \"P,P,P\"", "ceph config set client.rgw nss_db_path \"/var/lib/ceph/radosgw/ceph-rgw.rgw01/nss\"", "ceph config set client.rgw rgw_keystone_verify_ssl TRUE / FALSE ceph config set client.rgw rgw_s3_auth_use_keystone TRUE / FALSE ceph config set client.rgw rgw_keystone_api_version API_VERSION ceph config set client.rgw rgw_keystone_url KEYSTONE_URL : ADMIN_PORT ceph config set client.rgw rgw_keystone_accepted_roles ACCEPTED_ROLES_ ceph config set client.rgw rgw_keystone_accepted_admin_roles ACCEPTED_ADMIN_ROLES ceph config set client.rgw rgw_keystone_admin_domain default ceph config set client.rgw rgw_keystone_admin_project SERVICE_NAME ceph config set client.rgw rgw_keystone_admin_user KEYSTONE_TENANT_USER_NAME ceph config set client.rgw rgw_keystone_admin_password KEYSTONE_TENANT_USER_PASSWORD ceph config set client.rgw rgw_keystone_implicit_tenants KEYSTONE_IMPLICIT_TENANT_NAME ceph config set client.rgw rgw_swift_versioning_enabled TRUE / FALSE ceph config set client.rgw rgw_swift_enforce_content_length TRUE / FALSE ceph config set client.rgw rgw_swift_account_in_url 
TRUE / FALSE ceph config set client.rgw rgw_trust_forwarded_https TRUE / FALSE ceph config set client.rgw rgw_max_attr_name_len MAXIMUM_LENGTH_OF_METADATA_NAMES ceph config set client.rgw rgw_max_attrs_num_in_req MAXIMUM_NUMBER_OF_METADATA_ITEMS ceph config set client.rgw rgw_max_attr_size MAXIMUM_LENGTH_OF_METADATA_VALUE ceph config set client.rgw rgw_keystone_accepted_reader_roles SwiftSystemReader", "ceph config set client.rgw rgw_keystone_verify_ssl false ceph config set client.rgw rgw_s3_auth_use_keystone true ceph config set client.rgw rgw_keystone_api_version 3 ceph config set client.rgw rgw_keystone_url http://<public Keystone endpoint>:5000/ ceph config set client.rgw rgw_keystone_accepted_roles 'member, Member, admin' ceph config set client.rgw rgw_keystone_accepted_admin_roles 'ResellerAdmin, swiftoperator' ceph config set client.rgw rgw_keystone_admin_domain default ceph config set client.rgw rgw_keystone_admin_project service ceph config set client.rgw rgw_keystone_admin_user swift ceph config set client.rgw rgw_keystone_admin_password password ceph config set client.rgw rgw_keystone_implicit_tenants true ceph config set client.rgw rgw_swift_versioning_enabled true ceph config set client.rgw rgw_swift_enforce_content_length true ceph config set client.rgw rgw_swift_account_in_url true ceph config set client.rgw rgw_trust_forwarded_https true ceph config set client.rgw rgw_max_attr_name_len 128 ceph config set client.rgw rgw_max_attrs_num_in_req 90 ceph config set client.rgw rgw_max_attr_size 1024 ceph config set client.rgw rgw_keystone_accepted_reader_roles SwiftSystemReader", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "grubby --update-kernel=ALL --args=\"intel_iommu=on\"", "dnf install -y qatlib-service qatlib qatzip qatengine", "usermod -aG qat root", "cat /etc/sysconfig/qat ServicesEnabled=asym POLICY=8", "cat /etc/sysconfig/qat ServicesEnabled=dc POLICY=8", "cat /etc/sysconfig/qat ServicesEnabled=asym,dc POLICY=8", "sudo vim /etc/security/limits.conf root - memlock 500000 ceph - memlock 500000", "sudo su -l USDUSER", "systemctl enable qat", "systemctl reboot", "service_type: rgw service_id: rgw_qat placement: label: rgw extra_container_args: - \"-v /etc/group:/etc/group:ro\" - \"--group-add=keep-groups\" - \"--cap-add=SYS_ADMIN\" - \"--cap-add=SYS_PTRACE\" - \"--cap-add=IPC_LOCK\" - \"--security-opt seccomp=unconfined\" - \"--ulimit memlock=209715200:209715200\" - \"--device=/dev/qat_adf_ctl:/dev/qat_adf_ctl\" - \"--device=/dev/vfio/vfio:/dev/vfio/vfio\" - \"--device=/dev/vfio/333:/dev/vfio/333\" - \"--device=/dev/vfio/334:/dev/vfio/334\" - \"--device=/dev/vfio/335:/dev/vfio/335\" - \"--device=/dev/vfio/336:/dev/vfio/336\" - \"--device=/dev/vfio/337:/dev/vfio/337\" - \"--device=/dev/vfio/338:/dev/vfio/338\" - \"--device=/dev/vfio/339:/dev/vfio/339\" - \"--device=/dev/vfio/340:/dev/vfio/340\" - \"--device=/dev/vfio/341:/dev/vfio/341\" - \"--device=/dev/vfio/342:/dev/vfio/342\" - \"--device=/dev/vfio/343:/dev/vfio/343\" - \"--device=/dev/vfio/344:/dev/vfio/344\" - \"--device=/dev/vfio/345:/dev/vfio/345\" - \"--device=/dev/vfio/346:/dev/vfio/346\" - \"--device=/dev/vfio/347:/dev/vfio/347\" - \"--device=/dev/vfio/348:/dev/vfio/348\" - \"--device=/dev/vfio/349:/dev/vfio/349\" - \"--device=/dev/vfio/350:/dev/vfio/350\" - \"--device=/dev/vfio/351:/dev/vfio/351\" - 
\"--device=/dev/vfio/352:/dev/vfio/352\" - \"--device=/dev/vfio/353:/dev/vfio/353\" - \"--device=/dev/vfio/354:/dev/vfio/354\" - \"--device=/dev/vfio/355:/dev/vfio/355\" - \"--device=/dev/vfio/356:/dev/vfio/356\" - \"--device=/dev/vfio/357:/dev/vfio/357\" - \"--device=/dev/vfio/358:/dev/vfio/358\" - \"--device=/dev/vfio/359:/dev/vfio/359\" - \"--device=/dev/vfio/360:/dev/vfio/360\" - \"--device=/dev/vfio/361:/dev/vfio/361\" - \"--device=/dev/vfio/362:/dev/vfio/362\" - \"--device=/dev/vfio/363:/dev/vfio/363\" - \"--device=/dev/vfio/364:/dev/vfio/364\" - \"--device=/dev/vfio/365:/dev/vfio/365\" - \"--device=/dev/vfio/366:/dev/vfio/366\" - \"--device=/dev/vfio/367:/dev/vfio/367\" - \"--device=/dev/vfio/368:/dev/vfio/368\" - \"--device=/dev/vfio/369:/dev/vfio/369\" - \"--device=/dev/vfio/370:/dev/vfio/370\" - \"--device=/dev/vfio/371:/dev/vfio/371\" - \"--device=/dev/vfio/372:/dev/vfio/372\" - \"--device=/dev/vfio/373:/dev/vfio/373\" - \"--device=/dev/vfio/374:/dev/vfio/374\" - \"--device=/dev/vfio/375:/dev/vfio/375\" - \"--device=/dev/vfio/376:/dev/vfio/376\" - \"--device=/dev/vfio/377:/dev/vfio/377\" - \"--device=/dev/vfio/378:/dev/vfio/378\" - \"--device=/dev/vfio/379:/dev/vfio/379\" - \"--device=/dev/vfio/380:/dev/vfio/380\" - \"--device=/dev/vfio/381:/dev/vfio/381\" - \"--device=/dev/vfio/382:/dev/vfio/382\" - \"--device=/dev/vfio/383:/dev/vfio/383\" - \"--device=/dev/vfio/384:/dev/vfio/384\" - \"--device=/dev/vfio/385:/dev/vfio/385\" - \"--device=/dev/vfio/386:/dev/vfio/386\" - \"--device=/dev/vfio/387:/dev/vfio/387\" - \"--device=/dev/vfio/388:/dev/vfio/388\" - \"--device=/dev/vfio/389:/dev/vfio/389\" - \"--device=/dev/vfio/390:/dev/vfio/390\" - \"--device=/dev/vfio/391:/dev/vfio/391\" - \"--device=/dev/vfio/392:/dev/vfio/392\" - \"--device=/dev/vfio/393:/dev/vfio/393\" - \"--device=/dev/vfio/394:/dev/vfio/394\" - \"--device=/dev/vfio/395:/dev/vfio/395\" - \"--device=/dev/vfio/396:/dev/vfio/396\" - \"--device=/dev/vfio/devices/vfio0:/dev/vfio/devices/vfio0\" - \"--device=/dev/vfio/devices/vfio1:/dev/vfio/devices/vfio1\" - \"--device=/dev/vfio/devices/vfio2:/dev/vfio/devices/vfio2\" - \"--device=/dev/vfio/devices/vfio3:/dev/vfio/devices/vfio3\" - \"--device=/dev/vfio/devices/vfio4:/dev/vfio/devices/vfio4\" - \"--device=/dev/vfio/devices/vfio5:/dev/vfio/devices/vfio5\" - \"--device=/dev/vfio/devices/vfio6:/dev/vfio/devices/vfio6\" - \"--device=/dev/vfio/devices/vfio7:/dev/vfio/devices/vfio7\" - \"--device=/dev/vfio/devices/vfio8:/dev/vfio/devices/vfio8\" - \"--device=/dev/vfio/devices/vfio9:/dev/vfio/devices/vfio9\" - \"--device=/dev/vfio/devices/vfio10:/dev/vfio/devices/vfio10\" - \"--device=/dev/vfio/devices/vfio11:/dev/vfio/devices/vfio11\" - \"--device=/dev/vfio/devices/vfio12:/dev/vfio/devices/vfio12\" - \"--device=/dev/vfio/devices/vfio13:/dev/vfio/devices/vfio13\" - \"--device=/dev/vfio/devices/vfio14:/dev/vfio/devices/vfio14\" - \"--device=/dev/vfio/devices/vfio15:/dev/vfio/devices/vfio15\" - \"--device=/dev/vfio/devices/vfio16:/dev/vfio/devices/vfio16\" - \"--device=/dev/vfio/devices/vfio17:/dev/vfio/devices/vfio17\" - \"--device=/dev/vfio/devices/vfio18:/dev/vfio/devices/vfio18\" - \"--device=/dev/vfio/devices/vfio19:/dev/vfio/devices/vfio19\" - \"--device=/dev/vfio/devices/vfio20:/dev/vfio/devices/vfio20\" - \"--device=/dev/vfio/devices/vfio21:/dev/vfio/devices/vfio21\" - \"--device=/dev/vfio/devices/vfio22:/dev/vfio/devices/vfio22\" - \"--device=/dev/vfio/devices/vfio23:/dev/vfio/devices/vfio23\" - \"--device=/dev/vfio/devices/vfio24:/dev/vfio/devices/vfio24\" - 
\"--device=/dev/vfio/devices/vfio25:/dev/vfio/devices/vfio25\" - \"--device=/dev/vfio/devices/vfio26:/dev/vfio/devices/vfio26\" - \"--device=/dev/vfio/devices/vfio27:/dev/vfio/devices/vfio27\" - \"--device=/dev/vfio/devices/vfio28:/dev/vfio/devices/vfio28\" - \"--device=/dev/vfio/devices/vfio29:/dev/vfio/devices/vfio29\" - \"--device=/dev/vfio/devices/vfio30:/dev/vfio/devices/vfio30\" - \"--device=/dev/vfio/devices/vfio31:/dev/vfio/devices/vfio31\" - \"--device=/dev/vfio/devices/vfio32:/dev/vfio/devices/vfio32\" - \"--device=/dev/vfio/devices/vfio33:/dev/vfio/devices/vfio33\" - \"--device=/dev/vfio/devices/vfio34:/dev/vfio/devices/vfio34\" - \"--device=/dev/vfio/devices/vfio35:/dev/vfio/devices/vfio35\" - \"--device=/dev/vfio/devices/vfio36:/dev/vfio/devices/vfio36\" - \"--device=/dev/vfio/devices/vfio37:/dev/vfio/devices/vfio37\" - \"--device=/dev/vfio/devices/vfio38:/dev/vfio/devices/vfio38\" - \"--device=/dev/vfio/devices/vfio39:/dev/vfio/devices/vfio39\" - \"--device=/dev/vfio/devices/vfio40:/dev/vfio/devices/vfio40\" - \"--device=/dev/vfio/devices/vfio41:/dev/vfio/devices/vfio41\" - \"--device=/dev/vfio/devices/vfio42:/dev/vfio/devices/vfio42\" - \"--device=/dev/vfio/devices/vfio43:/dev/vfio/devices/vfio43\" - \"--device=/dev/vfio/devices/vfio44:/dev/vfio/devices/vfio44\" - \"--device=/dev/vfio/devices/vfio45:/dev/vfio/devices/vfio45\" - \"--device=/dev/vfio/devices/vfio46:/dev/vfio/devices/vfio46\" - \"--device=/dev/vfio/devices/vfio47:/dev/vfio/devices/vfio47\" - \"--device=/dev/vfio/devices/vfio48:/dev/vfio/devices/vfio48\" - \"--device=/dev/vfio/devices/vfio49:/dev/vfio/devices/vfio49\" - \"--device=/dev/vfio/devices/vfio50:/dev/vfio/devices/vfio50\" - \"--device=/dev/vfio/devices/vfio51:/dev/vfio/devices/vfio51\" - \"--device=/dev/vfio/devices/vfio52:/dev/vfio/devices/vfio52\" - \"--device=/dev/vfio/devices/vfio53:/dev/vfio/devices/vfio53\" - \"--device=/dev/vfio/devices/vfio54:/dev/vfio/devices/vfio54\" - \"--device=/dev/vfio/devices/vfio55:/dev/vfio/devices/vfio55\" - \"--device=/dev/vfio/devices/vfio56:/dev/vfio/devices/vfio56\" - \"--device=/dev/vfio/devices/vfio57:/dev/vfio/devices/vfio57\" - \"--device=/dev/vfio/devices/vfio58:/dev/vfio/devices/vfio58\" - \"--device=/dev/vfio/devices/vfio59:/dev/vfio/devices/vfio59\" - \"--device=/dev/vfio/devices/vfio60:/dev/vfio/devices/vfio60\" - \"--device=/dev/vfio/devices/vfio61:/dev/vfio/devices/vfio61\" - \"--device=/dev/vfio/devices/vfio62:/dev/vfio/devices/vfio62\" - \"--device=/dev/vfio/devices/vfio63:/dev/vfio/devices/vfio63\" networks: - 172.17.8.0/24 spec: rgw_frontend_port: 8000", "plugin crypto accelerator = crypto_qat", "qat compressor enabled=true", "[user@client ~]USD vi bucket-encryption.json", "{ \"Rules\": [ { \"ApplyServerSideEncryptionByDefault\": { \"SSEAlgorithm\": \"AES256\" } } ] }", "aws --endpoint-url=pass:q[_RADOSGW_ENDPOINT_URL_]:pass:q[_PORT_] s3api put-bucket-encryption --bucket pass:q[_BUCKET_NAME_] --server-side-encryption-configuration pass:q[_file://PATH_TO_BUCKET_ENCRYPTION_CONFIGURATION_FILE/BUCKET_ENCRYPTION_CONFIGURATION_FILE.json_]", "[user@client ~]USD aws --endpoint-url=http://host01:80 s3api put-bucket-encryption --bucket testbucket --server-side-encryption-configuration file://bucket-encryption.json", "aws --endpoint-url=pass:q[_RADOSGW_ENDPOINT_URL_]:pass:q[_PORT_] s3api get-bucket-encryption --bucket BUCKET_NAME", "[user@client ~]USD aws --profile ceph --endpoint=http://host01:80 s3api get-bucket-encryption --bucket testbucket { \"ServerSideEncryptionConfiguration\": { \"Rules\": [ { 
\"ApplyServerSideEncryptionByDefault\": { \"SSEAlgorithm\": \"AES256\" } } ] } }", "aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api delete-bucket-encryption --bucket BUCKET_NAME", "[user@client ~]USD aws --endpoint-url=http://host01:80 s3api delete-bucket-encryption --bucket testbucket", "aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-encryption --bucket BUCKET_NAME", "[user@client ~]USD aws --endpoint=http://host01:80 s3api get-bucket-encryption --bucket testbucket An error occurred (ServerSideEncryptionConfigurationNotFoundError) when calling the GetBucketEncryption operation: The server side encryption configuration was not found", "frontend http_web *:80 mode http default_backend rgw frontend rgw\\u00ad-https bind *:443 ssl crt /etc/ssl/private/example.com.pem default_backend rgw backend rgw balance roundrobin mode http server rgw1 10.0.0.71:8080 check server rgw2 10.0.0.80:8080 check", "frontend http_web *:80 mode http default_backend rgw frontend rgw\\u00ad-https bind *:443 ssl crt /etc/ssl/private/example.com.pem http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto https here we set the incoming HTTPS port on the load balancer (eg : 443) http-request set-header X-Forwarded-Port 443 default_backend rgw backend rgw balance roundrobin mode http server rgw1 10.0.0.71:8080 check server rgw2 10.0.0.80:8080 check", "ceph config set client.rgw rgw_trust_forwarded_https true", "systemctl enable haproxy systemctl start haproxy", "ceph config set client.rgw rgw_crypt_vault_secret_engine transit compat=0", "ceph config set client.rgw rgw_crypt_vault_secret_engine transit compat=1", "ceph config set client.rgw rgw_crypt_vault_secret_engine transit compat=2", "vault policy write rgw-kv-policy -<<EOF path \"secret/data/*\" { capabilities = [\"read\"] } EOF", "vault policy write rgw-transit-policy -<<EOF path \"transit/keys/*\" { capabilities = [ \"create\", \"update\" ] denied_parameters = {\"exportable\" = [], \"allow_plaintext_backup\" = [] } } path \"transit/keys/*\" { capabilities = [\"read\", \"delete\"] } path \"transit/keys/\" { capabilities = [\"list\"] } path \"transit/keys/+/rotate\" { capabilities = [ \"update\" ] } path \"transit/*\" { capabilities = [ \"update\" ] } EOF", "vault policy write old-rgw-transit-policy -<<EOF path \"transit/export/encryption-key/*\" { capabilities = [\"read\"] } EOF", "ceph config set client.rgw rgw_crypt_s3_kms_backend vault", "ceph config set client.rgw rgw_crypt_vault_auth agent ceph config set client.rgw rgw_crypt_vault_addr http:// VAULT_SERVER :8100", "vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .data.role_id > PATH_TO_FILE", "vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .data.secret_id > PATH_TO_FILE", "pid_file = \"/run/kv-vault-agent-pid\" auto_auth { method \"AppRole\" { mount_path = \"auth/approle\" config = { role_id_file_path =\"/root/vault_configs/kv-agent-role-id\" secret_id_file_path =\"/root/vault_configs/kv-agent-secret-id\" remove_secret_id_file_after_reading =\"false\" } } } cache { use_auto_auth_token = true } listener \"tcp\" { address = \"127.0.0.1:8100\" tls_disable = true } vault { address = \"http://10.8.128.9:8200\" }", "/usr/local/bin/vault agent -config=/usr/local/etc/vault/rgw-agent.hcl", "ceph config set client.rgw rgw_crypt_vault_secret_engine kv", "ceph config set client.rgw rgw_crypt_vault_secret_engine transit", "ceph config set client.rgw rgw_crypt_vault_namespace testnamespace1", "ceph config set 
client.rgw rgw_crypt_vault_prefix /v1/secret/data", "ceph config set client.rgw rgw_crypt_vault_prefix /v1/transit/export/encryption-key", "http://vault-server:8200/v1/transit/export/encryption-key", "systemctl restart ceph- CLUSTER_ID@SERVICE_TYPE . ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "cephadm shell", "ceph config set client.rgw rgw_crypt_sse_s3_backend vault", "ceph config set client.rgw rgw_crypt_sse_s3_vault_auth agent ceph config set client.rgw rgw_crypt_sse_s3_vault_addr http:// VAULT_AGENT : VAULT_AGENT_PORT", "ceph config set client.rgw rgw_crypt_sse_s3_vault_auth agent ceph config set client.rgw rgw_crypt_sse_s3_vault_addr http://vaultagent:8100", "vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .rgw-ap-role-id > PATH_TO_FILE", "vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .rgw-ap-secret-id > PATH_TO_FILE", "pid_file = \"/run/rgw-vault-agent-pid\" auto_auth { method \"AppRole\" { mount_path = \"auth/approle\" config = { role_id_file_path =\"/usr/local/etc/vault/.rgw-ap-role-id\" secret_id_file_path =\"/usr/local/etc/vault/.rgw-ap-secret-id\" remove_secret_id_file_after_reading =\"false\" } } } cache { use_auto_auth_token = true } listener \"tcp\" { address = \"127.0.0.1:8100\" tls_disable = true } vault { address = \"https://vaultserver:8200\" }", "/usr/local/bin/vault agent -config=/usr/local/etc/vault/rgw-agent.hcl", "ceph config set client.rgw rgw_crypt_sse_s3_vault_secret_engine kv", "ceph config set client.rgw rgw_crypt_sse_s3_vault_secret_engine transit", "ceph config set client.rgw rgw_crypt_sse_s3_vault_namespace company/testnamespace1", "ceph config set client.rgw rgw_crypt_sse_s3_vault_prefix /v1/secret/data", "ceph config set client.rgw rgw_crypt_sse_s3_vault_prefix /v1/transit", "http://vaultserver:8200/v1/transit", "ceph config set client.rgw rgw_crypt_sse_s3_vault_verify_ssl true ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_cacert PATH_TO_CA_CERTIFICATE ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientcert PATH_TO_CLIENT_CERTIFICATE ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientkey PATH_TO_PRIVATE_KEY", "ceph config set client.rgw rgw_crypt_sse_s3_vault_verify_ssl true ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_cacert /etc/ceph/vault.ca ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientcert /etc/ceph/vault.crt ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientkey /etc/ceph/vault.key", "systemctl restart ceph- CLUSTER_ID@SERVICE_TYPE . 
ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "vault secrets enable -path secret kv-v2", "vault kv put secret/ PROJECT_NAME / BUCKET_NAME key=USD(openssl rand -base64 32)", "vault kv put secret/myproject/mybucketkey key=USD(openssl rand -base64 32) ====== Metadata ====== Key Value --- ---- created_time 2020-02-21T17:01:09.095824999Z deletion_time n/a destroyed false version 1", "vault secrets enable transit", "vault write -f transit/keys/ BUCKET_NAME exportable=true", "vault write -f transit/keys/mybucketkey exportable=true", "vault read transit/export/encryption-key/ BUCKET_NAME / VERSION_NUMBER", "vault read transit/export/encryption-key/mybucketkey/1 Key Value --- ----- keys map[1:-gbTI9lNpqv/V/2lDcmH2Nq1xKn6FPDWarCmFM2aNsQ=] name mybucketkey type aes256-gcm96", "[user@client ~]USD aws --endpoint=http://radosgw:8000 s3 cp plaintext.txt s3://mybucket/encrypted.txt --sse=aws:kms --sse-kms-key-id myproject/mybucketkey", "[user@client ~]USD aws s3api --endpoint http://rgw_host:8080 put-object --bucket my-bucket --key obj1 --body test_file_to_upload --server-side-encryption AES256", "[user@client ~]USD aws --endpoint=http://radosgw:8000 s3 cp plaintext.txt s3://mybucket/encrypted.txt --sse=aws:kms --sse-kms-key-id mybucketkey", "[user@client ~]USD aws s3api --endpoint http://rgw_host:8080 put-object --bucket my-bucket --key obj1 --body test_file_to_upload --server-side-encryption AES256", "[user@host01 ~]USD SEED=USD(head -10 /dev/urandom | sha512sum | cut -b 1-30)", "[user@host01 ~]USD echo USDSEED 492dedb20cf51d1405ef6a1316017e", "radosgw-admin mfa create --uid= USERID --totp-serial= SERIAL --totp-seed= SEED --totp-seed-type= SEED_TYPE --totp-seconds= TOTP_SECONDS --totp-window= TOTP_WINDOW", "radosgw-admin mfa create --uid=johndoe --totp-serial=MFAtest --totp-seed=492dedb20cf51d1405ef6a1316017e", "radosgw-admin mfa check --uid= USERID --totp-serial= SERIAL --totp-pin= PIN", "radosgw-admin mfa check --uid=johndoe --totp-serial=MFAtest --totp-pin=870305 ok", "radosgw-admin mfa resync --uid= USERID --totp-serial= SERIAL --totp-pin= PREVIOUS_PIN --totp=pin= CURRENT_PIN", "radosgw-admin mfa resync --uid=johndoe --totp-serial=MFAtest --totp-pin=802021 --totp-pin=439996", "radosgw-admin mfa check --uid= USERID --totp-serial= SERIAL --totp-pin= PIN", "radosgw-admin mfa check --uid=johndoe --totp-serial=MFAtest --totp-pin=870305 ok", "radosgw-admin mfa list --uid= USERID", "radosgw-admin mfa list --uid=johndoe { \"entries\": [ { \"type\": 2, \"id\": \"MFAtest\", \"seed\": \"492dedb20cf51d1405ef6a1316017e\", \"seed_type\": \"hex\", \"time_ofs\": 0, \"step_size\": 30, \"window\": 2 } ] }", "radosgw-admin mfa get --uid= USERID --totp-serial= SERIAL", "radosgw-admin mfa remove --uid= USERID --totp-serial= SERIAL", "radosgw-admin mfa remove --uid=johndoe --totp-serial=MFAtest", "radosgw-admin mfa get --uid= USERID --totp-serial= SERIAL", "radosgw-admin mfa get --uid=johndoe --totp-serial=MFAtest MFA serial id not found", "radosgw-admin zonegroup --rgw-zonegroup= ZONE_GROUP_NAME get > FILE_NAME .json", "radosgw-admin zonegroup --rgw-zonegroup=default get > zonegroup.json", "{ \"name\": \"default\", \"api_name\": \"\", \"is_master\": \"true\", \"endpoints\": [], \"hostnames\": [], \"master_zone\": \"\", \"zones\": [{ \"name\": \"default\", \"endpoints\": [], \"log_meta\": \"false\", \"log_data\": \"false\", \"bucket_index_max_shards\": 5 }], \"placement_targets\": [{ \"name\": 
\"default-placement\", \"tags\": [] }, { \"name\": \"special-placement\", \"tags\": [] }], \"default_placement\": \"default-placement\" }", "radosgw-admin zonegroup set < zonegroup.json", "radosgw-admin zone get > zone.json", "{ \"domain_root\": \".rgw\", \"control_pool\": \".rgw.control\", \"gc_pool\": \".rgw.gc\", \"log_pool\": \".log\", \"intent_log_pool\": \".intent-log\", \"usage_log_pool\": \".usage\", \"user_keys_pool\": \".users\", \"user_email_pool\": \".users.email\", \"user_swift_pool\": \".users.swift\", \"user_uid_pool\": \".users.uid\", \"system_key\": { \"access_key\": \"\", \"secret_key\": \"\" }, \"placement_pools\": [{ \"key\": \"default-placement\", \"val\": { \"index_pool\": \".rgw.buckets.index\", \"data_pool\": \".rgw.buckets\", \"data_extra_pool\": \".rgw.buckets.extra\" } }, { \"key\": \"special-placement\", \"val\": { \"index_pool\": \".rgw.buckets.index\", \"data_pool\": \".rgw.buckets.special\", \"data_extra_pool\": \".rgw.buckets.extra\" } }] }", "radosgw-admin zone set < zone.json", "radosgw-admin period update --commit", "curl -i http://10.0.0.1/swift/v1/TestContainer/file.txt -X PUT -H \"X-Storage-Policy: special-placement\" -H \"X-Auth-Token: AUTH_rgwtxxxxxx\"", "radosgw-admin zonegroup placement add --rgw-zonegroup=\"default\" --placement-id=\"indexless-placement\"", "radosgw-admin zone placement add --rgw-zone=\"default\" --placement-id=\"indexless-placement\" --data-pool=\"default.rgw.buckets.data\" --index-pool=\"default.rgw.buckets.index\" --data_extra_pool=\"default.rgw.buckets.non-ec\" --placement-index-type=\"indexless\"", "radosgw-admin zonegroup placement default --placement-id \"indexless-placement\"", "radosgw-admin period update --commit", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "ln: failed to access '/tmp/rgwrbi-object-list.4053207': No such file or directory", "/usr/bin/rgw-restore-bucket-index -b bucket-large-1 -p local-zone.rgw.buckets.data marker is d8a347a4-99b6-4312-a5c1-75b83904b3d4.41610.2 bucket_id is d8a347a4-99b6-4312-a5c1-75b83904b3d4.41610.2 number of bucket index shards is 5 data pool is local-zone.rgw.buckets.data NOTICE: This tool is currently considered EXPERIMENTAL. The list of objects that we will attempt to restore can be found in \"/tmp/rgwrbi-object-list.49946\". Please review the object names in that file (either below or in another window/terminal) before proceeding. Type \"proceed!\" to proceed, \"view\" to view object list, or \"q\" to quit: view Viewing Type \"proceed!\" to proceed, \"view\" to view object list, or \"q\" to quit: proceed! Proceeding NOTICE: Bucket stats are currently incorrect. They can be restored with the following command after 2 minutes: radosgw-admin bucket list --bucket=bucket-large-1 --allow-unordered --max-entries=1073741824 Would you like to take the time to recalculate bucket stats now? [yes/no] yes Done real 2m16.530s user 0m1.082s sys 0m0.870s", "time rgw-restore-bucket-index --proceed serp-bu-ver-1 default.rgw.buckets.data NOTICE: This tool is currently considered EXPERIMENTAL. marker is e871fb65-b87f-4c16-a7c3-064b66feb1c4.25076.5 bucket_id is e871fb65-b87f-4c16-a7c3-064b66feb1c4.25076.5 Error: this bucket appears to be versioned, and this tool cannot work with versioned buckets.", "Bucket _BUCKET_NAME_ already has too many log generations (4) from previous reshards that peer zones haven't finished syncing. 
Resharding is not recommended until the old generations sync, but you can force a reshard with `--yes-i-really-mean-it`.", "number of objects expected in a bucket / 100,000", "ceph config set client.rgw rgw_override_bucket_index_max_shards VALUE", "ceph config set client.rgw rgw_override_bucket_index_max_shards 12", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "number of objects expected in a bucket / 100,000", "radosgw-admin zonegroup get > zonegroup.json", "bucket_index_max_shards = VALUE", "bucket_index_max_shards = 12", "radosgw-admin zonegroup set < zonegroup.json", "radosgw-admin period update --commit", "radosgw-admin reshard status --bucket BUCKET_NAME", "radosgw-admin reshard status --bucket data", "radosgw-admin sync status", "radosgw-admin period get", "ceph config set client.rgw OPTION VALUE", "ceph config set client.rgw rgw_reshard_num_logs 23", "radosgw-admin reshard add --bucket BUCKET --num-shards NUMBER", "radosgw-admin reshard add --bucket data --num-shards 10", "radosgw-admin reshard list", "radosgw-admin bucket layout --bucket data { \"layout\": { \"resharding\": \"None\", \"current_index\": { \"gen\": 1, \"layout\": { \"type\": \"Normal\", \"normal\": { \"num_shards\": 23, \"hash_type\": \"Mod\" } } }, \"logs\": [ { \"gen\": 0, \"layout\": { \"type\": \"InIndex\", \"in_index\": { \"gen\": 0, \"layout\": { \"num_shards\": 11, \"hash_type\": \"Mod\" } } } }, { \"gen\": 1, \"layout\": { \"type\": \"InIndex\", \"in_index\": { \"gen\": 1, \"layout\": { \"num_shards\": 23, \"hash_type\": \"Mod\" } } } } ] } }", "radosgw-admin reshard status --bucket BUCKET", "radosgw-admin reshard status --bucket data", "radosgw-admin reshard process", "radosgw-admin reshard cancel --bucket BUCKET", "radosgw-admin reshard cancel --bucket data", "radosgw-admin reshard status --bucket BUCKET", "radosgw-admin reshard status --bucket data", "radosgw-admin sync status", "radosgw-admin zonegroup modify --rgw-zonegroup= ZONEGROUP_NAME --enable-feature=resharding", "radosgw-admin zonegroup modify --rgw-zonegroup=us --enable-feature=resharding", "radosgw-admin period update --commit", "radosgw-admin zone modify --rgw-zone= ZONE_NAME --enable-feature=resharding", "radosgw-admin zone modify --rgw-zone=us-east --enable-feature=resharding", "radosgw-admin period update --commit", "radosgw-admin period get \"zones\": [ { \"id\": \"505b48db-6de0-45d5-8208-8c98f7b1278d\", \"name\": \"us_east\", \"endpoints\": [ \"http://10.0.208.11:8080\" ], \"log_meta\": \"false\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\", \"tier_type\": \"\", \"sync_from_all\": \"true\", \"sync_from\": [], \"redirect_zone\": \"\", \"supported_features\": [ \"resharding\" ] \"default_placement\": \"default-placement\", \"realm_id\": \"26cf6f23-c3a0-4d57-aae4-9b0010ee55cc\", \"sync_policy\": { \"groups\": [] }, \"enabled_features\": [ \"resharding\" ]", "radosgw-admin sync status realm 26cf6f23-c3a0-4d57-aae4-9b0010ee55cc (usa) zonegroup 33a17718-6c77-493e-99fe-048d3110a06e (us) zone 505b48db-6de0-45d5-8208-8c98f7b1278d (us_east) zonegroup features enabled: resharding", "radosgw-admin zonegroup modify --rgw-zonegroup= ZONEGROUP_NAME --disable-feature=resharding", "radosgw-admin zonegroup modify --rgw-zonegroup=us --disable-feature=resharding", "radosgw-admin period update --commit", "radosgw-admin bi list --bucket= BUCKET > BUCKET .list.backup", "radosgw-admin bi list --bucket=data > data.list.backup", "radosgw-admin bucket reshard --bucket= BUCKET --num-shards= NUMBER", "radosgw-admin 
bucket reshard --bucket=data --num-shards=100", "radosgw-admin reshard status --bucket bucket", "radosgw-admin reshard status --bucket data", "radosgw-admin reshard stale-instances list", "radosgw-admin reshard stale-instances rm", "radosgw-admin reshard status --bucket BUCKET", "radosgw-admin reshard status --bucket data", "[root@host01 ~] radosgw-admin zone placement modify --rgw-zone=default --placement-id=default-placement --compression=zlib { \"placement_pools\": [ { \"key\": \"default-placement\", \"val\": { \"index_pool\": \"default.rgw.buckets.index\", \"data_pool\": \"default.rgw.buckets.data\", \"data_extra_pool\": \"default.rgw.buckets.non-ec\", \"index_type\": 0, \"compression\": \"zlib\" } } ], }", "radosgw-admin bucket stats --bucket= BUCKET_NAME { \"usage\": { \"rgw.main\": { \"size\": 1075028, \"size_actual\": 1331200, \"size_utilized\": 592035, \"size_kb\": 1050, \"size_kb_actual\": 1300, \"size_kb_utilized\": 579, \"num_objects\": 104 } }, }", "radosgw-admin user <create|modify|info|rm|suspend|enable|check|stats> <--uid= USER_ID |--subuser= SUB_USER_NAME > [other-options]", "radosgw-admin --tenant testx --uid tester --display-name \"Test User\" --access_key TESTER --secret test123 user create", "radosgw-admin --tenant testx --uid tester --display-name \"Test User\" --subuser tester:swift --key-type swift --access full subuser create radosgw-admin key create --subuser 'testxUSDtester:swift' --key-type swift --secret test123", "radosgw-admin user create --uid= USER_ID [--key-type= KEY_TYPE ] [--gen-access-key|--access-key= ACCESS_KEY ] [--gen-secret | --secret= SECRET_KEY ] [--email= EMAIL ] --display-name= DISPLAY_NAME", "radosgw-admin user create --uid=janedoe --access-key=11BS02LGFB6AL6H1ADMW --secret=vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY [email protected] --display-name=Jane Doe", "{ \"user_id\": \"janedoe\", \"display_name\": \"Jane Doe\", \"email\": \"[email protected]\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"janedoe\", \"access_key\": \"11BS02LGFB6AL6H1ADMW\", \"secret_key\": \"vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY\"}], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"temp_url_keys\": []}", "radosgw-admin subuser create --uid= USER_ID --subuser= SUB_USER_ID --access=[ read | write | readwrite | full ]", "radosgw-admin subuser create --uid=janedoe --subuser=janedoe:swift --access=full { \"user_id\": \"janedoe\", \"display_name\": \"Jane Doe\", \"email\": \"[email protected]\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"janedoe:swift\", \"permissions\": \"full-control\"}], \"keys\": [ { \"user\": \"janedoe\", \"access_key\": \"11BS02LGFB6AL6H1ADMW\", \"secret_key\": \"vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY\"}], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1}, \"temp_url_keys\": []}", "radosgw-admin user info --uid=janedoe", "radosgw-admin user info --uid=janedoe --tenant=test", "radosgw-admin user modify --uid=janedoe --display-name=\"Jane E. 
Doe\"", "radosgw-admin subuser modify --subuser=janedoe:swift --access=full", "radosgw-admin user suspend --uid=johndoe", "radosgw-admin user enable --uid=johndoe", "radosgw-admin user rm --uid= USER_ID [--purge-keys] [--purge-data]", "radosgw-admin user rm --uid=johndoe --purge-data", "radosgw-admin subuser rm --subuser=johndoe:swift --purge-keys", "radosgw-admin subuser rm --subuser= SUB_USER_ID", "radosgw-admin subuser rm --subuser=johndoe:swift", "radosgw-admin user rename --uid= CURRENT_USER_NAME --new-uid= NEW_USER_NAME", "radosgw-admin user rename --uid=user1 --new-uid=user2 { \"user_id\": \"user2\", \"display_name\": \"user 2\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"user2\", \"access_key\": \"59EKHI6AI9F8WOW8JQZJ\", \"secret_key\": \"XH0uY3rKCUcuL73X0ftjXbZqUbk0cavD11rD8MsA\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }", "radosgw-admin user rename --uid USER_NAME --new-uid NEW_USER_NAME --tenant TENANT", "radosgw-admin user rename --uid=testUSDuser1 --new-uid=testUSDuser2 --tenant test 1000 objects processed in tvtester1. Next marker 80_tVtester1_99 2000 objects processed in tvtester1. Next marker 64_tVtester1_44 3000 objects processed in tvtester1. Next marker 48_tVtester1_28 4000 objects processed in tvtester1. Next marker 2_tVtester1_74 5000 objects processed in tvtester1. Next marker 14_tVtester1_53 6000 objects processed in tvtester1. Next marker 87_tVtester1_61 7000 objects processed in tvtester1. Next marker 6_tVtester1_57 8000 objects processed in tvtester1. Next marker 52_tVtester1_91 9000 objects processed in tvtester1. Next marker 34_tVtester1_74 9900 objects processed in tvtester1. Next marker 9_tVtester1_95 1000 objects processed in tvtester2. Next marker 82_tVtester2_93 2000 objects processed in tvtester2. Next marker 64_tVtester2_9 3000 objects processed in tvtester2. Next marker 48_tVtester2_22 4000 objects processed in tvtester2. Next marker 32_tVtester2_42 5000 objects processed in tvtester2. Next marker 16_tVtester2_36 6000 objects processed in tvtester2. Next marker 89_tVtester2_46 7000 objects processed in tvtester2. Next marker 70_tVtester2_78 8000 objects processed in tvtester2. Next marker 51_tVtester2_41 9000 objects processed in tvtester2. Next marker 33_tVtester2_32 9900 objects processed in tvtester2. 
Next marker 9_tVtester2_83 { \"user_id\": \"testUSDuser2\", \"display_name\": \"User 2\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"testUSDuser2\", \"access_key\": \"user2\", \"secret_key\": \"123456789\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }", "radosgw-admin user info --uid= NEW_USER_NAME", "radosgw-admin user info --uid=user2", "radosgw-admin user info --uid= TENANT USD USER_NAME", "radosgw-admin user info --uid=testUSDuser2", "radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret { \"user_id\": \"johndoe\", \"rados_uid\": 0, \"display_name\": \"John Doe\", \"email\": \"[email protected]\", \"suspended\": 0, \"subusers\": [ { \"id\": \"johndoe:swift\", \"permissions\": \"full-control\"}], \"keys\": [ { \"user\": \"johndoe\", \"access_key\": \"QFAMEDSJP5DEKJO0DDXY\", \"secret_key\": \"iaSFLDVvDdQt6lkNzHyW4fPLZugBAI1g17LO0+87\"}], \"swift_keys\": [ { \"user\": \"johndoe:swift\", \"secret_key\": \"E9T2rUZNu2gxUjcwUBO8n\\/Ev4KX6\\/GprEuH4qhu1\"}]}", "radosgw-admin key create --uid=johndoe --key-type=s3 --gen-access-key --gen-secret", "radosgw-admin user info --uid=johndoe", "radosgw-admin user info --uid=johndoe { \"user_id\": \"johndoe\", \"keys\": [ { \"user\": \"johndoe\", \"access_key\": \"0555b35654ad1656d804\", \"secret_key\": \"h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==\" } ], }", "radosgw-admin key rm --uid= USER_ID --access-key ACCESS_KEY", "radosgw-admin key rm --uid=johndoe --access-key 0555b35654ad1656d804", "radosgw-admin caps add --uid= USER_ID --caps= CAPS", "--caps=\"[users|buckets|metadata|usage|zone]=[*|read|write|read, write]\"", "radosgw-admin caps add --uid=johndoe --caps=\"users=*\"", "radosgw-admin caps remove --uid=johndoe --caps={caps}", "radosgw-admin role create --role-name= ROLE_NAME [--path==\" PATH_TO_FILE \"] [--assume-role-policy-doc= TRUST_RELATIONSHIP_POLICY_DOCUMENT ]", "radosgw-admin role create --role-name=S3Access1 --path=/application_abc/component_xyz/ --assume-role-policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\{\\\"AWS\\\":\\[\\\"arn:aws:iam:::user/TESTER\\\"\\]\\},\\\"Action\\\":\\[\\\"sts:AssumeRole\\\"\\]\\}\\]\\} { \"RoleId\": \"ca43045c-082c-491a-8af1-2eebca13deec\", \"RoleName\": \"S3Access1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/S3Access1\", \"CreateDate\": \"2022-06-17T10:18:29.116Z\", \"MaxSessionDuration\": 3600, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/TESTER\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }", "radosgw-admin role get --role-name= ROLE_NAME", "radosgw-admin role get --role-name=S3Access1 { \"RoleId\": \"ca43045c-082c-491a-8af1-2eebca13deec\", \"RoleName\": \"S3Access1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/S3Access1\", \"CreateDate\": \"2022-06-17T10:18:29.116Z\", \"MaxSessionDuration\": 3600, 
\"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/TESTER\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }", "radosgw-admin role list", "radosgw-admin role list [ { \"RoleId\": \"85fb46dd-a88a-4233-96f5-4fb54f4353f7\", \"RoleName\": \"kvm-sts\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/kvm-sts\", \"CreateDate\": \"2022-09-13T11:55:09.39Z\", \"MaxSessionDuration\": 7200, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/kvm\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }, { \"RoleId\": \"9116218d-4e85-4413-b28d-cdfafba24794\", \"RoleName\": \"kvm-sts-1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/kvm-sts-1\", \"CreateDate\": \"2022-09-16T00:05:57.483Z\", \"MaxSessionDuration\": 3600, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/kvm\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" } ]", "radosgw-admin role-trust-policy modify --role-name= ROLE_NAME --assume-role-policy-doc= TRUST_RELATIONSHIP_POLICY_DOCUMENT", "radosgw-admin role-trust-policy modify --role-name=S3Access1 --assume-role-policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\{\\\"AWS\\\":\\[\\\"arn:aws:iam:::user/TESTER\\\"\\]\\},\\\"Action\\\":\\[\\\"sts:AssumeRole\\\"\\]\\}\\]\\} { \"RoleId\": \"ca43045c-082c-491a-8af1-2eebca13deec\", \"RoleName\": \"S3Access1\", \"Path\": \"/application_abc/component_xyz/\", \"Arn\": \"arn:aws:iam:::role/application_abc/component_xyz/S3Access1\", \"CreateDate\": \"2022-06-17T10:18:29.116Z\", \"MaxSessionDuration\": 3600, \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/TESTER\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" }", "radosgw-admin role-policy get --role-name= ROLE_NAME --policy-name= POLICY_NAME", "radosgw-admin role-policy get --role-name=S3Access1 --policy-name=Policy1 { \"Permission policy\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":[\\\"s3:*\\\"],\\\"Resource\\\":\\\"arn:aws:s3:::example_bucket\\\"}]}\" }", "radosgw-admin role policy delete --role-name= ROLE_NAME --policy-name= POLICY_NAME", "radosgw-admin role policy delete --role-name=S3Access1 --policy-name=Policy1", "radosgw-admin role delete --role-name= ROLE_NAME", "radosgw-admin role delete --role-name=S3Access1", "radosgw-admin role-policy put --role-name= ROLE_NAME --policy-name= POLICY_NAME --policy-doc= PERMISSION_POLICY_DOCUMENT", "radosgw-admin role-policy put --role-name=S3Access1 --policy-name=Policy1 --policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":\\[\\\"s3:*\\\"\\],\\\"Resource\\\":\\\"arn:aws:s3:::example_bucket\\\"\\}\\]\\}", "radosgw-admin role-policy list --role-name= ROLE_NAME", "radosgw-admin role-policy list --role-name=S3Access1 [ \"Policy1\" ]", "radosgw-admin role policy delete --role-name= ROLE_NAME --policy-name= POLICY_NAME", "radosgw-admin role policy delete 
--role-name=S3Access1 --policy-name=Policy1", "radosgw-admin role update --role-name= ROLE_NAME --max-session-duration=7200", "radosgw-admin role update --role-name=test-sts-role --max-session-duration=7200", "radosgw-admin role list [ { \"RoleId\": \"d4caf33f-caba-42f3-8bd4-48c84b4ea4d3\", \"RoleName\": \"test-sts-role\", \"Path\": \"/\", \"Arn\": \"arn:aws:iam:::role/test-role\", \"CreateDate\": \"2022-09-07T20:01:15.563Z\", \"MaxSessionDuration\": 7200, <<<<<< \"AssumeRolePolicyDocument\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"AWS\\\":[\\\"arn:aws:iam:::user/kvm\\\"]},\\\"Action\\\":[\\\"sts:AssumeRole\\\"]}]}\" } ]", "radosgw-admin quota set --quota-scope=user --uid= USER_ID [--max-objects= NUMBER_OF_OBJECTS ] [--max-size= MAXIMUM_SIZE_IN_BYTES ]", "radosgw-admin quota set --quota-scope=user --uid=johndoe --max-objects=1024 --max-size=1024", "radosgw-admin quota enable --quota-scope=user --uid= USER_ID", "radosgw-admin quota disable --quota-scope=user --uid= USER_ID", "radosgw-admin quota set --uid= USER_ID --quota-scope=bucket --bucket= BUCKET_NAME [--max-objects= NUMBER_OF_OBJECTS ] [--max-size= MAXIMUM_SIZE_IN_BYTES ]", "radosgw-admin quota enable --quota-scope=bucket --uid= USER_ID", "radosgw-admin quota disable --quota-scope=bucket --uid= USER_ID", "radosgw-admin user info --uid= USER_ID", "radosgw-admin user info --uid= USER_ID --tenant= TENANT", "radosgw-admin user stats --uid= USER_ID --sync-stats", "radosgw-admin user stats --uid= USER_ID", "radosgw-admin global quota get", "radosgw-admin global quota set --quota-scope bucket --max-objects 1024 radosgw-admin global quota enable --quota-scope bucket", "radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"s3bucket1\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", \"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]", "radosgw-admin bucket link --bucket= ORIGINAL_NAME --bucket-new-name= NEW_NAME --uid= USER_ID", "radosgw-admin bucket link --bucket=s3bucket1 --bucket-new-name=s3newb --uid=testuser", "radosgw-admin bucket link --bucket= tenant / ORIGINAL_NAME --bucket-new-name= NEW_NAME --uid= TENANT USD USER_ID", "radosgw-admin bucket link --bucket=test/s3bucket1 --bucket-new-name=s3newb --uid=testUSDtestuser", "radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"s3newb\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", \"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]", "radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"s3bucket1\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", 
\"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]", "radosgw-admin bucket rm --bucket= BUCKET_NAME", "radosgw-admin bucket rm --bucket=s3bucket1", "radosgw-admin bucket rm --bucket= BUCKET --purge-objects --bypass-gc", "radosgw-admin bucket rm --bucket=s3bucket1 --purge-objects --bypass-gc", "radosgw-admin bucket list [ \"34150b2e9174475db8e191c188e920f6/swcontainer\", \"34150b2e9174475db8e191c188e920f6/swimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/ec2container\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten1\", \"c278edd68cfb4705bb3e07837c7ad1a8/demo-ct\", \"c278edd68cfb4705bb3e07837c7ad1a8/demopostup\", \"34150b2e9174475db8e191c188e920f6/postimpfalse\", \"c278edd68cfb4705bb3e07837c7ad1a8/demoten2\", \"c278edd68cfb4705bb3e07837c7ad1a8/postupsw\" ]", "radosgw-admin bucket link --uid= USER --bucket= BUCKET", "radosgw-admin bucket link --uid=user2 --bucket=data", "radosgw-admin bucket list --uid=user2 [ \"data\" ]", "radosgw-admin bucket chown --uid= user --bucket= bucket", "radosgw-admin bucket chown --uid=user2 --bucket=data", "radosgw-admin bucket list --bucket=data", "radosgw-admin bucket link --bucket= CURRENT_TENANT / BUCKET --uid= NEW_TENANT USD USER", "radosgw-admin bucket link --bucket=test/data --uid=test2USDuser2", "radosgw-admin bucket list --uid=testUSDuser2 [ \"data\" ]", "radosgw-admin bucket chown --bucket= NEW_TENANT / BUCKET --uid= NEW_TENANT USD USER", "radosgw-admin bucket chown --bucket='test2/data' --uid='testUSDtuser2'", "radosgw-admin bucket list --bucket=test2/data", "ceph config set client.rgw rgw_keystone_implicit_tenants true", "swift list", "s3cmd ls", "radosgw-admin bucket link --bucket=/ BUCKET --uid=' TENANT USD USER '", "radosgw-admin bucket link --bucket=/data --uid='testUSDtenanted-user'", "radosgw-admin bucket list --uid='testUSDtenanted-user' [ \"data\" ]", "radosgw-admin bucket chown --bucket=' tenant / bucket name ' --uid=' tenant USD user '", "radosgw-admin bucket chown --bucket='test/data' --uid='testUSDtenanted-user'", "radosgw-admin bucket list --bucket=test/data", "radosgw-admin bucket radoslist --bucket BUCKET_NAME", "radosgw-admin bucket radoslist --bucket mybucket", "head /usr/bin/rgw-orphan-list", "mkdir orphans", "cd orphans", "rgw-orphan-list", "Available pools: .rgw.root default.rgw.control default.rgw.meta default.rgw.log default.rgw.buckets.index default.rgw.buckets.data rbd default.rgw.buckets.non-ec ma.rgw.control ma.rgw.meta ma.rgw.log ma.rgw.buckets.index ma.rgw.buckets.data ma.rgw.buckets.non-ec Which pool do you want to search for orphans?", "rgw-orphan-list -h rgw-orphan-list POOL_NAME / DIRECTORY", "rgw-orphan-list default.rgw.buckets.data /orphans 2023-09-12 08:41:14 ceph-host01 Computing delta 2023-09-12 08:41:14 ceph-host01 Computing results 10 potential orphans found out of a possible 2412 (0%). <<<<<<< orphans detected The results can be found in './orphan-list-20230912124113.out'. Intermediate files are './rados-20230912124113.intermediate' and './radosgw-admin-20230912124113.intermediate'. *** *** WARNING: This is EXPERIMENTAL code and the results should be used *** only with CAUTION! *** Done at 2023-09-12 08:41:14.", "ls -l -rw-r--r--. 1 root root 770 Sep 12 03:59 orphan-list-20230912075939.out -rw-r--r--. 1 root root 0 Sep 12 03:59 rados-20230912075939.error -rw-r--r--. 1 root root 248508 Sep 12 03:59 rados-20230912075939.intermediate -rw-r--r--. 1 root root 0 Sep 12 03:59 rados-20230912075939.issues -rw-r--r--. 1 root root 0 Sep 12 03:59 radosgw-admin-20230912075939.error -rw-r--r--. 
1 root root 247738 Sep 12 03:59 radosgw-admin-20230912075939.intermediate", "cat ./orphan-list-20230912124113.out a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.0 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.1 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.2 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.3 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.4 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.5 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.6 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.7 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.8 a9c042bc-be24-412c-9052-dda6b2f01f55.16749.1_key1.cherylf.433-bucky-4865-0.9", "rados -p POOL_NAME rm OBJECT_NAME", "rados -p default.rgw.buckets.data rm myobject", "radosgw-admin bucket check --bucket= BUCKET_NAME", "radosgw-admin bucket check --bucket=mybucket", "radosgw-admin bucket check --fix --bucket= BUCKET_NAME", "radosgw-admin bucket check --fix --bucket=mybucket", "radosgw-admin topic list", "radosgw-admin topic get --topic=topic1", "radosgw-admin topic rm --topic=topic1", "client.put_bucket_notification_configuration( Bucket=bucket_name, NotificationConfiguration={ 'TopicConfigurations': [ { 'Id': notification_name, 'TopicArn': topic_arn, 'Events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*', 's3:ObjectLifecycle:Expiration:*'] }]})", "{ \"Role\": \"arn:aws:iam::account-id:role/role-name\", \"Rules\": [ { \"ID\": \"String\", \"Status\": \"Enabled\", \"Priority\": 1, \"DeleteMarkerReplication\": { \"Status\": \"Enabled\"|\"Disabled\" }, \"Destination\": { \"Bucket\": \"BUCKET_NAME\" } } ] }", "cat replication.json { \"Role\": \"arn:aws:iam::account-id:role/role-name\", \"Rules\": [ { \"ID\": \"pipe-bkt\", \"Status\": \"Enabled\", \"Priority\": 1, \"DeleteMarkerReplication\": { \"Status\": \"Disabled\" }, \"Destination\": { \"Bucket\": \"testbucket\" } } ] }", "aws --endpoint-url=RADOSGW_ENDPOINT_URL s3api put-bucket-replication --bucket BUCKET_NAME --replication-configuration file://REPLICATION_CONFIIRATION_FILE.json", "aws --endpoint-url=http://host01:80 s3api put-bucket-replication --bucket testbucket --replication-configuration file://replication.json", "radosgw-admin sync policy get --bucket BUCKET_NAME", "radosgw-admin sync policy get --bucket testbucket { \"groups\": [ { \"id\": \"s3-bucket-replication:disabled\", \"data_flow\": {}, \"pipes\": [], \"status\": \"allowed\" }, { \"id\": \"s3-bucket-replication:enabled\", \"data_flow\": {}, \"pipes\": [ { \"id\": \"\", \"source\": { \"bucket\": \"*\", \"zones\": [ \"*\" ] }, \"dest\": { \"bucket\": \"testbucket\", \"zones\": [ \"*\" ] }, \"params\": { \"source\": {}, \"dest\": {}, \"priority\": 1, \"mode\": \"user\", \"user\": \"s3cmd\" } } ], \"status\": \"enabled\" } ] }", "aws s3api get-bucket-replication --bucket BUCKET_NAME --endpoint-url=RADOSGW_ENDPOINT_URL", "aws s3api get-bucket-replication --bucket testbucket --endpoint-url=http://host01:80 { \"ReplicationConfiguration\": { \"Role\": \"\", \"Rules\": [ { \"ID\": \"pipe-bkt\", \"Status\": \"Enabled\", \"Priority\": 1, \"Destination\": { Bucket\": \"testbucket\" } } ] } }", "aws s3api delete-bucket-replication --bucket BUCKET_NAME --endpoint-url=RADOSGW_ENDPOINT_URL", "aws s3api delete-bucket-replication --bucket testbucket --endpoint-url=http://host01:80", "radosgw-admin sync 
policy get --bucket=BUCKET_NAME", "radosgw-admin sync policy get --bucket=testbucket", "cat user_policy.json { \"Version\":\"2012-10-17\", \"Statement\": { \"Effect\":\"Deny\", \"Action\": [ \"s3:PutReplicationConfiguration\", \"s3:GetReplicationConfiguration\", \"s3:DeleteReplicationConfiguration\" ], \"Resource\": \"arn:aws:s3:::*\", } }", "aws --endpoint-url=ENDPOINT_URL iam put-user-policy --user-name USER_NAME --policy-name USER_POLICY_NAME --policy-document POLICY_DOCUMENT_PATH", "aws --endpoint-url=http://host01:80 iam put-user-policy --user-name newuser1 --policy-name userpolicy --policy-document file://user_policy.json", "aws --endpoint-url=ENDPOINT_URL iam get-user-policy --user-name USER_NAME --policy-name USER_POLICY_NAME --region us", "aws --endpoint-url=http://host01:80 iam get-user-policy --user-name newuser1 --policy-name userpolicy --region us", "[user@client ~]USD vi lifecycle.json", "{ \"Rules\": [ { \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\", \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\" } ] }", "aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api put-bucket-lifecycle-configuration --bucket BUCKET_NAME --lifecycle-configuration file:// PATH_TO_LIFECYCLE_CONFIGURATION_FILE / LIFECYCLE_CONFIGURATION_FILE .json", "[user@client ~]USD aws --endpoint-url=http://host01:80 s3api put-bucket-lifecycle-configuration --bucket testbucket --lifecycle-configuration file://lifecycle.json", "aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-lifecycle-configuration --bucket BUCKET_NAME", "[user@client ~]USD aws --endpoint-url=http://host01:80 s3api get-bucket-lifecycle-configuration --bucket testbucket { \"Rules\": [ { \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\", \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\" } ] }", "radosgw-admin lc get --bucket= BUCKET_NAME", "radosgw-admin lc get --bucket=testbucket { \"prefix_map\": { \"images/\": { \"status\": true, \"dm_expiration\": false, \"expiration\": 1, \"noncur_expiration\": 0, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} } }, \"rule_map\": [ { \"id\": \"ImageExpiration\", \"rule\": { \"id\": \"ImageExpiration\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"1\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"images/\", \"obj_tags\": { \"tagset\": {} } }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": false } } ] }", "aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api delete-bucket-lifecycle --bucket BUCKET_NAME", "[user@client ~]USD aws --endpoint-url=http://host01:80 s3api delete-bucket-lifecycle --bucket testbucket", "aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-lifecycle-configuration --bucket BUCKET_NAME", "aws --endpoint-url=http://host01:80 s3api get-bucket-lifecycle-configuration --bucket testbucket", "radosgw-admin lc get --bucket= BUCKET_NAME", "radosgw-admin lc get --bucket=testbucket", "[user@client ~]USD vi lifecycle.json", "{ \"Rules\": [ { \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\", \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\" }, { \"Filter\": { \"Prefix\": \"docs/\" }, \"Status\": \"Enabled\", \"Expiration\": { \"Days\": 30 }, \"ID\": \"DocsExpiration\" } ] }", "aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api put-bucket-lifecycle-configuration --bucket BUCKET_NAME --lifecycle-configuration file:// PATH_TO_LIFECYCLE_CONFIGURATION_FILE / 
LIFECYCLE_CONFIGURATION_FILE .json", "[user@client ~]USD aws --endpoint-url=http://host01:80 s3api put-bucket-lifecycle-configuration --bucket testbucket --lifecycle-configuration file://lifecycle.json", "aws --endpointurl= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-lifecycle-configuration --bucket BUCKET_NAME", "[user@client ~]USD aws -endpoint-url=http://host01:80 s3api get-bucket-lifecycle-configuration --bucket testbucket { \"Rules\": [ { \"Expiration\": { \"Days\": 30 }, \"ID\": \"DocsExpiration\", \"Filter\": { \"Prefix\": \"docs/\" }, \"Status\": \"Enabled\" }, { \"Expiration\": { \"Days\": 1 }, \"ID\": \"ImageExpiration\", \"Filter\": { \"Prefix\": \"images/\" }, \"Status\": \"Enabled\" } ] }", "radosgw-admin lc get --bucket= BUCKET_NAME", "radosgw-admin lc get --bucket=testbucket { \"prefix_map\": { \"docs/\": { \"status\": true, \"dm_expiration\": false, \"expiration\": 1, \"noncur_expiration\": 0, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} }, \"images/\": { \"status\": true, \"dm_expiration\": false, \"expiration\": 1, \"noncur_expiration\": 0, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} } }, \"rule_map\": [ { \"id\": \"DocsExpiration\", \"rule\": { \"id\": \"DocsExpiration\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"30\", \"date\": \"\" }, \"noncur_expiration\": { \"days\": \"\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"docs/\", \"obj_tags\": { \"tagset\": {} } }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": false } }, { \"id\": \"ImageExpiration\", \"rule\": { \"id\": \"ImageExpiration\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"1\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"images/\", \"obj_tags\": { \"tagset\": {} } }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": false } } ] }", "cephadm shell", "radosgw-admin lc list [ { \"bucket\": \":testbucket:8b63d584-9ea1-4cf3-8443-a6a15beca943.54187.1\", \"started\": \"Thu, 01 Jan 1970 00:00:00 GMT\", \"status\" : \"UNINITIAL\" }, { \"bucket\": \":testbucket1:8b635499-9e41-4cf3-8443-a6a15345943.54187.2\", \"started\": \"Thu, 01 Jan 1970 00:00:00 GMT\", \"status\" : \"UNINITIAL\" } ]", "radosgw-admin lc process --bucket= BUCKET_NAME", "radosgw-admin lc process --bucket=testbucket1", "radosgw-admin lc process", "radosgw-admin lc list [ { \"bucket\": \":testbucket:8b63d584-9ea1-4cf3-8443-a6a15beca943.54187.1\", \"started\": \"Thu, 17 Mar 2022 21:48:50 GMT\", \"status\" : \"COMPLETE\" } { \"bucket\": \":testbucket1:8b635499-9e41-4cf3-8443-a6a15345943.54187.2\", \"started\": \"Thu, 17 Mar 2022 20:38:50 GMT\", \"status\" : \"COMPLETE\" } ]", "cephadm shell", "ceph config set client.rgw rgw_lifecycle_work_time %D:%D-%D:%D", "ceph config set client.rgw rgw_lifecycle_work_time 06:00-08:00", "ceph config get client.rgw rgw_lifecycle_work_time 06:00-08:00", "ceph osd pool create POOL_NAME", "ceph osd pool create test.hot.data", "radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS", "radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id default-placement --storage-class hot.test { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\", \"hot.test\" ] } }", "radosgw-admin zone placement add --rgw-zone 
default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS --data-pool DATA_POOL", "radosgw-admin zone placement add --rgw-zone default --placement-id default-placement --storage-class hot.test --data-pool test.hot.data { \"key\": \"default-placement\", \"val\": { \"index_pool\": \"test_zone.rgw.buckets.index\", \"storage_classes\": { \"STANDARD\": { \"data_pool\": \"test.hot.data\" }, \"hot.test\": { \"data_pool\": \"test.hot.data\", } }, \"data_extra_pool\": \"\", \"index_type\": 0 }", "ceph osd pool application enable POOL_NAME rgw", "ceph osd pool application enable test.hot.data rgw enabled application 'rgw' on pool 'test.hot.data'", "aws s3api create-bucket --bucket testbucket10 --create-bucket-configuration LocationConstraint=default:default-placement --endpoint-url http://10.0.0.80:8080", "aws --endpoint=http://10.0.0.80:8080 s3api put-object --bucket testbucket10 --key compliance-upload --body /root/test2.txt", "ceph osd pool create POOL_NAME", "ceph osd pool create test.cold.data", "radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS", "radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id default-placement --storage-class cold.test { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"STANDARD\", \"cold.test\" ] } }", "radosgw-admin zone placement add --rgw-zone default --placement-id PLACEMENT_TARGET --storage-class STORAGE_CLASS --data-pool DATA_POOL", "radosgw-admin zone placement add --rgw-zone default --placement-id default-placement --storage-class cold.test --data-pool test.cold.data", "ceph osd pool application enable POOL_NAME rgw", "ceph osd pool application enable test.cold.data rgw enabled application 'rgw' on pool 'test.cold.data'", "radosgw-admin zonegroup get { \"id\": \"3019de59-ddde-4c5c-b532-7cdd29de09a1\", \"name\": \"default\", \"api_name\": \"default\", \"is_master\": \"true\", \"endpoints\": [], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"adacbe1b-02b4-41b8-b11d-0d505b442ed4\", \"zones\": [ { \"id\": \"adacbe1b-02b4-41b8-b11d-0d505b442ed4\", \"name\": \"default\", \"endpoints\": [], \"log_meta\": \"false\", \"log_data\": \"false\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\", \"tier_type\": \"\", \"sync_from_all\": \"true\", \"sync_from\": [], \"redirect_zone\": \"\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"hot.test\", \"cold.test\", \"STANDARD\" ] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"\", \"sync_policy\": { \"groups\": [] } }", "radosgw-admin zone get { \"id\": \"adacbe1b-02b4-41b8-b11d-0d505b442ed4\", \"name\": \"default\", \"domain_root\": \"default.rgw.meta:root\", \"control_pool\": \"default.rgw.control\", \"gc_pool\": \"default.rgw.log:gc\", \"lc_pool\": \"default.rgw.log:lc\", \"log_pool\": \"default.rgw.log\", \"intent_log_pool\": \"default.rgw.log:intent\", \"usage_log_pool\": \"default.rgw.log:usage\", \"roles_pool\": \"default.rgw.meta:roles\", \"reshard_pool\": \"default.rgw.log:reshard\", \"user_keys_pool\": \"default.rgw.meta:users.keys\", \"user_email_pool\": \"default.rgw.meta:users.email\", \"user_swift_pool\": \"default.rgw.meta:users.swift\", \"user_uid_pool\": \"default.rgw.meta:users.uid\", \"otp_pool\": \"default.rgw.otp\", \"system_key\": { \"access_key\": \"\", \"secret_key\": \"\" }, \"placement_pools\": [ { \"key\": \"default-placement\", 
\"val\": { \"index_pool\": \"default.rgw.buckets.index\", \"storage_classes\": { \"cold.test\": { \"data_pool\": \"test.cold.data\" }, \"hot.test\": { \"data_pool\": \"test.hot.data\" }, \"STANDARD\": { \"data_pool\": \"default.rgw.buckets.data\" } }, \"data_extra_pool\": \"default.rgw.buckets.non-ec\", \"index_type\": 0 } } ], \"realm_id\": \"\", \"notif_pool\": \"default.rgw.log:notif\" }", "aws s3api create-bucket --bucket testbucket10 --create-bucket-configuration LocationConstraint=default:default-placement --endpoint-url http://10.0.0.80:8080", "radosgw-admin bucket list --bucket testbucket10 { \"ETag\": \"\\\"211599863395c832a3dfcba92c6a3b90\\\"\", \"Size\": 540, \"StorageClass\": \"STANDARD\", \"Key\": \"obj1\", \"VersionId\": \"W95teRsXPSJI4YWJwwSG30KxSCzSgk-\", \"IsLatest\": true, \"LastModified\": \"2023-11-23T10:38:07.214Z\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } }", "vi lifecycle.json", "{ \"Rules\": [ { \"Filter\": { \"Prefix\": \"\" }, \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 5, \"StorageClass\": \"hot.test\" }, { \"Days\": 20, \"StorageClass\": \"cold.test\" } ], \"Expiration\": { \"Days\": 365 }, \"ID\": \"double transition and expiration\" } ] }", "aws s3api put-bucket-lifecycle-configuration --bucket testbucket10 --lifecycle-configuration file://lifecycle.json", "aws s3api get-bucket-lifecycle-configuration --bucket testbucke10 { \"Rules\": [ { \"Expiration\": { \"Days\": 365 }, \"ID\": \"double transition and expiration\", \"Prefix\": \"\", \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 20, \"StorageClass\": \"cold.test\" }, { \"Days\": 5, \"StorageClass\": \"hot.test\" } ] } ] }", "radosgw-admin bucket list --bucket testbucket10 { \"ETag\": \"\\\"211599863395c832a3dfcba92c6a3b90\\\"\", \"Size\": 540, \"StorageClass\": \"cold.test\", \"Key\": \"obj1\", \"VersionId\": \"W95teRsXPSJI4YWJwwSG30KxSCzSgk-\", \"IsLatest\": true, \"LastModified\": \"2023-11-23T10:38:07.214Z\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } }", "aws --endpoint=http:// RGW_PORT :8080 s3api create-bucket --bucket BUCKET_NAME --object-lock-enabled-for-bucket", "aws --endpoint=http://rgw.ceph.com:8080 s3api create-bucket --bucket worm-bucket --object-lock-enabled-for-bucket", "aws --endpoint=http:// RGW_PORT :8080 s3api put-object-lock-configuration --bucket BUCKET_NAME --object-lock-configuration '{ \"ObjectLockEnabled\": \"Enabled\", \"Rule\": { \"DefaultRetention\": { \"Mode\": \" RETENTION_MODE \", \"Days\": NUMBER_OF_DAYS }}}'", "aws --endpoint=http://rgw.ceph.com:8080 s3api put-object-lock-configuration --bucket worm-bucket --object-lock-configuration '{ \"ObjectLockEnabled\": \"Enabled\", \"Rule\": { \"DefaultRetention\": { \"Mode\": \"COMPLIANCE\", \"Days\": 10 }}}'", "aws --endpoint=http:// RGW_PORT :8080 s3api put-object --bucket BUCKET_NAME --object-lock-mode RETENTION_MODE --object-lock-retain-until-date \" DATE \" --key compliance-upload --body TEST_FILE", "aws --endpoint=http://rgw.ceph.com:8080 s3api put-object --bucket worm-bucket --object-lock-mode COMPLIANCE --object-lock-retain-until-date \"2022-05-31\" --key compliance-upload --body test.dd { \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\" }", "aws --endpoint=http:// RGW_PORT :8080 s3api put-object --bucket BUCKET_NAME --object-lock-mode RETENTION_MODE --object-lock-retain-until-date \" DATE \" --key compliance-upload --body PATH", "aws --endpoint=http://rgw.ceph.com:8080 s3api put-object 
--bucket worm-bucket --object-lock-mode COMPLIANCE --object-lock-retain-until-date \"2022-05-31\" --key compliance-upload --body /etc/fstab { \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\" }", "aws --endpoint=http://rgw.ceph.com:8080 s3api put-object-legal-hold --bucket worm-bucket --key compliance-upload --legal-hold Status=ON", "aws --endpoint=http://rgw.ceph.com:8080 s3api list-objects --bucket worm-bucket", "aws --endpoint=http://rgw.ceph.com:8080 s3api list-objects --bucket worm-bucket { \"Versions\": [ { \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"Size\": 288, \"StorageClass\": \"STANDARD\", \"Key\": \"hosts\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\", \"IsLatest\": true, \"LastModified\": \"2022-06-17T08:51:17.392000+00:00\", \"Owner\": { \"DisplayName\": \"Test User in Tenant test\", \"ID\": \"testUSDtest.user\" } } } ] }", "aws --endpoint=http://rgw.ceph.com:8080 s3api get-object --bucket worm-bucket --key compliance-upload --version-id 'IGOU.vdIs3SPduZglrB-RBaK.sfXpcd' download.1 { \"AcceptRanges\": \"bytes\", \"LastModified\": \"2022-06-17T08:51:17+00:00\", \"ContentLength\": 288, \"ETag\": \"\\\"d560ea5652951637ba9c594d8e6ea8c1\\\"\", \"VersionId\": \"Nhhk5kRS6Yp6dZXVWpZZdRcpSpBKToD\", \"ContentType\": \"binary/octet-stream\", \"Metadata\": {}, \"ObjectLockMode\": \"COMPLIANCE\", \"ObjectLockRetainUntilDate\": \"2023-06-17T08:51:17+00:00\" }", "radosgw-admin usage show --uid=johndoe --start-date=2022-06-01 --end-date=2022-07-01", "radosgw-admin usage show --show-log-entries=false", "radosgw-admin usage trim --start-date=2022-06-01 --end-date=2022-07-31 radosgw-admin usage trim --uid=johndoe radosgw-admin usage trim --uid=johndoe --end-date=2021-04-31", "radosgw-admin metadata get bucket: BUCKET_NAME radosgw-admin metadata get bucket.instance: BUCKET : BUCKET_ID radosgw-admin metadata get user: USER radosgw-admin metadata set user: USER", "radosgw-admin metadata list radosgw-admin metadata list bucket radosgw-admin metadata list bucket.instance radosgw-admin metadata list user", ".bucket.meta.prodtx:test%25star:default.84099.6 .bucket.meta.testcont:default.4126.1 .bucket.meta.prodtx:testcont:default.84099.4 prodtx/testcont prodtx/test%25star testcont", "prodtxUSDprodt test2.buckets prodtxUSDprodt.buckets test2", "radosgw-admin ratelimit set --ratelimit-scope=user --uid= USER_ID [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]", "radosgw-admin ratelimit set --ratelimit-scope=user --uid=testing --max-read-ops=1024 --max-write-bytes=10240", "radosgw-admin ratelimit get --ratelimit-scope=user --uid= USER_ID", "radosgw-admin ratelimit get --ratelimit-scope=user --uid=testing { \"user_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": false } }", "radosgw-admin ratelimit enable --ratelimit-scope=user --uid= USER_ID", "radosgw-admin ratelimit enable --ratelimit-scope=user --uid=testing { \"user_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": true } }", "radosgw-admin ratelimit disable --ratelimit-scope=user --uid= USER_ID", "radosgw-admin ratelimit disable --ratelimit-scope=user --uid=testing", "radosgw-admin ratelimit set --ratelimit-scope=bucket --bucket= BUCKET_NAME [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] 
[--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]", "radosgw-admin ratelimit set --ratelimit-scope=bucket --bucket=mybucket --max-read-ops=1024 --max-write-bytes=10240", "radosgw-admin ratelimit get --ratelimit-scope=bucket --bucket= BUCKET_NAME", "radosgw-admin ratelimit get --ratelimit-scope=bucket --bucket=mybucket { \"bucket_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": false } }", "radosgw-admin ratelimit enable --ratelimit-scope=bucket --bucket= BUCKET_NAME", "radosgw-admin ratelimit enable --ratelimit-scope=bucket --bucket=mybucket { \"bucket_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 10240, \"enabled\": true } }", "radosgw-admin ratelimit disable --ratelimit-scope=bucket --bucket= BUCKET_NAME", "radosgw-admin ratelimit disable --ratelimit-scope=bucket --bucket=mybucket", "radosgw-admin global ratelimit get", "radosgw-admin global ratelimit get { \"bucket_ratelimit\": { \"max_read_ops\": 1024, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 0, \"enabled\": false }, \"user_ratelimit\": { \"max_read_ops\": 0, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 0, \"enabled\": false }, \"anonymous_ratelimit\": { \"max_read_ops\": 0, \"max_write_ops\": 0, \"max_read_bytes\": 0, \"max_write_bytes\": 0, \"enabled\": false } }", "radosgw-admin global ratelimit set --ratelimit-scope=bucket [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]", "radosgw-admin global ratelimit set --ratelimit-scope bucket --max-read-ops=1024", "radosgw-admin global ratelimit enable --ratelimit-scope=bucket", "radosgw-admin global ratelimit enable --ratelimit-scope bucket", "radosgw-admin global ratelimit set --ratelimit-scope=user [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]", "radosgw-admin global ratelimit set --ratelimit-scope=user --max-read-ops=1024", "radosgw-admin global ratelimit enable --ratelimit-scope=user", "radosgw-admin global ratelimit enable --ratelimit-scope=user", "radosgw-admin global ratelimit set --ratelimit-scope=anonymous [--max-read-ops= NUMBER_OF_OPERATIONS ] [--max-read-bytes= NUMBER_OF_BYTES ] [--max-write-ops= NUMBER_OF_OPERATIONS ] [--max-write-bytes= NUMBER_OF_BYTES ]", "radosgw-admin global ratelimit set --ratelimit-scope=anonymous --max-read-ops=1024", "radosgw-admin global ratelimit enable --ratelimit-scope=anonymous", "radosgw-admin global ratelimit enable --ratelimit-scope=anonymous", "radosgw-admin gc list", "radosgw-admin gc list", "ceph config set client.rgw rgw_gc_max_concurrent_io 20 ceph config set client.rgw rgw_gc_max_trim_chunk 64", "ceph config set client.rgw rgw_lc_max_worker 7", "ceph config set client.rgw rgw_lc_max_wp_worker 7", "radosgw-admin user create --uid= USER_NAME --display-name=\" DISPLAY_NAME \" [--access-key ACCESS_KEY --secret-key SECRET_KEY ]", "radosgw-admin user create --uid=test-user --display-name=\"test-user\" --access-key a21e86bce636c3aa1 --secret-key cf764951f1fdde5e { \"user_id\": \"test-user\", \"display_name\": \"test-user\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"subusers\": [], \"keys\": [ { \"user\": \"test-user\", \"access_key\": \"a21e86bce636c3aa1\", \"secret_key\": \"cf764951f1fdde5e\" } ], 
\"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"default_storage_class\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\", \"mfa_ids\": [] }", "radosgw-admin zonegroup placement add --rgw-zonegroup = ZONE_GROUP_NAME --placement-id= PLACEMENT_ID --storage-class = STORAGE_CLASS_NAME --tier-type=cloud-s3", "radosgw-admin zonegroup placement add --rgw-zonegroup=default --placement-id=default-placement --storage-class=CLOUDTIER --tier-type=cloud-s3 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"CLOUDTIER\", \"STANDARD\" ], \"tier_targets\": [ { \"key\": \"CLOUDTIER\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"CLOUDTIER\", \"retain_head_object\": \"false\", \"s3\": { \"endpoint\": \"\", \"access_key\": \"\", \"secret\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"\", \"acl_mappings\": [], \"multipart_sync_threshold\": 33554432, \"multipart_min_part_size\": 33554432 } } } ] } } ]", "radosgw-admin zonegroup placement modify --rgw-zonegroup ZONE_GROUP_NAME --placement-id PLACEMENT_ID --storage-class STORAGE_CLASS_NAME --tier-config=endpoint= AWS_ENDPOINT_URL , access_key= AWS_ACCESS_KEY ,secret= AWS_SECRET_KEY , target_path=\" TARGET_BUCKET_ON_AWS \", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region= REGION_NAME", "radosgw-admin zonegroup placement modify --rgw-zonegroup default --placement-id default-placement --storage-class CLOUDTIER --tier-config=endpoint=http://10.0.210.010:8080, access_key=a21e86bce636c3aa2,secret=cf764951f1fdde5f, target_path=\"dfqe-bucket-01\", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region=us-east-1 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"CLOUDTIER\", \"STANDARD\", \"cold.test\", \"hot.test\" ], \"tier_targets\": [ { \"key\": \"CLOUDTIER\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"CLOUDTIER\", \"retain_head_object\": \"true\", \"s3\": { \"endpoint\": \"http://10.0.210.010:8080\", \"access_key\": \"a21e86bce636c3aa2\", \"secret\": \"cf764951f1fdde5f\", \"region\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"dfqe-bucket-01\", \"acl_mappings\": [], \"multipart_sync_threshold\": 44432, \"multipart_min_part_size\": 44432 } } } ] } } ] ]", "ceph orch restart CEPH_OBJECT_GATEWAY_SERVICE_NAME", "ceph orch restart rgw.rgw.1 Scheduled to restart rgw.rgw.1.host03.vkfldf on host 'host03'", "s3cmd --configure Enter new values or accept defaults in brackets with Enter. Refer to user manual for detailed description of all options. Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables. Access Key: a21e86bce636c3aa2 Secret Key: cf764951f1fdde5f Default Region [US]: Use \"s3.amazonaws.com\" for S3 Endpoint and not modify it to the target Amazon S3. S3 Endpoint [s3.amazonaws.com]: 10.0.210.78:80 Use \"%(bucket)s.s3.amazonaws.com\" to the target Amazon S3. \"%(bucket)s\" and \"%(location)s\" vars can be used if the target S3 system supports dns based buckets. 
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 10.0.210.78:80 Encryption password is used to protect your files from reading by unauthorized persons while in transfer to S3 Encryption password: Path to GPG program [/usr/bin/gpg]: When using secure HTTPS protocol all communication with Amazon S3 servers is protected from 3rd party eavesdropping. This method is slower than plain HTTP, and can only be proxied with Python 2.7 or newer Use HTTPS protocol [Yes]: No On some networks all internet access must go through a HTTP proxy. Try setting it here if you can't connect to S3 directly HTTP Proxy server name: New settings: Access Key: a21e86bce636c3aa2 Secret Key: cf764951f1fdde5f Default Region: US S3 Endpoint: 10.0.210.78:80 DNS-style bucket+hostname:port template for accessing a bucket: 10.0.210.78:80 Encryption password: Path to GPG program: /usr/bin/gpg Use HTTPS protocol: False HTTP Proxy server name: HTTP Proxy server port: 0 Test access with supplied credentials? [Y/n] Y Please wait, attempting to list all buckets Success. Your access key and secret key worked fine :-) Now verifying that encryption works Not configured. Never mind. Save settings? [y/N] y Configuration saved to '/root/.s3cfg'", "s3cmd mb s3:// NAME_OF_THE_BUCKET_FOR_S3", "s3cmd mb s3://awstestbucket Bucket 's3://awstestbucket/' created", "s3cmd put FILE_NAME s3:// NAME_OF_THE_BUCKET_ON_S3", "s3cmd put test.txt s3://awstestbucket upload: 'test.txt' -> 's3://awstestbucket/test.txt' [1 of 1] 21 of 21 100% in 1s 16.75 B/s done", "<LifecycleConfiguration> <Rule> <ID> RULE_NAME </ID> <Filter> <Prefix></Prefix> </Filter> <Status>Enabled</Status> <Transition> <Days> DAYS </Days> <StorageClass> STORAGE_CLASS_NAME </StorageClass> </Transition> </Rule> </LifecycleConfiguration>", "cat lc_cloud.xml <LifecycleConfiguration> <Rule> <ID>Archive all objects</ID> <Filter> <Prefix></Prefix> </Filter> <Status>Enabled</Status> <Transition> <Days>2</Days> <StorageClass>CLOUDTIER</StorageClass> </Transition> </Rule> </LifecycleConfiguration>", "s3cmd setlifecycle FILE_NAME s3:// NAME_OF_THE_BUCKET_FOR_S3", "s3cmd setlifecycle lc_config.xml s3://awstestbucket s3://awstestbucket/: Lifecycle Policy updated", "cephadm shell", "ceph orch restart CEPH_OBJECT_GATEWAY_SERVICE_NAME", "ceph orch restart rgw.rgw.1 Scheduled to restart rgw.rgw.1.host03.vkfldf on host 'host03'", "radosgw-admin lc list [ { \"bucket\": \":awstestbucket:552a3adb-39e0-40f6-8c84-00590ed70097.54639.1\", \"started\": \"Mon, 26 Sep 2022 18:32:07 GMT\", \"status\": \"COMPLETE\" } ]", "[root@client ~]USD radosgw-admin bucket list [ \"awstestbucket\" ]", "[root@host01 ~]USD aws s3api list-objects --bucket awstestbucket --endpoint=http://10.0.209.002:8080 { \"Contents\": [ { \"Key\": \"awstestbucket/test\", \"LastModified\": \"2022-08-25T16:14:23.118Z\", \"ETag\": \"\\\"378c905939cc4459d249662dfae9fd6f\\\"\", \"Size\": 29, \"StorageClass\": \"STANDARD\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } } ] }", "s3cmd ls s3://awstestbucket 2022-08-25 09:57 0 s3://awstestbucket/test.txt", "s3cmd info s3://awstestbucket/test.txt s3://awstestbucket/test.txt (object): File size: 0 Last mod: Mon, 03 Aug 2022 09:57:49 GMT MIME type: text/plain Storage: CLOUDTIER MD5 sum: 991d2528bb41bb839d1a9ed74b710794 SSE: none Policy: none CORS: none ACL: test-user: FULL_CONTROL x-amz-meta-s3cmd-attrs: atime:1664790668/ctime:1664790668/gid:0/gname:root/md5:991d2528bb41bb839d1a9ed74b710794/mode:33188/mtime:1664790668/uid:0/uname:root", 
"[client@client01 ~]USD aws configure AWS Access Key ID [****************6VVP]: AWS Secret Access Key [****************pXqy]: Default region name [us-east-1]: Default output format [json]:", "[client@client01 ~]USD aws s3 ls s3://dfqe-bucket-01/awstest PRE awstestbucket/", "[client@client01 ~]USD aws s3 cp s3://dfqe-bucket-01/awstestbucket/test.txt . download: s3://dfqe-bucket-01/awstestbucket/test.txt to ./test.txt", "radosgw-admin user create --uid= USER_NAME --display-name=\" DISPLAY_NAME \" [--access-key ACCESS_KEY --secret-key SECRET_KEY ]", "radosgw-admin user create --uid=test-user --display-name=\"test-user\" --access-key a21e86bce636c3aa1 --secret-key cf764951f1fdde5e { \"user_id\": \"test-user\", \"display_name\": \"test-user\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"subusers\": [], \"keys\": [ { \"user\": \"test-user\", \"access_key\": \"a21e86bce636c3aa1\", \"secret_key\": \"cf764951f1fdde5e\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"default_storage_class\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\", \"mfa_ids\": [] }", "aws s3 --ca-bundle CA_PERMISSION --profile rgw --endpoint ENDPOINT_URL --region default mb s3:// BUCKET_NAME", "[root@host01 ~]USD aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default mb s3://transition", "radosgw-admin bucket stats --bucket transition { \"bucket\": \"transition\", \"num_shards\": 11, \"tenant\": \"\", \"zonegroup\": \"b29b0e50-1301-4330-99fc-5cdcfc349acf\", \"placement_rule\": \"default-placement\", \"explicit_placement\": { \"data_pool\": \"\", \"data_extra_pool\": \"\", \"index_pool\": \"\" },", "[root@host01 ~]USD oc project openshift-storage [root@host01 ~]USD oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.6 True False 4d1h Cluster version is 4.11.6 [root@host01 ~]USD oc get storagecluster NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 4d Ready 2023-06-27T15:23:01Z 4.11.0", "noobaa namespacestore create azure-blob az --account-key=' ACCOUNT_KEY ' --account-name=' ACCOUNT_NAME' --target-blob-container='_AZURE_CONTAINER_NAME '", "[root@host01 ~]USD noobaa namespacestore create azure-blob az --account-key='iq3+6hRtt9bQ46QfHKQ0nSm2aP+tyMzdn8dBSRW4XWrFhY+1nwfqEj4hk2q66nmD85E/o5OrrUqo+AStkKwm9w==' --account-name='transitionrgw' --target-blob-container='mcgnamespace'", "[root@host01 ~]USD noobaa bucketclass create namespace-bucketclass single aznamespace-bucket-class --resource az -n openshift-storage", "noobaa obc create OBC_NAME --bucketclass aznamespace-bucket-class -n openshift-storage", "[root@host01 ~]USD noobaa obc create rgwobc --bucketclass aznamespace-bucket-class -n openshift-storage", "radosgw-admin zonegroup placement add --rgw-zonegroup = ZONE_GROUP_NAME --placement-id= PLACEMENT_ID --storage-class = STORAGE_CLASS_NAME --tier-type=cloud-s3", "radosgw-admin zonegroup placement add --rgw-zonegroup=default --placement-id=default-placement --storage-class=AZURE --tier-type=cloud-s3 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"AZURE\", \"STANDARD\" ], \"tier_targets\": [ { 
\"key\": \"AZURE\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"AZURE\", \"retain_head_object\": \"false\", \"s3\": { \"endpoint\": \"\", \"access_key\": \"\", \"secret\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"\", \"acl_mappings\": [], \"multipart_sync_threshold\": 33554432, \"multipart_min_part_size\": 33554432 } } } ] } } ]", "radosgw-admin zonegroup placement modify --rgw-zonegroup ZONE_GROUP_NAME --placement-id PLACEMENT_ID --storage-class STORAGE_CLASS_NAME --tier-config=endpoint= ENDPOINT_URL , access_key= ACCESS_KEY ,secret= SECRET_KEY , target_path=\" TARGET_BUCKET_ON \", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region= REGION_NAME", "radosgw-admin zonegroup placement modify --rgw-zonegroup default --placement-id default-placement --storage-class AZURE --tier-config=endpoint=\"https://s3-openshift-storage.apps.ocp410.0e73azopenshift.com\", access_key=a21e86bce636c3aa2,secret=cf764951f1fdde5f, target_path=\"dfqe-bucket-01\", multipart_sync_threshold=44432, multipart_min_part_size=44432, retain_head_object=true region=us-east-1 [ { \"key\": \"default-placement\", \"val\": { \"name\": \"default-placement\", \"tags\": [], \"storage_classes\": [ \"AZURE\", \"STANDARD\", \"cold.test\", \"hot.test\" ], \"tier_targets\": [ { \"key\": \"AZURE\", \"val\": { \"tier_type\": \"cloud-s3\", \"storage_class\": \"AZURE\", \"retain_head_object\": \"true\", \"s3\": { \"endpoint\": \"https://s3-openshift-storage.apps.ocp410.0e73azopenshift.com\", \"access_key\": \"a21e86bce636c3aa2\", \"secret\": \"cf764951f1fdde5f\", \"region\": \"\", \"host_style\": \"path\", \"target_storage_class\": \"\", \"target_path\": \"dfqe-bucket-01\", \"acl_mappings\": [], \"multipart_sync_threshold\": 44432, \"multipart_min_part_size\": 44432 } } } ] } } ] ]", "ceph orch restart CEPH_OBJECT_GATEWAY_SERVICE_NAME", "ceph orch restart client.rgw.objectgwhttps.host02.udyllp Scheduled to restart client.rgw.objectgwhttps.host02.udyllp on host 'host02", "cat transition.json { \"Rules\": [ { \"Filter\": { \"Prefix\": \"\" }, \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 30, \"StorageClass\": \" STORAGE_CLASS \" } ], \"ID\": \" TRANSITION_ID \" } ] }", "[root@host01 ~]USD cat transition.json { \"Rules\": [ { \"Filter\": { \"Prefix\": \"\" }, \"Status\": \"Enabled\", \"Transitions\": [ { \"Days\": 30, \"StorageClass\": \"AZURE\" } ], \"ID\": \"Transition Objects in bucket to AZURE Blob after 30 days\" } ] }", "aws s3api --ca-bundle CA_PERMISSION --profile rgw --endpoint ENDPOINT_URL --region default put-bucket-lifecycle-configuration --lifecycle-configuration file:// BUCKET .json --bucket BUCKET_NAME", "[root@host01 ~]USD aws s3api --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default put-bucket-lifecycle-configuration --lifecycle-configuration file://transition.json --bucket transition", "aws s3api --ca-bundle CA_PERMISSION --profile rgw --endpoint ENDPOINT_URL --region default get-bucket-lifecycle-configuration --lifecycle-configuration file:// BUCKET .json --bucket BUCKET_NAME", "[root@host01 ~]USD aws s3api --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default get-bucket-lifecycle-configuration --bucket transition { \"Rules\": [ { \"ID\": \"Transition Objects in bucket to AZURE Blob after 30 days\", \"Prefix\": \"\", \"Status\": \"Enabled\", \"Transitions\": [ { 
\"Days\": 30, \"StorageClass\": \"AZURE\" } ] } ] }", "radosgw-admin lc list [ { \"bucket\": \":transition:d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1\", \"started\": \"Thu, 01 Jan 1970 00:00:00 GMT\", \"status\": \"UNINITIAL\" } ]", "cephadm shell", "ceph orch daemon CEPH_OBJECT_GATEWAY_DAEMON_NAME", "ceph orch daemon restart rgw.objectgwhttps.host02.udyllp ceph orch daemon restart rgw.objectgw.host02.afwvyq ceph orch daemon restart rgw.objectgw.host05.ucpsrr", "for i in 1 2 3 4 5 do aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default cp /etc/hosts s3://transition/transitionUSDi done", "aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default ls s3://transition 2023-06-30 10:24:01 3847 transition1 2023-06-30 10:24:04 3847 transition2 2023-06-30 10:24:07 3847 transition3 2023-06-30 10:24:09 3847 transition4 2023-06-30 10:24:13 3847 transition5", "rados ls -p default.rgw.buckets.data | grep transition d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition1 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition4 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition2 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition3 d9c4f708-5598-4c44-9d36-849552a08c4d.169377.1_transition5", "radosgw-admin lc process", "radosgw-admin lc list [ { \"bucket\": \":transition:d9c4f708-5598-4c44-9d36-849552a08c4d.170017.5\", \"started\": \"Mon, 30 Jun 2023-06-30 16:52:56 GMT\", \"status\": \"COMPLETE\" } ]", "[root@host01 ~]USD aws s3api list-objects --bucket awstestbucket --endpoint=http://10.0.209.002:8080 { \"Contents\": [ { \"Key\": \"awstestbucket/test\", \"LastModified\": \"2023-06-25T16:14:23.118Z\", \"ETag\": \"\\\"378c905939cc4459d249662dfae9fd6f\\\"\", \"Size\": 29, \"StorageClass\": \"STANDARD\", \"Owner\": { \"DisplayName\": \"test-user\", \"ID\": \"test-user\" } } ] }", "[root@host01 ~]USD aws s3 --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default ls s3://transition 2023-06-30 17:52:56 0 transition1 2023-06-30 17:51:59 0 transition2 2023-06-30 17:51:59 0 transition3 2023-06-30 17:51:58 0 transition4 2023-06-30 17:51:59 0 transition5", "[root@host01 ~]USD aws s3api --ca-bundle /etc/pki/ca-trust/source/anchors/myCA.pem --profile rgw --endpoint https://host02.example.com:8043 --region default head-object --key transition1 --bucket transition { \"AcceptRanges\": \"bytes\", \"LastModified\": \"2023-06-31T16:52:56+00:00\", \"ContentLength\": 0, \"ETag\": \"\\\"46ecb42fd0def0e42f85922d62d06766\\\"\", \"ContentType\": \"binary/octet-stream\", \"Metadata\": {}, \"StorageClass\": \"CLOUDTIER\" }", "radosgw-admin account create [--account-name={name}] [--account-id={id}] [--email={email}]", "radosgw-admin account create --account-name=user1 --account-id=12345 [email protected]", "radosgw-admin user create --uid={userid} --display-name={name} --account-id={accountid} --account-root --gen-secret --gen-access-key", "radosgw-admin user create --uid=rootuser1 --display-name=\"Root User One\" --account-id=account123 --account-root --gen-secret --gen-access-key", "radosgw-admin account rm --account-id={accountid}", "radosgw-admin account rm --account-id=account123", "radosgw-admin account stats --account-id={accountid} --sync-stats", "{ \"account\": \"account123\", \"data_size\": 3145728000, # Total size in bytes (3 GB) \"num_objects\": 12000, # Total number of objects 
\"num_buckets\": 5, # Total number of buckets \"usage\": { \"total_size\": 3145728000, # Total size in bytes (3 GB) \"num_objects\": 12000 } }", "radosgw-admin quota set --quota-scope=account --account-id={accountid} --max-size=10G radosgw-admin quota enable --quota-scope=account --account-id={accountid}", "{ \"status\": \"OK\", \"message\": \"Quota enabled for account account123\" }", "radosgw-admin quota set --quota-scope=bucket --account-id={accountid} --max-objects=1000000 radosgw-admin quota enable --quota-scope=bucket --account-id={accountid}", "{ \"status\": \"OK\", \"message\": \"Quota enabled for bucket in account account123\" }", "radosgw-admin quota set --quota-scope=account --account-id RGW12345678901234568 --max-buckets 10000 { \"id\": \"RGW12345678901234568\", \"tenant\": \"tenant1\", \"name\": \"account1\", \"email\": \"tenataccount1\", \"quota\": { \"enabled\": true, \"check_on_raw\": false, \"max_size\": 10737418240, \"max_size_kb\": 10485760, \"max_objects\": 100 }, \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"max_users\": 1000, \"max_roles\": 1000, \"max_groups\": 1000, \"max_buckets\": 1000, \"max_access_keys\": 4 } radosgw-admin quota enable --quota-scope=account --account-id RGW12345678901234568 { \"id\": \"RGW12345678901234568\", \"tenant\": \"tenant1\", \"name\": \"account1\", \"email\": \"tenataccount1\", \"quota\": { \"enabled\": true, \"check_on_raw\": false, \"max_size\": 10737418240, \"max_size_kb\": 10485760, \"max_objects\": 100 }, \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"max_users\": 1000, \"max_roles\": 1000, \"max_groups\": 1000, \"max_buckets\": 1000, \"max_access_keys\": 4 } radosgw-admin account get --account-id RGW12345678901234568 { \"id\": \"RGW12345678901234568\", \"tenant\": \"tenant1\", \"name\": \"account1\", \"email\": \"tenataccount1\", \"quota\": { \"enabled\": true, \"check_on_raw\": false, \"max_size\": 10737418240, \"max_size_kb\": 10485760, \"max_objects\": 100 }, \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"max_users\": 1000, \"max_roles\": 1000, \"max_groups\": 1000, \"max_buckets\": 1000, \"max_access_keys\": 4 } ceph versions { \"mon\": { \"ceph version 19.1.1-63.el9cp (8fa7b56d5e9f208c4233b0a8273665087bded8ae) squid (rc)\": 3 }, \"mgr\": { \"ceph version 19.1.1-63.el9cp (8fa7b56d5e9f208c4233b0a8273665087bded8ae) squid (rc)\": 3 }, \"osd\": { \"ceph version 19.1.1-63.el9cp (8fa7b56d5e9f208c4233b0a8273665087bded8ae) squid (rc)\": 9 }, \"rgw\": { \"ceph version 19.1.1-63.el9cp (8fa7b56d5e9f208c4233b0a8273665087bded8ae) squid (rc)\": 3 }, \"overall\": { \"ceph version 19.1.1-63.el9cp (8fa7b56d5e9f208c4233b0a8273665087bded8ae) squid (rc)\": 18 } }", "radosgw-admin user modify --uid={userid} --account-id={accountid}", "{\"TopicConfigurations\": [{ \"Id\": \"ID1\", \"TopicArn\": \"arn:aws:sns:default::topic1\", \"Events\": [\"s3:ObjectCreated:*\"]}]}", "{\"TopicConfigurations\": [{ \"Id\": \"ID1\", \"TopicArn\": \"arn:aws:sns:default:RGW00000000000000001:topic1\", \"Events\": [\"s3:ObjectCreated:*\"]}]}", "radosgw-admin topic rm --topic topic1", "radosgw-admin user modify --uid <user_ID> --account-id <Account_ID> --account-root", "radosgw-admin user policy attach --uid <user_ID> --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess", "radosgw-admin user modify --uid <user_ID> 
--account-root=0", "radosgw-admin user create --uid= name --display-name=\" USER_NAME \"", "radosgw-admin user create --uid=\"testuser\" --display-name=\"Jane Doe\" { \"user_id\": \"testuser\", \"display_name\": \"Jane Doe\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"CEP28KDIQXBKU4M15PDC\", \"secret_key\": \"MARoio8HFc8JxhEilES3dKFVj8tV3NOOYymihTLO\" } ], \"swift_keys\": [], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }", "radosgw-admin subuser create --uid= NAME --subuser= NAME :swift --access=full", "radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"13TLtdEW7bCqgttQgPzxFxziu0AgabtOc6vM8DLA\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }", "radosgw-admin key create --subuser= NAME :swift --key-type=swift --gen-secret", "radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret { \"user_id\": \"testuser\", \"display_name\": \"First User\", \"email\": \"\", \"suspended\": 0, \"max_buckets\": 1000, \"auid\": 0, \"subusers\": [ { \"id\": \"testuser:swift\", \"permissions\": \"full-control\" } ], \"keys\": [ { \"user\": \"testuser\", \"access_key\": \"O8JDE41XMI74O185EHKD\", \"secret_key\": \"i4Au2yxG5wtr1JK01mI8kjJPM93HNAoVWOSTdJd6\" } ], \"swift_keys\": [ { \"user\": \"testuser:swift\", \"secret_key\": \"a4ioT4jEP653CDcdU8p4OuhruwABBRZmyNUbnSSt\" } ], \"caps\": [], \"op_mask\": \"read, write, delete\", \"default_placement\": \"\", \"placement_tags\": [], \"bucket_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"check_on_raw\": false, \"max_size\": -1, \"max_size_kb\": 0, \"max_objects\": -1 }, \"temp_url_keys\": [], \"type\": \"rgw\" }", "subscription-manager repos --enable=rhel-9-for-x86_64-highavailability-rpms", "dnf install python3-boto3", "vi s3test.py", "import boto3 endpoint = \"\" # enter the endpoint URL along with the port \"http:// URL : PORT \" access_key = ' ACCESS ' secret_key = ' SECRET ' s3 = boto3.client( 's3', endpoint_url=endpoint, aws_access_key_id=access_key, aws_secret_access_key=secret_key ) s3.create_bucket(Bucket='my-new-bucket') response = s3.list_buckets() for bucket in response['Buckets']: print(\"{name}\\t{created}\".format( name = bucket['Name'], created = bucket['CreationDate'] 
))", "python3 s3test.py", "my-new-bucket 2022-05-31T17:09:10.000Z", "sudo yum install python-setuptools sudo easy_install pip sudo pip install --upgrade setuptools sudo pip install --upgrade python-swiftclient", "swift -A http:// IP_ADDRESS : PORT /auth/1.0 -U testuser:swift -K ' SWIFT_SECRET_KEY ' list", "swift -A http://10.10.143.116:80/auth/1.0 -U testuser:swift -K '244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA' list", "my-new-bucket" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/object_gateway_guide/index
Chapter 2. Product features
Chapter 2. Product features Red Hat OpenShift AI provides several features for data scientists and IT operations administrators. 2.1. Features for data scientists Containers While tools such as JupyterLab already offer intuitive ways for data scientists to develop models on their machines, there are always inherent complexities involved with collaboration and sharing work. Moreover, using specialized hardware such as powerful GPUs can be very expensive when you have to buy and maintain your own. The Jupyter environment that is included with OpenShift AI lets you take your development environment anywhere you need it to be. Because all of the workloads are run as containers, collaboration is as easy as sharing an image with your team members, or even simply adding it to the list of default containers that they can use. As a result, GPUs and large amounts of memory are significantly more accessible, since you are no longer limited by what your laptop can support. Integration with third-party machine learning tools We have all run into situations where our favorite tools or services do not play well with one another. OpenShift AI is designed with flexibility in mind. You can use a wide range of open source and third-party tools with OpenShift AI. These tools support the complete machine learning lifecycle, from data engineering and feature extraction to model deployment and management. Collaboration on notebooks with Git Use Jupyter's Git interface to work collaboratively with others, and keep good track of the changes to your code. Securely built notebook images Choose from a default set of notebook images that are pre-configured with the tools and libraries that you need for model development. Software stacks, especially those involved in machine learning, tend to be complex systems. There are many modules and libraries in the Python ecosystem that can be used, so determining which versions of what libraries to use can be very challenging. OpenShift AI includes many packaged notebook images that have been built with insight from data scientists and recommendation engines. You can start new projects quickly on the right foot without worrying about downloading unproven and possibly insecure images from random upstream repositories. Custom workbench images In addition to workbench images provided and supported by Red Hat and independent software vendors (ISVs), you can configure custom workbench images that cater to your project's specific requirements. Data science pipelines OpenShift AI supports data science pipelines 2.0, for an efficient way of running your data science workloads. You can standardize and automate machine learning workflows that enable you to develop and deploy your data science models. Model serving As a data scientist, you can deploy your trained machine-learning models to serve intelligent applications in production. Deploying or serving a model makes the model's functions available as a service endpoint that can be used for testing or integration into applications. You have much control over how this serving is performed. Optimize your data science models with accelerators If you work with large data sets, you can optimize the performance of your data science models in OpenShift AI with NVIDIA graphics processing units (GPUs) or Intel Gaudi AI accelerators. Accelerators enable you to scale your work, reduce latency, and increase productivity. 2.2. 
Features for IT Operations administrators Manage users with an identity provider OpenShift AI supports the same authentication systems as your OpenShift cluster. By default, OpenShift AI is accessible to all users listed in your identity provider and those users do not need a separate set of credentials to access OpenShift AI. Optionally, you can limit the set of users who have access by creating an OpenShift group that specifies a subset of users. You can also create an OpenShift group that identifies the list of users who have administrator access to OpenShift AI. Manage resources with OpenShift Use your existing OpenShift knowledge to configure and manage resources for your OpenShift AI users. Control Red Hat usage data collection Choose whether to allow Red Hat to collect data about OpenShift AI usage in your cluster. Usage data collection is enabled by default when you install OpenShift AI on your OpenShift cluster. Apply autoscaling to your cluster to reduce usage costs Use the cluster autoscaler to adjust the size of your cluster to meet its current needs and optimize costs. Manage resource usage by stopping idle notebooks Reduce resource usage in your OpenShift AI deployment by automatically stopping notebook servers that have been idle for a period of time. Implement model-serving runtimes OpenShift AI provides support for model-serving runtimes. A model-serving runtime provides integration with a specified model server and the model frameworks that it supports. By default, OpenShift AI includes the OpenVINO Model Server runtime. However, if this runtime doesn't meet your needs (for example, if it doesn't support a particular model framework), you can add your own custom runtimes. Install in a disconnected environment OpenShift AI Self-Managed supports installation in a disconnected environment. Disconnected clusters are on a restricted network, typically behind a firewall and unable to reach the Internet. In this case, clusters cannot access the remote registries where Red Hat provided OperatorHub sources reside. In this case, you deploy the OpenShift AI Operator to a disconnected environment by using a private registry in which you have mirrored (copied) the relevant images. Manage accelerators Enable NVIDIA graphics processing units (GPUs) or Intel Gaudi AI accelerators in OpenShift AI and allow your data scientists to use compute-heavy workloads.
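To make the "Manage users with an identity provider" item above concrete, the following is a minimal sketch of limiting access with OpenShift groups. The group and user names are placeholders rather than values taken from this guide, and the groups that OpenShift AI actually recognizes are the ones an administrator selects in the OpenShift AI user management settings.
# Create a group for regular OpenShift AI users and one for administrators (names are examples only)
oc adm groups new ai-users jdoe asmith
oc adm groups new ai-admins opsadmin
# Adjust membership later as people join or leave the team
oc adm groups add-users ai-users newuser
oc adm groups remove-users ai-users jdoe
Once the groups exist, they can be selected in the OpenShift AI dashboard's user management settings so that only their members can access the product.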
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/introduction_to_red_hat_openshift_ai/product-features_intro
31.3. Data Efficiency Testing Procedures
31.3. Data Efficiency Testing Procedures Successful validation of VDO is dependent upon following a well-structured test procedure. This section provides a series of steps to follow, along with the expected results, as examples of tests to consider when participating in an evaluation. Test Environment The test cases in this section make the following assumptions about the test environment: One or more Linux physical block devices are available. The target block device (for example, /dev/sdb ) is larger than 512 GB. Flexible I/O Tester ( fio ) version 2.1.1 or later is installed. VDO is installed. The following information should be recorded at the start of each test in order to ensure that the test environment is fully understood: The Linux build used, including the kernel build number. A complete list of installed packages, as obtained from the rpm -qa command. Complete system specifications: CPU type and quantity (available in /proc/cpuinfo ). Installed memory and the amount available after the base OS is running (available in /proc/meminfo ). Type(s) of drive controller(s) used. Type(s) and quantity of disk(s) used. A complete list of running processes (from ps aux or a similar listing). Name of the Physical Volume and the Volume Group created for use with VDO ( pvs and vgs listings). File system used when formatting the VDO volume (if any). Permissions on the mounted directory. Contents of /etc/vdoconf.yaml . Location of the VDO files. You can capture much of the required information by running sosreport . Workloads Effectively testing VDO requires the use of data sets that simulate real world workloads. The data sets should provide a balance between data that can be deduplicated and/or compressed and data that cannot in order to demonstrate performance under different conditions. There are several tools that can synthetically generate data with repeatable characteristics. Two utilities in particular, VDbench and fio , are recommended for use during testing. This guide uses fio . Understanding the arguments is critical to a successful evaluation: Table 31.1. fio Options Argument Description Value --size The quantity of data fio will send to the target per job (see numjobs below). 100 GB --bs The block size of each read/write request produced by fio. Red Hat recommends a 4 KB block size to match VDO's 4 KB default. 4k --numjobs The number of jobs that fio will create to run the benchmark. Each job sends the amount of data specified by the --size parameter. The first job sends data to the device at the offset specified by the --offset parameter. Subsequent jobs write the same region of the disk (overwriting) unless the --offset_increment parameter is provided, which will offset each job from where the previous job began by that value. To achieve peak performance on flash at least two jobs are recommended. One job is typically enough to saturate rotational disk (HDD) throughput. 1 (HDD) 2 (SSD) --thread Instructs fio jobs to be run in threads rather than being forked, which may provide better performance by limiting context switching. <N/A> --ioengine There are several I/O engines available in Linux that are able to be tested using fio. Red Hat testing uses the asynchronous unbuffered engine ( libaio ). If you are interested in another engine, discuss that with your Red Hat Sales Engineer. The Linux libaio engine is used to evaluate workloads in which one or more processes are making random requests simultaneously.
libaio allows multiple requests to be made asynchronously from a single thread before any data has been retrieved, which limits the number of context switches that would be required if the requests were provided by many threads via a synchronous engine. libaio --direct When set, direct allows requests to be submitted to the device bypassing the Linux Kernel's page cache. Libaio Engine: libaio must be used with direct enabled (=1) or the kernel may resort to the sync API for all I/O requests. 1 (libaio) --iodepth The number of I/O buffers in flight at any time. A high iodepth will usually increase performance, particularly for random reads or writes. High depths ensure that the controller always has requests to batch. However, setting iodepth too high (greater than 1K, typically) may cause undesirable latency. While Red Hat recommends an iodepth between 128 and 512, the final value is a trade-off and depends on how your application tolerates latency. 128 (minimum) --iodepth_batch_submit The number of I/Os to create when the iodepth buffer pool begins to empty. This parameter limits task switching from I/O to buffer creation during the test. 16 --iodepth_batch_complete The number of I/Os to complete before submitting a batch ( iodepth_batch_complete ). This parameter limits task switching from I/O to buffer creation during the test. 16 --gtod_reduce Disables time-of-day calls to calculate latency. This setting will lower throughput if disabled (=0), so it should be enabled (=1) unless latency measurement is necessary. 1 31.3.1. Configuring a VDO Test Volume 1. Create a VDO Volume with a Logical Size of 1 TB on a 512 GB Physical Volume Create a VDO volume. To test the VDO async mode on top of synchronous storage, create an asynchronous volume using the --writePolicy=async option: To test the VDO sync mode on top of synchronous storage, create a synchronous volume using the --writePolicy=sync option: Format the new device with an XFS or ext4 file system. For XFS: For ext4: Mount the formatted device: 31.3.2. Testing VDO Efficiency 2. Test Reading and Writing to the VDO Volume Write 32 GB of random data to the VDO volume: Read the data from the VDO volume and write it to another location not on the VDO volume: Compare the two files using diff , which should report that the files are the same: Copy the file to a second location on the VDO volume: Compare the third file to the second file. This should report that the files are the same: 3. Remove the VDO Volume Unmount the file system created on the VDO volume: Run the command to remove the VDO volume vdo0 from the system: Verify that the volume has been removed. There should be no listing in vdo list for the VDO partition: 4. Measure Deduplication Create and mount a VDO volume following Section 31.3.1, "Configuring a VDO Test Volume" . Create 10 directories on the VDO volume named vdo1 through vdo10 to hold 10 copies of the test data set: Examine the amount of disk space used according to the file system: Consider tabulating the results in a table: Statistic Bare File System After Seed After 10 Copies File System Used Size 198 MB VDO Data Used VDO Logical Used Run the following command and record the values. "Data blocks used" is the number of blocks used by user data on the physical device running under VDO. "Logical blocks used" is the number of blocks used before optimization.
It will be used as the starting point for measurements Create a data source file in the top level of the VDO volume Re-examine the amount of used physical disk space in use. This should show an increase in the number of blocks used corresponding to the file just written: Copy the file to each of the 10 subdirectories: Once again, check the amount of physical disk space used (data blocks used). This number should be similar to the result of step 6 above, with only a slight increase due to file system journaling and metadata: Subtract this new value of the space used by the file system from the value found before writing the test data. This is the amount of space consumed by this test from the file system's perspective. Observe the space savings in your recorded statistics: Note: In the following table, values have been converted to MB/GB. vdostats "blocks" are 4,096 B. Statistic Bare File System After Seed After 10 Copies File System Used Size 198 MB 4.2 GB 45 GB VDO Data Used 4 MB 4.1 GB 4.1 GB VDO Logical Used 23.6 GB* 27.8 GB 68.7 GB * File system overhead for 1.6 TB formatted drive 5. Measure Compression Create a VDO volume of at least 10 GB of physical and logical size. Add options to disable deduplication and enable compression: Inspect VDO statistics before transfer; make note of data blocks used and logical blocks used (both should be zero): Format the new device with an XFS or ext4 file system. For XFS: For ext4: Mount the formatted device: Synchronize the VDO volume to complete any unfinished compression: Inspect VDO statistics again. Logical blocks used - data blocks used is the number of 4 KB blocks saved by compression for the file system alone. VDO optimizes file system overhead as well as actual user data: Copy the contents of /lib to the VDO volume. Record the total size: Synchronize Linux caches and the VDO volume: Inspect VDO statistics once again. Observe the logical and data blocks used: Logical blocks used - data blocks used represents the amount of space used (in units of 4 KB blocks) for the copy of your /lib files. The total size (from the table in the section called "4. Measure Deduplication" ) - (logical blocks used-data blocks used * 4096) = bytes saved by compression. Remove the VDO volume: 6. Test VDO Compression Efficiency Create and mount a VDO volume following Section 31.3.1, "Configuring a VDO Test Volume" . Repeat the experiments in the section called "4. Measure Deduplication" and the section called "5. Measure Compression" without removing the volume. Observe changes to space savings in vdostats . Experiment with your own datasets. 7. Understanding TRIM and DISCARD Thin provisioning allows a logical or virtual storage space to be larger than the underlying physical storage. Applications such as file systems benefit from running on the larger virtual layer of storage, and data-efficiency techniques such as data deduplication reduce the number of physical data blocks needed to store all of the data. To benefit from these storage savings, the physical storage layer needs to know when application data has been deleted. Traditional file systems did not have to inform the underlying storage when data was deleted. File systems that work with thin provisioned storage send TRIM or DISCARD commands to inform the storage system when a logical block is no longer required. 
These commands can be sent whenever a block is deleted using the discard mount option, or these commands can be sent in a controlled manner by running utilities such as fstrim that tell the file system to detect which logical blocks are unused and send the information to the storage system in the form of a TRIM or DISCARD command. Important For more information on how thin provisioning works, see Thinly-Provisioned Logical Volumes (Thin Volumes) in the Red Hat Enterprise Linux 7 Logical Volume Manager Administration Guide . To see how this works: Create and mount a new VDO logical volume following Section 31.3.1, "Configuring a VDO Test Volume" . Trim the file system to remove any unneeded blocks (this may take a long time): Record the initial state in following table below by entering: to see how much capacity is used in the file system, and run vdostats to see how many physical and logical data blocks are being used. Create a 1 GB file with non-duplicate data in the file system running on top of VDO: and then collect the same data. The file system should have used an additional 1 GB, and the data blocks used and logical blocks used have increased similarly. Run fstrim /mnt/VDOVolume and confirm that this has no impact after creating a new file. Delete the 1 GB file: Check and record the parameters. The file system is aware that a file has been deleted, but there has been no change to the number of physical or logical blocks because the file deletion has not been communicated to the underlying storage. Run fstrim /mnt/VDOVolume and record the same parameters. fstrim looks for free blocks in the file system and sends a TRIM command to the VDO volume for unused addresses, which releases the associated logical blocks, and VDO processes the TRIM to release the underlying physical blocks. Step File Space Used (MB) Data Blocks Used Logical Blocks Used Initial Add 1 GB File Run fstrim Delete 1 GB File Run fstrim From this exercise, the TRIM process is needed so the underlying storage can have an accurate knowledge of capacity utilization. fstrim is a command line tool that analyzes many blocks at once for greater efficiency. An alternative method is to use the file system discard option when mounting. The discard option will update the underlying storage after each file system block is deleted, which can slow throughput but provides for great utilization awareness. It is also important to understand that the need to TRIM or DISCARD unused blocks is not unique to VDO; any thin-provisioned storage system has the same challenge
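To illustrate how the options in Table 31.1 combine into a single command, the following is a minimal sketch of a 4 KB random-write fio job against the file system mounted on the VDO volume. The --name, --rw, and --directory parameters are not listed in the table and are shown here only as reasonable assumptions; adjust the job count, iodepth, and data size to match your hardware and the guidance above.
# Example random-write workload using the SSD values recommended in Table 31.1 (sketch only)
fio --name=vdo-efficiency-test \
    --directory=/mnt/VDOVolume \
    --rw=randwrite \
    --size=100g \
    --bs=4k \
    --numjobs=2 \
    --thread \
    --ioengine=libaio \
    --direct=1 \
    --iodepth=128 \
    --iodepth_batch_submit=16 \
    --iodepth_batch_complete=16 \
    --gtod_reduce=1
Each job writes 100 GB of 4 KB requests through the libaio engine with the page cache bypassed, matching the values the table recommends for flash storage; for a rotational disk, reduce --numjobs to 1.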
[ "vdo create --name=vdo0 --device= /dev/sdb --vdoLogicalSize=1T --writePolicy=async --verbose", "vdo create --name=vdo0 --device= /dev/sdb --vdoLogicalSize=1T --writePolicy=sync --verbose", "mkfs.xfs -K /dev/mapper/vdo0", "mkfs.ext4 -E nodiscard /dev/mapper/vdo0", "mkdir /mnt/VDOVolume mount /dev/mapper/vdo0 /mnt/VDOVolume && chmod a+rwx /mnt/VDOVolume", "dd if=/dev/urandom of=/mnt/VDOVolume/testfile bs=4096 count=8388608", "dd if=/mnt/VDOVolume/testfile of=/home/user/testfile bs=4096", "diff -s /mnt/VDOVolume/testfile /home/user/testfile", "dd if=/home/user/testfile of=/mnt/VDOVolume/testfile2 bs=4096", "diff -s /mnt/VDOVolume/testfile2 /home/user/testfile", "umount /mnt/VDOVolume", "vdo remove --name=vdo0", "vdo list --all | grep vdo", "mkdir /mnt/VDOVolume/vdo{01..10}", "df -h /mnt/VDOVolume Filesystem Size Used Avail Use% Mounted on /dev/mapper/vdo0 1.5T 198M 1.4T 1% /mnt/VDOVolume", "vdostats --verbose | grep \"blocks used\" data blocks used : 1090 overhead blocks used : 538846 logical blocks used : 6059434", "dd if=/dev/urandom of=/mnt/VDOVolume/sourcefile bs=4096 count=1048576 4294967296 bytes (4.3 GB) copied, 540.538 s, 7.9 MB/s", "df -h /mnt/VDOVolume Filesystem Size Used Avail Use% Mounted on /dev/mapper/vdo0 1.5T 4.2G 1.4T 1% /mnt/VDOVolume", "vdostats --verbose | grep \"blocks used\" data blocks used : 1050093 (increased by 4GB) overhead blocks used : 538846 (Did not change) logical blocks used : 7108036 (increased by 4GB)", "for i in {01..10}; do cp /mnt/VDOVolume/sourcefile /mnt/VDOVolume/vdoUSDi done", "df -h /mnt/VDOVolume Filesystem Size Used Avail Use% Mounted on /dev/mapper/vdo0 1.5T 45G 1.3T 4% /mnt/VDOVolume", "vdostats --verbose | grep \"blocks used\" data blocks used : 1050836 (increased by 3M) overhead blocks used : 538846 logical blocks used : 17594127 (increased by 41G)", "vdo create --name=vdo0 --device= /dev/sdb --vdoLogicalSize= 10G --verbose --deduplication=disabled --compression=enabled", "vdostats --verbose | grep \"blocks used\"", "mkfs.xfs -K /dev/mapper/vdo0", "mkfs.ext4 -E nodiscard /dev/mapper/vdo0", "mkdir /mnt/VDOVolume mount /dev/mapper/vdo0 /mnt/VDOVolume && chmod a+rwx /mnt/VDOVolume", "sync && dmsetup message vdo0 0 sync-dedupe", "vdostats --verbose | grep \"blocks used\"", "cp -vR /lib /mnt/VDOVolume sent 152508960 bytes received 60448 bytes 61027763.20 bytes/sec total size is 152293104 speedup is 1.00", "sync && dmsetup message vdo0 0 sync-dedupe", "vdostats --verbose | grep \"blocks used\"", "umount /mnt/VDOVolume && vdo remove --name=vdo0", "fstrim /mnt/VDOVolume", "df -m /mnt/VDOVolume", "dd if=/dev/urandom of=/mnt/VDOVolume/file bs=1M count=1K", "rm /mnt/VDOVolume/file" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/vdo-ev-data-testing
Chapter 2. Metrics solution components
Chapter 2. Metrics solution components Red Hat recommends using the Performance Co-Pilot to collect and archive Satellite metrics. Performance Co-Pilot (PCP) Performance Co-Pilot is a suite of tools and libraries for acquiring, storing, and analyzing system-level performance measurements. You can use PCP to analyze live and historical metrics in the CLI. Performance Metric Domain Agents (PMDA) A Performance Metric Domain Agent is a PCP add-on which enables access to metrics of an application or service. To gather all metrics relevant to Satellite, you have to install PMDA for Apache HTTP Server and PostgreSQL. Grafana A web application that visualizes metrics collected by PCP. To analyze metrics in the web UI, you have to install Grafana and the Grafana PCP plugin.
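As an illustrative sketch only, these components map to standard Red Hat Enterprise Linux packages and services; the exact, supported installation steps (including any use of satellite-maintain packages install on a Satellite server) are given in the procedures later in this guide:

dnf install pcp pcp-pmda-apache pcp-pmda-postgresql   # PCP core plus the Apache HTTP Server and PostgreSQL PMDAs
systemctl enable --now pmcd pmlogger                  # start metric collection and archiving
dnf install grafana grafana-pcp                       # Grafana and the Grafana PCP plugin
systemctl enable --now grafana-server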
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/monitoring_satellite_performance/metrics-solution-components_monitoring
17.4. xinetd Configuration Files
17.4. xinetd Configuration Files The configuration files for xinetd are as follows: /etc/xinetd.conf - The global xinetd configuration file. /etc/xinetd.d/ - The directory containing all service-specific files. 17.4.1. The /etc/xinetd.conf File The /etc/xinetd.conf file contains general configuration settings which affect every service under xinetd 's control. It is read once when the xinetd service is started, so for configuration changes to take effect, the administrator must restart the xinetd service. Below is a sample /etc/xinetd.conf file: These lines control the following aspects of xinetd : instances - Sets the maximum number of requests xinetd can handle at once. log_type - Configures xinetd to use the authpriv log facility, which writes log entries to the /var/log/secure file. Adding a directive such as FILE /var/log/xinetdlog would create a custom log file called xinetdlog in the /var/log/ directory. log_on_success - Configures xinetd to log if the connection is successful. By default, the remote host's IP address and the process ID of the server processing the request are recorded. log_on_failure - Configures xinetd to log if there is a connection failure or if the connection is not allowed. cps - Configures xinetd to allow no more than 25 connections per second to any given service. If this limit is reached, the service is retired for 30 seconds. includedir /etc/xinetd.d/ - Includes options declared in the service-specific configuration files located in the /etc/xinetd.d/ directory. Refer to Section 17.4.2, "The /etc/xinetd.d/ Directory" for more information. Note Often, both the log_on_success and log_on_failure settings in /etc/xinetd.conf are further modified in the service-specific configuration files. For this reason, more information may appear in a given service's log than the /etc/xinetd.conf file may indicate. Refer to Section 17.4.3.1, "Logging Options" for additional information.
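Each file in the /etc/xinetd.d/ directory contains a single service block. The following listing is purely illustrative; the attributes shipped for a given service on your system may differ:

cat /etc/xinetd.d/rsync
service rsync
{
        disable         = yes
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/bin/rsync
        server_args     = --daemon
        log_on_failure  += USERID
}

After editing any file under /etc/xinetd.d/ (or /etc/xinetd.conf itself), restart the xinetd service, for example with service xinetd restart, for the change to take effect.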
[ "defaults { instances = 60 log_type = SYSLOG authpriv log_on_success = HOST PID log_on_failure = HOST cps = 25 30 } includedir /etc/xinetd.d" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-tcpwrappers-xinetd-config
D.3. Controlling Activation with Tags
D.3. Controlling Activation with Tags You can specify in the configuration file that only certain logical volumes should be activated on that host. For example, the following entry acts as a filter for activation requests (such as vgchange -ay ) and only activates vg1/lvol0 and any logical volumes or volume groups with the database tag in the metadata on that host. There is a special match "@*" that causes a match only if any metadata tag matches any host tag on that machine. As another example, consider a situation where every machine in the cluster has the following entry in the configuration file: If you want to activate vg1/lvol2 only on host db2 , do the following: Run lvchange --addtag @db2 vg1/lvol2 from any host in the cluster. Run lvchange -ay vg1/lvol2 . This solution involves storing hostnames inside the volume group metadata.
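A minimal command sketch of the db2 example above; the volume group, logical volume, and tag names are those used in the example:

lvchange --addtag @db2 vg1/lvol2      # run on any host in the cluster: store the db2 tag in the volume group metadata
lvs -o lv_name,lv_tags vg1            # confirm the tag was recorded
lvchange -ay vg1/lvol2                # on host db2, activation succeeds because the metadata tag matches a host tag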
[ "activation { volume_list = [\"vg1/lvol0\", \"@database\" ] }", "tags { hosttags = 1 }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/tag_activation
6.2. VDB Definition: The VDB Element
6.2. VDB Definition: The VDB Element Attributes name The name of the VDB. The VDB name is referenced through the driver or datasource at connection time. version The version of the VDB (should be a positive integer). This determines the deployed directory location (see Name), and provides an explicit versioning mechanism for the VDB name. Property Elements cache-metadata Can be "true" or "false". If "false", JBoss Data Virtualization will obtain metadata once for every launch of the VDB. "true" will save a file containing the metadata into the EAP_HOME / MODE /data directory. Defaults to "false" for -vdb.xml deployments, otherwise "true". query-timeout Sets the default query timeout in milliseconds for queries executed against this VDB. 0 indicates that the server default query timeout should be used. Defaults to 0. Will have no effect if the server default query timeout is set to a lesser value. Note that clients can still set their own timeouts that will be managed on the client side. lib Set to a list of modules for the VDB classpath for user defined function loading. See also Support for Non-Pushdown User Defined Functions in Red Hat JBoss Data Virtualization Development Guide: Server Development . security-domain Set to the security domain to use if a specific security domain is applicable to the VDB. Otherwise the security domain list from the transport will be used. Important An administrator needs to configure a matching "custom-security" login module in the standalone.xml configuration file before the VDB is deployed. connection.XXX This is for use by the ODBC transport and OData. They use it to set the default connection/execution properties. Note that the properties are set on the connection after it has been established. authentication-type Authentication type of the configured security domain. Allowed values currently are (GSS, USERPASSWORD). The default is set on the transport (typically USERPASSWORD). password-pattern Regular expression matched against the connecting user's name that determines if USERPASSWORD authentication is used. password-pattern takes precedence over authentication-type. The default is authentication-type. gss-pattern Regular expression matched against the connecting user's name that determines if GSS authentication is used. gss-pattern takes precedence over password-pattern. The default is password-pattern. model.visible Used to override the visibility of imported VDB models, where model is the name of the imported model. include-pg-metadata By default, PG metadata is always added to the VDB unless the system property org.teiid.addPGMetadata is set to false. This property enables adding PG metadata per VDB. Please note that if you are using ODBC to access your VDB, the VDB must include PG metadata. lazy-invalidate By default, TTL expiration is invalidating. Setting lazy-invalidate to true makes TTL refreshes non-invalidating.
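The attributes and properties above appear in a -vdb.xml file in the following general shape. This skeleton is illustrative only; the VDB name, version, and property values are placeholders, and real definitions also contain model, source, and translator elements:

<vdb name="ExampleVDB" version="1">
    <description>Illustrative dynamic VDB definition.</description>
    <property name="query-timeout" value="60000" />
    <property name="security-domain" value="custom-security" />
    <!-- model, source, and translator definitions go here -->
</vdb>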
[ "<property name=\"security-domain\" value=\"custom-security\" />", "<property name=\"connection.partialResultsMode\" value=\"true\" />" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/vdb_definition_the_vbd_element
Chapter 2. Installing RHEL AI on bare metal
Chapter 2. Installing RHEL AI on bare metal For installing Red Hat Enterprise Linux AI on bare metal, you can use various methods provided in the following procedure to boot and deploy your machine and start interacting with Red Hat Enterprise Linux AI. 2.1. Deploying RHEL AI on bare metal You can deploy Red Hat Enterprise Linux AI with the RHEL AI ISO image in the following ways: * Kickstart * RHEL Graphical User Interface (GUI) This image is bootable on various hardware accelerators. For more information about supported hardware, see "Red Hat Enterprise Linux AI hardware requirements" in "Getting Started" Prerequisites You have downloaded the Red Hat Enterprise Linux AI ISO image from https://access.redhat.com/ . Important Red Hat Enterprise Linux AI currently is only bootable on NVIDIA bare metal hardware. Important Red Hat Enterprise Linux AI requires additional storage for the RHEL AI data as well as the update of image-mode Red Hat Enterprise Linux. The default location for the InstructLab data is in the home/<user> directory. The minimum recommendation for data storage in the /home directory is 1 TB. During updates, the bootc command needs extra space to store temporary data. The minimum storage recommendation for the / path is 120 GB. You need to consider your machine's storage when partitioning the schemes of your disks. Procedure Interactive GUI You can use the interactive Red Hat Enterprise Linux graphical installer and the RHEL AI ISO image to deploy RHEL AI on your machine. For more information about booting RHEL using an ISO file using the GUI, see the Interactively installing RHEL from installation media . Kickstart with embedded container image You can customize the RHEL AI installation by using your own Kickstart file. Create your own Kickstart file with your preferred parameters. For more information about creating Kickstart files, see the Creating Kickstart files in the RHEL documentation. Sample Kickstart file for RHEL AI called rhelai-bootc.ks # use the embedded container image ostreecontainer --url=/run/install/repo/container --transport=oci --no-signature-verification # switch bootc to point to Red Hat container image for upgrades %post bootc switch --mutate-in-place --transport registry registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.1 touch /etc/cloud/cloud-init.disabled %end ## user customizations follow # customize this for your target system network environment network --bootproto=dhcp --device=link --activate # customize this for your target system desired disk partitioning clearpart --all --initlabel --disklabel=gpt reqpart --add-boot part / --grow --fstype xfs # services can also be customized via Kickstart firewall --disabled services --enabled=sshd # optionally add a user user --name=cloud-user --groups=wheel --plaintext --password <password> sshkey --username cloud-user "ssh-ed25519 AAAAC3Nza....." # if desired, inject an SSH key for root rootpw --iscrypted locked sshkey --username root "ssh-ed25519 AAAAC3Nza..." reboot The sample Kickstart uses the embedded container image in the ISO file, signaled by the ostreecontainer command with the --url=/run/install/repo/container parameter. The bootc switch parameter points to the Red Hat registry for future updates and then you can add your own customizations. You need to embed the Kickstart into the RHEL AI ISO so your machine can restart and deploy RHEL AI. In the following example, rhelai-bootc.ks is the name of the Kickstart file you're embedding into the boot ISO. 
The mkksiso utility is found in the lorax rpm package. USD mkksiso rhelai-bootc.ks <downloaded-iso-image> rhelai-bootc-ks.iso where <downloaded-iso-image> Specify the ISO image you downloaded from access.redhat.com . You can then boot your machine using this boot ISO and the installation starts automatically. After the installation is complete, the host reboots and you can login to the new system using the credentials used in the Kickstart file. Important Be aware that having a custom Kickstart in your ISO will automatically start the installation, and disk partitioning, without prompting the user. Based on configuration, the local storage may be completely wiped or overwritten. Kickstart with custom container image You can customize a Kickstart file with your preferred parameters to boot Red Hat Enterprise Linux AI on your machine Create your own Kickstart file with your preferred parameters. For more information on creating Kickstart files, see the Creating Kickstart files in the RHEL documentation. Sample Kickstart file for RHEL AI called rhelai-bootc.ks # customize this for your target system network environment network --bootproto=dhcp --device=link --activate # customize this for your target system desired disk partitioning clearpart --all --initlabel --disklabel=gpt reqpart --add-boot part / --grow --fstype xfs # customize this to include your own bootc container ostreecontainer --url quay.io/<your-user-name>/nvidia-bootc:latest # services can also be customized via Kickstart firewall --disabled services --enabled=sshd # optionally add a user user --name=cloud-user --groups=wheel --plaintext --password <password> sshkey --username cloud-user "ssh-ed25519 AAAAC3Nza....." # if desired, inject an SSH key for root rootpw --iscrypted locked sshkey --username root "ssh-ed25519 AAAAC3Nza..." reboot You need to embed the Kickstart into the RHEL AI ISO so your machine can restart and deploy RHEL AI. In the following example, rhelai-bootc.ks is the name of the Kickstart file you're embedding into the boot ISO. The mkksiso utility is found in the lorax rpm package. USD mkksiso rhelai-bootc.ks <downloaded-iso-image> rhelai-bootc-ks.iso where <downloaded-iso-image> Specify the ISO image you downloaded from access.redhat.com . You can then boot your machine using this boot ISO and the installation starts automatically. After the installation is complete, the host reboots and you can login to the new system using the credentials used in the Kickstart file. Important Be aware that having a custom Kickstart in your ISO will automatically start the installation, and disk partitioning, without prompting the user. Based on configuration, the local storage may be completely wiped or overwritten. Verification To verify that your Red Hat Enterprise Linux AI tools installed correctly, you need to run the ilab command: USD ilab Example output USD ilab Usage: ilab [OPTIONS] COMMAND [ARGS]... CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by... model Command Group for Interacting with the Models in InstructLab. 
system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train
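The following recap shows one way to check and embed the Kickstart file described above. The ksvalidator check is optional and ships in the pykickstart package; mkksiso ships in the lorax package, and <downloaded-iso-image> is the ISO you downloaded from access.redhat.com:

ksvalidator rhelai-bootc.ks                                          # optional syntax check of the Kickstart file
mkksiso rhelai-bootc.ks <downloaded-iso-image> rhelai-bootc-ks.iso   # embed the Kickstart into the boot ISO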
[ "use the embedded container image ostreecontainer --url=/run/install/repo/container --transport=oci --no-signature-verification switch bootc to point to Red Hat container image for upgrades %post bootc switch --mutate-in-place --transport registry registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.1 touch /etc/cloud/cloud-init.disabled %end ## user customizations follow customize this for your target system network environment network --bootproto=dhcp --device=link --activate customize this for your target system desired disk partitioning clearpart --all --initlabel --disklabel=gpt reqpart --add-boot part / --grow --fstype xfs services can also be customized via Kickstart firewall --disabled services --enabled=sshd optionally add a user user --name=cloud-user --groups=wheel --plaintext --password <password> sshkey --username cloud-user \"ssh-ed25519 AAAAC3Nza.....\" if desired, inject an SSH key for root rootpw --iscrypted locked sshkey --username root \"ssh-ed25519 AAAAC3Nza...\" reboot", "mkksiso rhelai-bootc.ks <downloaded-iso-image> rhelai-bootc-ks.iso", "customize this for your target system network environment network --bootproto=dhcp --device=link --activate customize this for your target system desired disk partitioning clearpart --all --initlabel --disklabel=gpt reqpart --add-boot part / --grow --fstype xfs customize this to include your own bootc container ostreecontainer --url quay.io/<your-user-name>/nvidia-bootc:latest services can also be customized via Kickstart firewall --disabled services --enabled=sshd optionally add a user user --name=cloud-user --groups=wheel --plaintext --password <password> sshkey --username cloud-user \"ssh-ed25519 AAAAC3Nza.....\" if desired, inject an SSH key for root rootpw --iscrypted locked sshkey --username root \"ssh-ed25519 AAAAC3Nza...\" reboot", "mkksiso rhelai-bootc.ks <downloaded-iso-image> rhelai-bootc-ks.iso", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.1/html/installing/installing_bare_metal
Chapter 11. EndpointSlice [discovery.k8s.io/v1]
Chapter 11. EndpointSlice [discovery.k8s.io/v1] Description EndpointSlice represents a subset of the endpoints that implement a service. For a given service there may be multiple EndpointSlice objects, selected by labels, which must be joined to produce the full set of endpoints. Type object Required addressType endpoints 11.1. Specification Property Type Description addressType string addressType specifies the type of address carried by this EndpointSlice. All addresses in this slice must be the same type. This field is immutable after creation. The following address types are currently supported: * IPv4: Represents an IPv4 Address. * IPv6: Represents an IPv6 Address. * FQDN: Represents a Fully Qualified Domain Name. Possible enum values: - "FQDN" represents a FQDN. - "IPv4" represents an IPv4 Address. - "IPv6" represents an IPv6 Address. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources endpoints array endpoints is a list of unique endpoints in this slice. Each slice may include a maximum of 1000 endpoints. endpoints[] object Endpoint represents a single logical "backend" implementing a service. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. ports array ports specifies the list of network ports exposed by each endpoint in this slice. Each port must have a unique name. When ports is empty, it indicates that there are no defined ports. When a port is defined with a nil port value, it indicates "all ports". Each slice may include a maximum of 100 ports. ports[] object EndpointPort represents a Port used by an EndpointSlice 11.1.1. .endpoints Description endpoints is a list of unique endpoints in this slice. Each slice may include a maximum of 1000 endpoints. Type array 11.1.2. .endpoints[] Description Endpoint represents a single logical "backend" implementing a service. Type object Required addresses Property Type Description addresses array (string) addresses of this endpoint. The contents of this field are interpreted according to the corresponding EndpointSlice addressType field. Consumers must handle different types of addresses in the context of their own capabilities. This must contain at least one address but no more than 100. These are all assumed to be fungible and clients may choose to only use the first element. Refer to: https://issue.k8s.io/106267 conditions object EndpointConditions represents the current condition of an endpoint. deprecatedTopology object (string) deprecatedTopology contains topology information part of the v1beta1 API. This field is deprecated, and will be removed when the v1beta1 API is removed (no sooner than kubernetes v1.24). While this field can hold values, it is not writable through the v1 API, and any attempts to write to it will be silently ignored. Topology information can be found in the zone and nodeName fields instead. hints object EndpointHints provides hints describing how an endpoint should be consumed. hostname string hostname of this endpoint. 
This field may be used by consumers of endpoints to distinguish endpoints from each other (e.g. in DNS names). Multiple endpoints which use the same hostname should be considered fungible (e.g. multiple A values in DNS). Must be lowercase and pass DNS Label (RFC 1123) validation. nodeName string nodeName represents the name of the Node hosting this endpoint. This can be used to determine endpoints local to a Node. targetRef ObjectReference targetRef is a reference to a Kubernetes object that represents this endpoint. zone string zone is the name of the Zone this endpoint exists in. 11.1.3. .endpoints[].conditions Description EndpointConditions represents the current condition of an endpoint. Type object Property Type Description ready boolean ready indicates that this endpoint is prepared to receive traffic, according to whatever system is managing the endpoint. A nil value indicates an unknown state. In most cases consumers should interpret this unknown state as ready. For compatibility reasons, ready should never be "true" for terminating endpoints, except when the normal readiness behavior is being explicitly overridden, for example when the associated Service has set the publishNotReadyAddresses flag. serving boolean serving is identical to ready except that it is set regardless of the terminating state of endpoints. This condition should be set to true for a ready endpoint that is terminating. If nil, consumers should defer to the ready condition. terminating boolean terminating indicates that this endpoint is terminating. A nil value indicates an unknown state. Consumers should interpret this unknown state to mean that the endpoint is not terminating. 11.1.4. .endpoints[].hints Description EndpointHints provides hints describing how an endpoint should be consumed. Type object Property Type Description forZones array forZones indicates the zone(s) this endpoint should be consumed by to enable topology aware routing. forZones[] object ForZone provides information about which zones should consume this endpoint. 11.1.5. .endpoints[].hints.forZones Description forZones indicates the zone(s) this endpoint should be consumed by to enable topology aware routing. Type array 11.1.6. .endpoints[].hints.forZones[] Description ForZone provides information about which zones should consume this endpoint. Type object Required name Property Type Description name string name represents the name of the zone. 11.1.7. .ports Description ports specifies the list of network ports exposed by each endpoint in this slice. Each port must have a unique name. When ports is empty, it indicates that there are no defined ports. When a port is defined with a nil port value, it indicates "all ports". Each slice may include a maximum of 100 ports. Type array 11.1.8. .ports[] Description EndpointPort represents a Port used by an EndpointSlice Type object Property Type Description appProtocol string The application protocol for this port. This is used as a hint for implementations to offer richer behavior for protocols that they understand. This field follows standard Kubernetes label syntax. Valid values are either: * Un-prefixed protocol names - reserved for IANA standard service names (as per RFC-6335 and https://www.iana.org/assignments/service-names ). 
* Kubernetes-defined prefixed names: * 'kubernetes.io/h2c' - HTTP/2 prior knowledge over cleartext as described in https://www.rfc-editor.org/rfc/rfc9113.html#name-starting-http-2-with-prior- * 'kubernetes.io/ws' - WebSocket over cleartext as described in https://www.rfc-editor.org/rfc/rfc6455 * 'kubernetes.io/wss' - WebSocket over TLS as described in https://www.rfc-editor.org/rfc/rfc6455 * Other protocols should use implementation-defined prefixed names such as mycompany.com/my-custom-protocol. name string name represents the name of this port. All ports in an EndpointSlice must have a unique name. If the EndpointSlice is derived from a Kubernetes service, this corresponds to the Service.ports[].name. Name must either be an empty string or pass DNS_LABEL validation: * must be no more than 63 characters long. * must consist of lower case alphanumeric characters or '-'. * must start and end with an alphanumeric character. Default is empty string. port integer port represents the port number of the endpoint. If this is not specified, ports are not restricted and must be interpreted in the context of the specific consumer. protocol string protocol represents the IP protocol for this port. Must be UDP, TCP, or SCTP. Default is TCP. Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. 11.2. API endpoints The following API endpoints are available: /apis/discovery.k8s.io/v1/endpointslices GET : list or watch objects of kind EndpointSlice /apis/discovery.k8s.io/v1/watch/endpointslices GET : watch individual changes to a list of EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead. /apis/discovery.k8s.io/v1/namespaces/{namespace}/endpointslices DELETE : delete collection of EndpointSlice GET : list or watch objects of kind EndpointSlice POST : create an EndpointSlice /apis/discovery.k8s.io/v1/watch/namespaces/{namespace}/endpointslices GET : watch individual changes to a list of EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead. /apis/discovery.k8s.io/v1/namespaces/{namespace}/endpointslices/{name} DELETE : delete an EndpointSlice GET : read the specified EndpointSlice PATCH : partially update the specified EndpointSlice PUT : replace the specified EndpointSlice /apis/discovery.k8s.io/v1/watch/namespaces/{namespace}/endpointslices/{name} GET : watch changes to an object of kind EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 11.2.1. /apis/discovery.k8s.io/v1/endpointslices HTTP method GET Description list or watch objects of kind EndpointSlice Table 11.1. HTTP responses HTTP code Reponse body 200 - OK EndpointSliceList schema 401 - Unauthorized Empty 11.2.2. /apis/discovery.k8s.io/v1/watch/endpointslices HTTP method GET Description watch individual changes to a list of EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead. Table 11.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 11.2.3. /apis/discovery.k8s.io/v1/namespaces/{namespace}/endpointslices HTTP method DELETE Description delete collection of EndpointSlice Table 11.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed Table 11.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind EndpointSlice Table 11.5. HTTP responses HTTP code Reponse body 200 - OK EndpointSliceList schema 401 - Unauthorized Empty HTTP method POST Description create an EndpointSlice Table 11.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.7. Body parameters Parameter Type Description body EndpointSlice schema Table 11.8. HTTP responses HTTP code Reponse body 200 - OK EndpointSlice schema 201 - Created EndpointSlice schema 202 - Accepted EndpointSlice schema 401 - Unauthorized Empty 11.2.4. /apis/discovery.k8s.io/v1/watch/namespaces/{namespace}/endpointslices HTTP method GET Description watch individual changes to a list of EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead. Table 11.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 11.2.5. /apis/discovery.k8s.io/v1/namespaces/{namespace}/endpointslices/{name} Table 11.10. Global path parameters Parameter Type Description name string name of the EndpointSlice HTTP method DELETE Description delete an EndpointSlice Table 11.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 11.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EndpointSlice Table 11.13. HTTP responses HTTP code Reponse body 200 - OK EndpointSlice schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EndpointSlice Table 11.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.15. HTTP responses HTTP code Reponse body 200 - OK EndpointSlice schema 201 - Created EndpointSlice schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EndpointSlice Table 11.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.17. Body parameters Parameter Type Description body EndpointSlice schema Table 11.18. HTTP responses HTTP code Reponse body 200 - OK EndpointSlice schema 201 - Created EndpointSlice schema 401 - Unauthorized Empty 11.2.6. /apis/discovery.k8s.io/v1/watch/namespaces/{namespace}/endpointslices/{name} Table 11.19. Global path parameters Parameter Type Description name string name of the EndpointSlice HTTP method GET Description watch changes to an object of kind EndpointSlice. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 11.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
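For day-to-day inspection, the same resource can also be read with the CLI instead of raw REST calls. A brief sketch; the namespace, Service, and EndpointSlice names are placeholders:

oc get endpointslices -n <namespace> -l kubernetes.io/service-name=<service-name>   # list the slices backing a Service
oc describe endpointslice <endpointslice-name> -n <namespace>                       # show endpoints, conditions, and ports
oc get endpointslices -n <namespace> --watch                                        # watch via the list endpoint, as recommended above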
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_apis/endpointslice-discovery-k8s-io-v1
Chapter 2. OpenShift Container Platform overview
Chapter 2. OpenShift Container Platform overview OpenShift Container Platform is a cloud-based Kubernetes container platform. The foundation of OpenShift Container Platform is based on Kubernetes and therefore shares the same technology. It is designed to allow applications and the data centers that support them to expand from just a few machines and applications to thousands of machines that serve millions of clients. OpenShift Container Platform enables you to do the following: Provide developers and IT organizations with cloud application platforms that can be used for deploying applications on secure and scalable resources. Require minimal configuration and management overhead. Bring the Kubernetes platform to customer data centers and cloud. Meet security, privacy, compliance, and governance requirements. With its foundation in Kubernetes, OpenShift Container Platform incorporates the same technology that serves as the engine for massive telecommunications, streaming video, gaming, banking, and other applications. Its implementation in open Red Hat technologies lets you extend your containerized applications beyond a single cloud to on-premise and multi-cloud environments. 2.1. Glossary of common terms for OpenShift Container Platform This glossary defines common Kubernetes and OpenShift Container Platform terms. These terms help you orient yourself with the content and other parts of the documentation. Kubernetes Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. Containers Containers are application instances and components that run in OCI-compliant containers on the worker nodes. A container is the runtime of an Open Container Initiative (OCI)-compliant image. An image is a binary application. A worker node can run many containers. A node capacity is related to memory and CPU capabilities of the underlying resources whether they are cloud, hardware, or virtualized. Pod A pod is one or more containers deployed together on one host. It consists of a colocated group of containers with shared resources such as volumes and IP addresses. A pod is also the smallest compute unit defined, deployed, and managed. In OpenShift Container Platform, pods replace individual application containers as the smallest deployable unit. Pods are the orchestrated unit in OpenShift Container Platform. OpenShift Container Platform schedules and runs all containers in a pod on the same node. Complex applications are made up of many pods, each with their own containers. They interact externally and also with another inside the OpenShift Container Platform environment. Replica set and replication controller The Kubernetes replica set and the OpenShift Container Platform replication controller are both available. The job of this component is to ensure the specified number of pod replicas are running at all times. If pods exit or are deleted, the replica set or replication controller starts more. If more pods are running than needed, the replica set deletes as many as necessary to match the specified number of replicas. Deployment and DeploymentConfig OpenShift Container Platform implements both Kubernetes Deployment objects and OpenShift Container Platform DeploymentConfigs objects. Users may select either. Deployment objects control how an application is rolled out as pods. They identify the name of the container image to be taken from the registry and deployed as a pod on a node. 
They set the number of replicas of the pod to deploy, creating a replica set to manage the process. The labels indicated instruct the scheduler onto which nodes to deploy the pod. The set of labels is included in the pod definition that the replica set instantiates. Deployment objects are able to update the pods deployed onto the worker nodes based on the version of the Deployment objects and the various rollout strategies for managing acceptable application availability. OpenShift Container Platform DeploymentConfig objects add the additional features of change triggers, which are able to automatically create new versions of the Deployment objects as new versions of the container image are available, or other changes. Service A service defines a logical set of pods and access policies. It provides permanent internal IP addresses and hostnames for other applications to use as pods are created and destroyed. Service layers connect application components together. For example, a front-end web service connects to a database instance by communicating with its service. Services allow for simple internal load balancing across application components. OpenShift Container Platform automatically injects service information into running containers for ease of discovery. Route A route is a way to expose a service by giving it an externally reachable hostname, such as www.example.com. Each route consists of a route name, a service selector, and optionally a security configuration. A router can consume a defined route and the endpoints identified by its service to provide a name that lets external clients reach your applications. While it is easy to deploy a complete multi-tier application, traffic from anywhere outside the OpenShift Container Platform environment cannot reach the application without the routing layer. Build A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process. OpenShift Container Platform leverages Kubernetes by creating containers from build images and pushing them to the integrated registry. Project OpenShift Container Platform uses projects to allow groups of users or developers to work together, serving as the unit of isolation and collaboration. It defines the scope of resources, allows project administrators and collaborators to manage resources, and restricts and tracks the user's resources with quotas and limits. A project is a Kubernetes namespace with additional annotations. It is the central vehicle for managing access to resources for regular users. A project lets a community of users organize and manage their content in isolation from other communities. Users must receive access to projects from administrators. But cluster administrators can allow developers to create their own projects, in which case users automatically have access to their own projects. Each project has its own set of objects, policies, constraints, and service accounts. Projects are also known as namespaces. Operators An Operator is a Kubernetes-native application. The goal of an Operator is to put operational knowledge into software. Previously this knowledge only resided in the minds of administrators, various combinations or shell scripts or automation software such as Ansible. It was outside your Kubernetes cluster and hard to integrate. With Operators, all of this changes. 
Operators are purpose-built for your applications. They implement and automate common Day 1 activities such as installation and configuration as well as Day 2 activities such as scaling up and down, reconfiguration, updates, backups, failovers, and restores in a piece of software running inside your Kubernetes cluster by integrating natively with Kubernetes concepts and APIs. This is called a Kubernetes-native application. With Operators, applications must not be treated as a collection of primitives, such as pods, deployments, services, or config maps. Instead, Operators should be treated as a single object that exposes the options that make sense for the application. 2.2. Understanding OpenShift Container Platform OpenShift Container Platform is a Kubernetes environment for managing the lifecycle of container-based applications and their dependencies on various computing platforms, such as bare metal, virtualized, on-premise, and in cloud. OpenShift Container Platform deploys, configures and manages containers. OpenShift Container Platform offers usability, stability, and customization of its components. OpenShift Container Platform utilizes a number of computing resources, known as nodes. A node has a lightweight, secure operating system based on Red Hat Enterprise Linux (RHEL), known as Red Hat Enterprise Linux CoreOS (RHCOS). After a node is booted and configured, it obtains a container runtime, such as CRI-O or Docker, for managing and running the images of container workloads scheduled to it. The Kubernetes agent, or kubelet, schedules container workloads on the node. The kubelet is responsible for registering the node with the cluster and receiving the details of container workloads. OpenShift Container Platform configures and manages the networking, load balancing and routing of the cluster. OpenShift Container Platform adds cluster services for monitoring the cluster health and performance, logging, and for managing upgrades. The container image registry and OperatorHub provide Red Hat certified products and community-built software for providing various application services within the cluster. These applications and services manage the applications deployed in the cluster, databases, frontends and user interfaces, application runtimes and business automation, and developer services for development and testing of container applications. You can manage applications within the cluster either manually by configuring deployments of containers running from pre-built images or through resources known as Operators. You can build custom images from pre-built images and source code, and store these custom images locally in an internal, private or public registry. The Multicluster Management layer can manage multiple clusters including their deployment, configuration, compliance and distribution of workloads in a single console. 2.3. Installing OpenShift Container Platform The OpenShift Container Platform installation program offers you flexibility. You can use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains or deploy a cluster on infrastructure that you prepare and maintain.
For more information about the installation process, the supported platforms, and choosing a method of installing and preparing your cluster, see the following: OpenShift Container Platform installation overview Installation process Supported platforms for OpenShift Container Platform clusters Selecting a cluster installation type 2.3.1. OpenShift Local overview OpenShift Local supports rapid application development to get started building OpenShift Container Platform clusters. OpenShift Local is designed to run on a local computer to simplify setup and testing, and to emulate the cloud development environment locally with all of the tools needed to develop container-based applications. Regardless of the programming language you use, OpenShift Local hosts your application and brings a minimal, preconfigured Red Hat OpenShift Container Platform cluster to your local PC without the need for a server-based infrastructure. On a hosted environment, OpenShift Local can create microservices, convert them into images, and run them in Kubernetes-hosted containers directly on your laptop or desktop running Linux, macOS, or Windows 10 or later. For more information about OpenShift Local, see Red Hat OpenShift Local Overview . 2.4. Steps 2.4.1. For developers Develop and deploy containerized applications with OpenShift Container Platform. OpenShift Container Platform is a platform for developing and deploying containerized applications. OpenShift Container Platform documentation helps you: Understand OpenShift Container Platform development : Learn the different types of containerized applications, from simple containers to advanced Kubernetes deployments and Operators. Work with projects : Create projects from the OpenShift Container Platform web console or OpenShift CLI ( oc ) to organize and share the software you develop. Work with applications : Use the Developer perspective in the OpenShift Container Platform web console to create and deploy applications . Use the Topology view to see your applications, monitor status, connect and group components, and modify your code base. Use the developer CLI tool ( odo ) : The odo CLI tool lets developers create single or multi-component applications and automates deployment, build, and service route configurations. It abstracts complex Kubernetes and OpenShift Container Platform concepts, allowing you to focus on developing your applications. Create CI/CD Pipelines : Pipelines are serverless, cloud-native, continuous integration, and continuous deployment systems that run in isolated containers. They use standard Tekton custom resources to automate deployments and are designed for decentralized teams working on microservices-based architecture. Deploy Helm charts : Helm 3 is a package manager that helps developers define, install, and update application packages on Kubernetes. A Helm chart is a packaging format that describes an application that can be deployed using the Helm CLI. Understand image builds : Choose from different build strategies (Docker, S2I, custom, and pipeline) that can include different kinds of source materials (Git repositories, local binary inputs, and external artifacts). Then, follow examples of build types from basic builds to advanced builds. Create container images : A container image is the most basic building block in OpenShift Container Platform (and Kubernetes) applications. Defining image streams lets you gather multiple versions of an image in one place as you continue its development. 
S2I containers let you insert your source code into a base container that is set up to run code of a particular type, such as Ruby, Node.js, or Python. Create deployments : Use Deployment and DeploymentConfig objects to exert fine-grained management over applications. Manage deployments using the Workloads page or OpenShift CLI ( oc ). Learn rolling, recreate, and custom deployment strategies. Create templates : Use existing templates or create your own templates that describe how an application is built or deployed. A template can combine images with descriptions, parameters, replicas, exposed ports and other content that defines how an application can be run or built. Understand Operators : Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.10. Learn about the Operator Framework and how to deploy applications using installed Operators into your projects. Develop Operators : Operators are the preferred method for creating on-cluster applications for OpenShift Container Platform 4.10. Learn the workflow for building, testing, and deploying Operators. Then, create your own Operators based on Ansible or Helm , or configure built-in Prometheus monitoring using the Operator SDK. REST API reference : Learn about OpenShift Container Platform application programming interface endpoints. 2.4.2. For administrators Understand OpenShift Container Platform management : Learn about components of the OpenShift Container Platform 4.10 control plane. See how OpenShift Container Platform control plane and worker nodes are managed and updated through the Machine API and Operators . Manage users and groups : Add users and groups with different levels of permissions to use or modify clusters. Manage authentication : Learn how user, group, and API authentication works in OpenShift Container Platform. OpenShift Container Platform supports multiple identity providers. Manage networking : The cluster network in OpenShift Container Platform is managed by the Cluster Network Operator (CNO). The CNO uses iptables rules in kube-proxy to direct traffic between nodes and pods running on those nodes. The Multus Container Network Interface adds the capability to attach multiple network interfaces to a pod. Using network policy features, you can isolate your pods or permit selected traffic. Manage storage : OpenShift Container Platform allows cluster administrators to configure persistent storage. Manage Operators : Lists of Red Hat, ISV, and community Operators can be reviewed by cluster administrators and installed on their clusters . After you install them, you can run , upgrade , back up, or otherwise manage the Operator on your cluster. Use custom resource definitions (CRDs) to modify the cluster : Cluster features implemented with Operators can be modified with CRDs. Learn to create a CRD and manage resources from CRDs . Set resource quotas : Choose from CPU, memory, and other system resources to set quotas . Prune and reclaim resources : Reclaim space by pruning unneeded Operators, groups, deployments, builds, images, registries, and cron jobs. Scale and tune clusters : Set cluster limits, tune nodes, scale cluster monitoring, and optimize networking, storage, and routes for your environment. Understanding the OpenShift Update Service : Learn about installing and managing a local OpenShift Update Service for recommending OpenShift Container Platform updates in disconnected environments. Monitor clusters : Learn to configure the monitoring stack . 
After configuring monitoring, use the web console to access monitoring dashboards . In addition to infrastructure metrics, you can also scrape and view metrics for your own services. Remote health monitoring : OpenShift Container Platform collects anonymized aggregated information about your cluster. Using Telemetry and the Insights Operator, this data is received by Red Hat and used to improve OpenShift Container Platform. You can view the data collected by remote health monitoring .
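To connect the glossary terms above to day-to-day usage, the following minimal CLI walk-through creates a project, deploys an application, and exposes it through a route; all names and the image reference are placeholders:

oc new-project my-project                    # project (namespace) for the application
oc new-app quay.io/<user>/<image>            # creates a Deployment and a Service from the image
oc expose service/<service-name>             # creates a Route with an externally reachable hostname
oc get pods,svc,routes                       # inspect the resulting pods, service, and route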
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/getting_started/openshift-overview
Chapter 5. TemplateInstance [template.openshift.io/v1]
Chapter 5. TemplateInstance [template.openshift.io/v1] Description TemplateInstance requests and records the instantiation of a Template. TemplateInstance is part of an experimental API. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta spec object TemplateInstanceSpec describes the desired state of a TemplateInstance. status object TemplateInstanceStatus describes the current state of a TemplateInstance. 5.1.1. .spec Description TemplateInstanceSpec describes the desired state of a TemplateInstance. Type object Required template Property Type Description requester object TemplateInstanceRequester holds the identity of an agent requesting a template instantiation. secret LocalObjectReference secret is a reference to a Secret object containing the necessary template parameters. template object Template contains the inputs needed to produce a Config. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 5.1.2. .spec.requester Description TemplateInstanceRequester holds the identity of an agent requesting a template instantiation. Type object Property Type Description extra object extra holds additional information provided by the authenticator. extra{} array (string) groups array (string) groups represent the groups this user is a part of. uid string uid is a unique value that identifies this user across time; if this user is deleted and another user by the same name is added, they will have different UIDs. username string username uniquely identifies this user among all active users. 5.1.3. .spec.requester.extra Description extra holds additional information provided by the authenticator. Type object 5.1.4. .spec.template Description Template contains the inputs needed to produce a Config. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required objects Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds labels object (string) labels is a optional set of labels that are applied to every object during the Template to Config transformation. 
message string message is an optional instructional message that will be displayed when this template is instantiated. This field should inform the user how to utilize the newly created resources. Parameter substitution will be performed on the message before being displayed so that generated credentials and other parameters can be included in the output. metadata ObjectMeta objects array (RawExtension) objects is an array of resources to include in this template. If a namespace value is hardcoded in the object, it will be removed during template instantiation, however if the namespace value is, or contains, a USD{PARAMETER_REFERENCE}, the resolved value after parameter substitution will be respected and the object will be created in that namespace. parameters array parameters is an optional array of Parameters used during the Template to Config transformation. parameters[] object Parameter defines a name/value variable that is to be processed during the Template to Config transformation. 5.1.5. .spec.template.parameters Description parameters is an optional array of Parameters used during the Template to Config transformation. Type array 5.1.6. .spec.template.parameters[] Description Parameter defines a name/value variable that is to be processed during the Template to Config transformation. Type object Required name Property Type Description description string Description of a parameter. Optional. displayName string Optional: The name that will show in UI instead of parameter 'Name' from string From is an input value for the generator. Optional. generate string generate specifies the generator to be used to generate random string from an input value specified by From field. The result string is stored into Value field. If empty, no generator is being used, leaving the result Value untouched. Optional. The only supported generator is "expression", which accepts a "from" value in the form of a simple regular expression containing the range expression "[a-zA-Z0-9]", and the length expression "a{length}". Examples: from | value ----------------------------- "test[0-9]{1}x" | "test7x" "[0-1]{8}" | "01001100" "0x[A-F0-9]{4}" | "0xB3AF" "[a-zA-Z0-9]{8}" | "hW4yQU5i" name string Name must be set and it can be referenced in Template Items using USD{PARAMETER_NAME}. Required. required boolean Optional: Indicates the parameter must have a value. Defaults to false. value string Value holds the Parameter data. If specified, the generator will be ignored. The value replaces all occurrences of the Parameter USD{Name} expression during the Template to Config transformation. Optional. 5.1.7. .status Description TemplateInstanceStatus describes the current state of a TemplateInstance. Type object Property Type Description conditions array conditions represent the latest available observations of a TemplateInstance's current state. conditions[] object TemplateInstanceCondition contains condition information for a TemplateInstance. objects array Objects references the objects created by the TemplateInstance. objects[] object TemplateInstanceObject references an object created by a TemplateInstance. 5.1.8. .status.conditions Description conditions represent the latest available observations of a TemplateInstance's current state. Type array 5.1.9. .status.conditions[] Description TemplateInstanceCondition contains condition information for a TemplateInstance. 
Type object Required type status lastTransitionTime reason message Property Type Description lastTransitionTime Time LastTransitionTime is the last time a condition status transitioned from one state to another. message string Message is a human readable description of the details of the last transition, complementing reason. reason string Reason is a brief machine readable explanation for the condition's last transition. status string Status of the condition, one of True, False or Unknown. type string Type of the condition, currently Ready or InstantiateFailure. 5.1.10. .status.objects Description Objects references the objects created by the TemplateInstance. Type array 5.1.11. .status.objects[] Description TemplateInstanceObject references an object created by a TemplateInstance. Type object Property Type Description ref ObjectReference ref is a reference to the created object. When used under .spec, only name and namespace are used; these can contain references to parameters which will be substituted following the usual rules. 5.2. API endpoints The following API endpoints are available: /apis/template.openshift.io/v1/templateinstances GET : list or watch objects of kind TemplateInstance /apis/template.openshift.io/v1/watch/templateinstances GET : watch individual changes to a list of TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances DELETE : delete collection of TemplateInstance GET : list or watch objects of kind TemplateInstance POST : create a TemplateInstance /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templateinstances GET : watch individual changes to a list of TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances/{name} DELETE : delete a TemplateInstance GET : read the specified TemplateInstance PATCH : partially update the specified TemplateInstance PUT : replace the specified TemplateInstance /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templateinstances/{name} GET : watch changes to an object of kind TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances/{name}/status GET : read status of the specified TemplateInstance PATCH : partially update status of the specified TemplateInstance PUT : replace status of the specified TemplateInstance 5.2.1. /apis/template.openshift.io/v1/templateinstances Table 5.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind TemplateInstance Table 5.2. 
HTTP responses HTTP code Reponse body 200 - OK TemplateInstanceList schema 401 - Unauthorized Empty 5.2.2. /apis/template.openshift.io/v1/watch/templateinstances Table 5.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. Table 5.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances Table 5.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of TemplateInstance Table 5.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 5.8. Body parameters Parameter Type Description body DeleteOptions schema Table 5.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind TemplateInstance Table 5.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK TemplateInstanceList schema 401 - Unauthorized Empty HTTP method POST Description create a TemplateInstance Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body TemplateInstance schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 201 - Created TemplateInstance schema 202 - Accepted TemplateInstance schema 401 - Unauthorized Empty 5.2.4. /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templateinstances Table 5.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead. Table 5.17. 
HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances/{name} Table 5.18. Global path parameters Parameter Type Description name string name of the TemplateInstance namespace string object name and auth scope, such as for teams and projects Table 5.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a TemplateInstance Table 5.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.21. Body parameters Parameter Type Description body DeleteOptions schema Table 5.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified TemplateInstance Table 5.23. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified TemplateInstance Table 5.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.25. Body parameters Parameter Type Description body Patch schema Table 5.26. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 201 - Created TemplateInstance schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified TemplateInstance Table 5.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.28. Body parameters Parameter Type Description body TemplateInstance schema Table 5.29. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 201 - Created TemplateInstance schema 401 - Unauthorized Empty 5.2.6. /apis/template.openshift.io/v1/watch/namespaces/{namespace}/templateinstances/{name} Table 5.30. 
Global path parameters Parameter Type Description name string name of the TemplateInstance namespace string object name and auth scope, such as for teams and projects Table 5.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind TemplateInstance. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.7. /apis/template.openshift.io/v1/namespaces/{namespace}/templateinstances/{name}/status Table 5.33. Global path parameters Parameter Type Description name string name of the TemplateInstance namespace string object name and auth scope, such as for teams and projects Table 5.34. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified TemplateInstance Table 5.35. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified TemplateInstance Table 5.36. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.37. Body parameters Parameter Type Description body Patch schema Table 5.38. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 201 - Created TemplateInstance schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified TemplateInstance Table 5.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.40. Body parameters Parameter Type Description body TemplateInstance schema Table 5.41. HTTP responses HTTP code Reponse body 200 - OK TemplateInstance schema 201 - Created TemplateInstance schema 401 - Unauthorized Empty
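The endpoints listed above are typically exercised through the oc client rather than called directly. The following is a minimal, hedged sketch only; the project name my-project and the instance name my-instance are illustrative placeholders, not values taken from this reference.
oc get templateinstances -n my-project                                                # GET on the namespaced list endpoint
oc get templateinstance my-instance -n my-project -o yaml                             # read a single TemplateInstance, including its status conditions
oc get --raw /apis/template.openshift.io/v1/namespaces/my-project/templateinstances   # call the REST list endpoint directly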
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/template_apis/templateinstance-template-openshift-io-v1
Chapter 12. IdM Directory Server RFC support
Chapter 12. IdM Directory Server RFC support The Directory Server component in Identity Management (IdM) supports many LDAP-related Requests for Comments (RFCs). Additional resources Directory Server RFC support Planning and designing Directory Server
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/planning_identity_management/ref_idm-directory-server-rfc-support_planning-identity-management
Configuring basic system settings
Configuring basic system settings Red Hat Enterprise Linux 8 Set up the essential functions of your system and customize your system environment Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/index
Chapter 12. Clustered Samba Configuration
Chapter 12. Clustered Samba Configuration As of the Red Hat Enterprise Linux 6.2 release, the Red Hat High Availability Add-On provides support for running Clustered Samba in an active/active configuration. This requires that you install and configure CTDB on all nodes in a cluster, which you use in conjunction with GFS2 clustered file systems. Note Red Hat Enterprise Linux 6 supports a maximum of four nodes running clustered Samba. This chapter describes the procedure for configuring CTDB by configuring an example system. For information on configuring GFS2 file systems, see Global File System 2 . For information on configuring logical volumes, see Logical Volume Manager Administration . Note Simultaneous access to the data in the Samba share from outside of Samba is not supported. 12.1. CTDB Overview CTDB is a cluster implementation of the TDB database used by Samba. To use CTDB, a clustered file system must be available and shared on all nodes in the cluster. CTDB provides clustered features on top of this clustered file system. As of the Red Hat Enterprise Linux 6.2 release, CTDB also runs a cluster stack in parallel to the one provided by Red Hat Enterprise Linux clustering. CTDB manages node membership, recovery/failover, IP relocation and Samba services.
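As a brief illustration of the node membership and IP relocation that CTDB manages, the ctdb command-line tool can report cluster state once CTDB is configured. This is a hedged sketch only; it assumes a working CTDB setup, and the output varies by cluster.
ctdb status    # show node membership and the overall recovery state
ctdb ip        # show which node currently hosts each public IP address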
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-clustered-samba-ca
Chapter 2. Ceph block devices
Chapter 2. Ceph block devices As a storage administrator, being familiar with Ceph's block device commands can help you effectively manage the Red Hat Ceph Storage cluster. You can create and manage block devices pools and images, along with enabling and disabling the various features of Ceph block devices. Prerequisites A running Red Hat Ceph Storage cluster. 2.1. Displaying the command help Display command, and sub-command online help from the command-line interface. Note The -h option still displays help for all available commands. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure Use the rbd help command to display help for a particular rbd command and its subcommand: Syntax To display help for the snap list command: 2.2. Creating a block device pool Before using the block device client, ensure a pool for rbd exists, is enabled and initialized. Note You MUST create a pool first before you can specify it as a source. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To create an rbd pool, execute the following: Syntax Example Additional Resources See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for additional details. 2.3. Creating a block device image Before adding a block device to a node, create an image for it in the Ceph storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To create a block device image, execute the following command: Syntax Example This example creates a 1 GB image named image1 that stores information in a pool named pool1 . Note Ensure the pool exists before creating an image. Additional Resources See the Creating a block device pool section in the Red Hat Ceph Storage Block Device Guide for additional details. 2.4. Listing the block device images List the block device images. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To list block devices in the rbd pool, execute the following command: Note rbd is the default pool name. Example To list block devices in a specific pool: Syntax Example 2.5. Retrieving the block device image information Retrieve information on the block device image. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To retrieve information from a particular image in the default rbd pool, run the following command: Syntax Example To retrieve information from an image within a pool: Syntax Example 2.6. Resizing a block device image Ceph block device images are thin-provisioned. They do not actually use any physical storage until you begin saving data to them. However, they do have a maximum capacity that you set with the --size option. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To increase or decrease the maximum size of a Ceph block device image for the default rbd pool: Syntax Example To increase or decrease the maximum size of a Ceph block device image for a specific pool: Syntax Example 2.7. Removing a block device image Remove a block device image. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To remove a block device from the default rbd pool: Syntax Example To remove a block device from a specific pool: Syntax Example 2.8. 
Moving a block device image to the trash RADOS Block Device (RBD) images can be moved to the trash using the rbd trash command. This command provides more options than the rbd rm command. Once an image is moved to the trash, it can be removed from the trash at a later time. This helps to avoid accidental deletion. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To move an image to the trash, execute the following: Syntax Example Once an image is in the trash, a unique image ID is assigned. Note You need this image ID to specify the image later if you need to use any of the trash options. Execute the rbd trash list POOL_NAME command to list the IDs of the images in the trash. This command also returns the image's pre-deletion name. In addition, there is an optional --image-id argument that can be used with the rbd info and rbd snap commands. Use --image-id with the rbd info command to see the properties of an image in the trash, and with rbd snap to remove an image's snapshots from the trash. To remove an image from the trash, execute the following: Syntax Example Important Once an image is removed from the trash, it cannot be restored. Execute the rbd trash restore command to restore the image: Syntax Example To remove all expired images from the trash: Syntax Example 2.9. Defining an automatic trash purge schedule You can schedule periodic trash purge operations on a pool. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To add a trash purge schedule, execute: Syntax Example To list the trash purge schedule, execute: Syntax Example To view the status of the trash purge schedule, execute: Example To remove the trash purge schedule, execute: Syntax Example 2.10. Enabling and disabling image features Block device image features, such as fast-diff, exclusive-lock, object-map, or deep-flatten, are enabled by default. You can enable or disable these image features on already existing images. Note The deep-flatten feature can only be disabled on already existing images, not enabled. To use deep-flatten, enable it when creating images. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure Retrieve information from a particular image in a pool: Syntax Example Enable a feature: Syntax To enable the exclusive-lock feature on the image1 image in the pool1 pool: Example Important If you enable the fast-diff and object-map features, then rebuild the object map: Syntax Disable a feature: Syntax To disable the fast-diff feature on the image1 image in the pool1 pool: Example 2.11. Working with image metadata Ceph supports adding custom image metadata as key-value pairs. The pairs do not have any strict format. Also, by using metadata, you can set the RADOS Block Device (RBD) configuration parameters for particular images. Use the rbd image-meta commands to work with metadata. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To set a new metadata key-value pair: Syntax Example This example sets the last_update key to the 2021-06-06 value on the image1 image in the pool1 pool. To view the value of a key: Syntax Example This example views the value of the last_update key. To show all metadata on an image: Syntax Example This example lists the metadata set for the image1 image in the pool1 pool.
To remove a metadata key-value pair: Syntax Example This example removes the last_update key-value pair from the image1 image in the pool1 pool. To override the RBD image configuration settings set in the Ceph configuration file for a particular image: Syntax Example This example disables the RBD cache for the image1 image in the pool1 pool. Additional Resources See the Block device general options section in the Red Hat Ceph Storage Block Device Guide for a list of possible configuration options. 2.12. Moving images between pools You can move RADOS Block Device (RBD) images between different pools within the same cluster. During this process, the source image is copied to the target image with all snapshot history and optionally with link to the source image's parent to help preserve sparseness. The source image is read only, the target image is writable. The target image is linked to the source image while the migration is in progress. You can safely run this process in the background while the new target image is in use. However, stop all clients using the target image before the preparation step to ensure that clients using the image are updated to point to the new target image. Important The krbd kernel module does not support live migration at this time. Prerequisites Stop all clients that use the source image. Root-level access to the client node. Procedure Prepare for migration by creating the new target image that cross-links the source and target images: Syntax Replace: SOURCE_IMAGE with the name of the image to be moved. Use the POOL / IMAGE_NAME format. TARGET_IMAGE with the name of the new image. Use the POOL / IMAGE_NAME format. Example Verify the state of the new target image, which is supposed to be prepared : Syntax Example Optionally, restart the clients using the new target image name. Copy the source image to target image: Syntax Example Ensure that the migration is completed: Example Commit the migration by removing the cross-link between the source and target images, and this also removes the source image: Syntax Example If the source image is a parent of one or more clones, use the --force option after ensuring that the clone images are not in use: Example If you did not restart the clients after the preparation step, restart them using the new target image name. 2.13. Migrating pools You can migrate or copy RADOS Block Device (RBD) images. During this process, the source image is exported and then imported. Important Use this migration process if the workload contains only RBD images. No rados cppool images can exist in the workload. If rados cppool images exist in the workload, see Migrating a pool in the Storage Strategies Guide . Important While running the export and import commands, be sure that there is no active I/O in the related RBD images. It is recommended to take production down during this pool migration time. Prerequisites Stop all active I/O in the RBD images which are being exported and imported. Root-level access to the client node. Procedure Migrate the volume. Syntax Example If using the local drive for import or export is necessary, the commands can be divided, first exporting to a local drive and then importing the files to a new pool. Syntax Example 2.14. The rbdmap service The systemd unit file, rbdmap.service , is included with the ceph-common package. The rbdmap.service unit executes the rbdmap shell script. This script automates the mapping and unmapping of RADOS Block Devices (RBD) for one or more RBD images. 
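As an illustration of the live migration workflow described in this section, the following sketch strings the documented commands together into one pass; pool1/image1 and pool2/image2 are the placeholder source and target images used throughout the chapter, and the sequence assumes all clients of the source image have been stopped before the prepare step.

Example

# Prepare the migration by creating the cross-linked target image
rbd migration prepare pool1/image1 pool2/image2
# Confirm the target image reports the prepared state
rbd status pool2/image2
# Copy the source image to the target image in the background
rbd migration execute pool2/image2
# Once the state is executed, commit to remove the cross-link and the source image
rbd migration commit pool2/image2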
The script can be ran manually at any time, but the typical use case is to automatically mount RBD images at boot time, and unmount at shutdown. The script takes a single argument, which can be either map , for mounting or unmap , for unmounting RBD images. The script parses a configuration file, the default is /etc/ceph/rbdmap , but can be overridden using an environment variable called RBDMAPFILE . Each line of the configuration file corresponds to an RBD image. The format of the configuration file format is as follows: IMAGE_SPEC RBD_OPTS Where IMAGE_SPEC specifies the POOL_NAME / IMAGE_NAME , or just the IMAGE_NAME , in which case the POOL_NAME defaults to rbd . The RBD_OPTS is an optional list of options to be passed to the underlying rbd map command. These parameters and their values should be specified as a comma-separated string: OPT1 = VAL1 , OPT2 = VAL2 ,... , OPT_N = VAL_N This will cause the script to issue an rbd map command like the following: Syntax Note For options and values which contain commas or equality signs, a simple apostrophe can be used to prevent replacing them. When successful, the rbd map operation maps the image to a /dev/rbdX device, at which point a udev rule is triggered to create a friendly device name symlink, for example, /dev/rbd/ POOL_NAME / IMAGE_NAME , pointing to the real mapped device. For mounting or unmounting to succeed, the friendly device name must have a corresponding entry in /etc/fstab file. When writing /etc/fstab entries for RBD images, it is a good idea to specify the noauto or nofail mount option. This prevents the init system from trying to mount the device too early, before the device exists. Additional Resources See the rbd manpage for a full list of possible options. 2.15. Configuring the rbdmap service To automatically map and mount, or unmap and unmount, RADOS Block Devices (RBD) at boot time, or at shutdown respectively. Prerequisites Root-level access to the node doing the mounting. Installation of the ceph-common package. Procedure Open for editing the /etc/ceph/rbdmap configuration file. Add the RBD image or images to the configuration file: Example Save changes to the configuration file. Enable the RBD mapping service: Example Additional Resources See the The rbdmap service section of the Red Hat Ceph Storage Block Device Guide for more details on the RBD system service. 2.16. Persistent Write Log Cache In a Red Hat Ceph Storage cluster, Persistent Write Log (PWL) cache provides a persistent, fault-tolerant write-back cache for librbd-based RBD clients. PWL cache uses a log-ordered write-back design which maintains checkpoints internally so that writes that get flushed back to the cluster are always crash consistent. If the client cache is lost entirely, the disk image is still consistent but the data appears stale. You can use PWL cache with persistent memory (PMEM) or solid-state disks (SSD) as cache devices. For PMEM, the cache mode is replica write log (RWL) and for SSD, the cache mode is (SSD). Currently, PWL cache supports RWL and SSD modes and is disabled by default. Primary benefits of PWL cache are: PWL cache can provide high performance when the cache is not full. The larger the cache, the longer the duration of high performance. PWL cache provides persistence and is not much slower than RBD cache. RBD cache is faster but volatile and cannot guarantee data order and persistence. In a steady state, where the cache is full, performance is affected by the number of I/Os in flight. 
For example, PWL can provide higher performance at low io_depth, but at high io_depth, such as when the number of I/Os is greater than 32, the performance is often worse than that in cases without cache. Use cases for PMEM caching are: Different from RBD cache, PWL cache has non-volatile characteristics and is used in scenarios where you do not want data loss and need performance. RWL mode provides low latency. It has a stable low latency for burst I/Os and it is suitable for those scenarios with high requirements for stable low latency. RWL mode also has high continuous and stable performance improvement in scenarios with low I/O depth or not too much inflight I/O. Use case for SSD caching is: The advantages of SSD mode are similar to RWL mode. SSD hardware is relatively cheap and popular, but its performance is slightly lower than PMEM. 2.17. Persistent write log cache limitations When using Persistent Write Log (PWL) cache, there are several limitations that should be considered. The underlying implementation of persistent memory (PMEM) and solid-state disks (SSD) is different, with PMEM having higher performance. At present, PMEM can provide "persist on write" and SSD is "persist on flush or checkpoint". In future releases, these two modes will be configurable. When users switch frequently and open and close images repeatedly, Ceph displays poor performance. If PWL cache is enabled, the performance is worse. It is not recommended to set num_jobs in a Flexible I/O (fio) test, but instead setup multiple jobs to write different images. 2.18. Enabling persistent write log cache You can enable persistent write log cache (PWL) on a Red Hat Ceph Storage cluster by setting the Ceph RADOS block device (RBD) rbd_persistent_cache_mode and rbd_plugins options. Important The exclusive-lock feature must be enabled to enable persistent write log cache. The cache can be loaded only after the exclusive-lock is acquired. Exclusive-locks are enabled on newly created images by default unless overridden by the rbd_default_features configuration option or the --image-feature flag for the rbd create command. See the Enabling and disabling image features section for more details on the exclusive-lock feature. Set the persistent write log cache options at the host level by using the ceph config set command. Set the persistent write log cache options at the pool or image level by using the rbd config pool set or the rbd config image set commands. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the monitor node. The exclusive-lock feature is enabled. Client-side disks are persistent memory (PMEM) or solid-state disks (SSD). RBD cache is disabled. Procedure Enable PWL cache: At the host level, use the ceph config set command: Syntax Replace CACHE_MODE with rwl or ssd . Example At the pool level, use the rbd config pool set command: Syntax Replace CACHE_MODE with rwl or ssd . Example At the image level, use the rbd config image set command: Syntax Replace CACHE_MODE with rwl or ssd . Example Optional: Set the additional RBD options at the host, the pool, or the image level: Syntax 1 rbd_persistent_cache_path - A file folder to cache data that must have direct access (DAX) enabled when using the rwl mode to avoid performance degradation. 2 rbd_persistent_cache_size - The cache size per image, with a minimum cache size of 1 GB. The larger the cache size, the better the performance. 
Setting additional RBD options for rwl mode: Example Setting additional RBD options for ssd mode: Example Additional Resources See the Direct Access for files article on kernel.org for more details on using DAX. 2.19. Checking persistent write log cache status You can check the status of the Persistent Write Log (PWL) cache. The cache is used when an exclusive lock is acquired, and when the exclusive-lock is released, the persistent write log cache is closed. The cache status shows information about the cache size, location, type, and other cache-related information. Updates to the cache status are done when the cache is opened and closed. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the monitor node. A running process with PWL cache enabled. Procedure View the PWL cache status: Syntax Example 2.20. Flushing persistent write log cache You can flush the cache file with the rbd command, specifying persistent-cache flush , the pool name, and the image name before discarding the persistent write log (PWL) cache. The flush command can explicitly write cache files back to the OSDs. If there is a cache interruption or the application dies unexpectedly, all the entries in the cache are flushed to the OSDs so that you can manually flush the data and then invalidate the cache. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the monitor node. PWL cache is enabled. Procedure Flush the PWL cache: Syntax Example Additional Resources See the Discarding persistent write log cache section in the Red Hat Ceph Storage Block Device Guide for more details. 2.21. Discarding persistent write log cache You might need to manually discard the Persistent Write Log (PWL) cache, for example, if the data in the cache has expired. You can discard a cache file for an image by using the rbd persistent-cache invalidate command. The command removes the cache metadata for the specified image, disables the cache feature, and deletes the local cache file, if it exists. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the monitor node. PWL cache is enabled. Procedure Discard PWL cache: Syntax Example 2.22. Monitoring performance of Ceph Block Devices using the command-line interface Starting with Red Hat Ceph Storage 4.1, a performance metrics gathering framework is integrated within the Ceph OSD and Manager components. This framework provides a built-in method to generate and process performance metrics upon which other Ceph Block Device performance monitoring solutions are built. A new Ceph Manager module, rbd_support , aggregates the performance metrics when enabled. The rbd command has two new actions: iotop and iostat . Note The initial use of these actions can take around 30 seconds to populate the data fields. Prerequisites User-level access to a Ceph Monitor node. Procedure Ensure the rbd_support Ceph Manager module is enabled: Example To display an "iotop"-style of images: Example Note The write ops, read-ops, write-bytes, read-bytes, write-latency, and read-latency columns can be sorted dynamically by using the right and left arrow keys. To display an "iostat"-style of images: Example Note The output from this command can be in JSON or XML format, and then can be sorted using other command-line tools.
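The cache and monitoring commands above can be combined into a short maintenance pass, sketched below; pool1/image1 is the placeholder image used in this chapter, and the sequence assumes PWL cache and the rbd_support Ceph Manager module are already enabled as described earlier.

Example

# Check the location, mode, and dirty state of the persistent write log cache
rbd status pool1/image1
# Write cached entries back to the OSDs, then discard the local cache file
rbd persistent-cache flush pool1/image1
rbd persistent-cache invalidate pool1/image1
# Confirm the rbd_support manager module is enabled, then watch per-image I/O
ceph mgr module ls
rbd perf image iotop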
[ "rbd help COMMAND SUBCOMMAND", "rbd help snap list", "ceph osd pool create POOL_NAME PG_NUM ceph osd pool application enable POOL_NAME rbd rbd pool init -p POOL_NAME", "ceph osd pool create pool1 ceph osd pool application enable pool1 rbd rbd pool init -p pool1", "rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME", "rbd create image1 --size 1024 --pool pool1", "rbd ls", "rbd ls POOL_NAME", "rbd ls pool1", "rbd --image IMAGE_NAME info", "rbd --image image1 info", "rbd --image IMAGE_NAME -p POOL_NAME info", "rbd --image image1 -p pool1 info", "rbd resize --image IMAGE_NAME --size SIZE", "rbd resize --image image1 --size 1024", "rbd resize --image POOL_NAME / IMAGE_NAME --size SIZE", "rbd resize --image pool1/image1 --size 1024", "rbd rm IMAGE_NAME", "rbd rm image1", "rbd rm IMAGE_NAME -p POOL_NAME", "rbd rm image1 -p pool1", "rbd trash mv [ POOL_NAME /] IMAGE_NAME", "rbd trash mv pool1/image1", "rbd trash rm [ POOL_NAME /] IMAGE_ID", "rbd trash rm pool1/d35ed01706a0", "rbd trash restore [ POOL_NAME /] IMAGE_ID", "rbd trash restore pool1/d35ed01706a0", "rbd trash purge POOL_NAME", "rbd trash purge pool1 Removing images: 100% complete...done.", "rbd trash purge schedule add --pool POOL_NAME INTERVAL", "rbd trash purge schedule add --pool pool1 10m", "rbd trash purge schedule ls --pool POOL_NAME", "rbd trash purge schedule ls --pool pool1 every 10m", "rbd trash purge schedule status POOL NAMESPACE SCHEDULE TIME pool1 2021-08-02 11:50:00", "rbd trash purge schedule remove --pool POOL_NAME INTERVAL", "rbd trash purge schedule remove --pool pool1 10m", "rbd --image POOL_NAME / IMAGE_NAME info", "rbd --image pool1/image1 info", "rbd feature enable POOL_NAME / IMAGE_NAME FEATURE_NAME", "rbd feature enable pool1/image1 exclusive-lock", "rbd object-map rebuild POOL_NAME / IMAGE_NAME", "rbd feature disable POOL_NAME / IMAGE_NAME FEATURE_NAME", "rbd feature disable pool1/image1 fast-diff", "rbd image-meta set POOL_NAME / IMAGE_NAME KEY VALUE", "rbd image-meta set pool1/image1 last_update 2021-06-06", "rbd image-meta get POOL_NAME / IMAGE_NAME KEY", "rbd image-meta get pool1/image1 last_update", "rbd image-meta list POOL_NAME / IMAGE_NAME", "rbd image-meta list pool1/image1", "rbd image-meta remove POOL_NAME / IMAGE_NAME KEY", "rbd image-meta remove pool1/image1 last_update", "rbd config image set POOL_NAME / IMAGE_NAME PARAMETER VALUE", "rbd config image set pool1/image1 rbd_cache false", "rbd migration prepare SOURCE_IMAGE TARGET_IMAGE", "rbd migration prepare pool1/image1 pool2/image2", "rbd status TARGET_IMAGE", "rbd status pool2/image2 Watchers: none Migration: source: pool1/image1 (5e2cba2f62e) destination: pool2/image2 (5e2ed95ed806) state: prepared", "rbd migration execute TARGET_IMAGE", "rbd migration execute pool2/image2", "rbd status pool2/image2 Watchers: watcher=1.2.3.4:0/3695551461 client.123 cookie=123 Migration: source: pool1/image1 (5e2cba2f62e) destination: pool2/image2 (5e2ed95ed806) state: executed", "rbd migration commit TARGET_IMAGE", "rbd migration commit pool2/image2", "rbd migration commit pool2/image2 --force", "rbd export volumes/ VOLUME_NAME - | rbd import --image-format 2 - volumes_new/ VOLUME_NAME", "rbd export volumes/volume-3c4c63e3-3208-436f-9585-fee4e2a3de16 - | rbd import --image-format 2 - volumes_new/volume-3c4c63e3-3208-436f-9585-fee4e2a3de16", "rbd export volume/ VOLUME_NAME FILE_PATH rbd import --image-format 2 FILE_PATH volumes_new/ VOLUME_NAME", "rbd export volumes/volume-3c4c63e3-3208-436f-9585-fee4e2a3de16 <path of export file> rbd import --image-format 
2 <path> volumes_new/volume-3c4c63e3-3208-436f-9585-fee4e2a3de16", "rbd map POOLNAME / IMAGE_NAME -- OPT1 VAL1 -- OPT2 VAL2", "foo/bar1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring foo/bar2 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring,options='lock_on_read,queue_depth=1024'", "systemctl enable rbdmap.service", "ceph config set client rbd_persistent_cache_mode CACHE_MODE ceph config set client rbd_plugins pwl_cache", "ceph config set client rbd_persistent_cache_mode ssd ceph config set client rbd_plugins pwl_cache", "rbd config pool set POOL_NAME rbd_persistent_cache_mode CACHE_MODE rbd config pool set POOL_NAME rbd_plugins pwl_cache", "rbd config pool set pool1 rbd_persistent_cache_mode ssd rbd config pool set pool1 rbd_plugins pwl_cache", "rbd config image set POOL_NAME / IMAGE_NAME rbd_persistent_cache_mode CACHE_MODE rbd config image set POOL_NAME / IMAGE_NAME rbd_plugins pwl_cache", "rbd config image set pool1/image1 rbd_persistent_cache_mode ssd rbd config image set pool1/image1 rbd_plugins pwl_cache", "rbd_persistent_cache_mode CACHE_MODE rbd_plugins pwl_cache rbd_persistent_cache_path / PATH_TO_CACHE_DIRECTORY 1 rbd_persistent_cache_size PERSISTENT_CACHE_SIZE 2", "rbd_cache false rbd_persistent_cache_mode rwl rbd_plugins pwl_cache rbd_persistent_cache_path /mnt/pmem/cache/ rbd_persistent_cache_size 1073741824", "rbd_cache false rbd_persistent_cache_mode ssd rbd_plugins pwl_cache rbd_persistent_cache_path /mnt/nvme/cache rbd_persistent_cache_size 1073741824", "rbd status POOL_NAME / IMAGE_NAME", "rbd status pool1/image1 Watchers: watcher=10.10.0.102:0/1061883624 client.25496 cookie=140338056493088 Persistent cache state: host: host02 path: /mnt/nvme0/rbd-pwl.rbd.101e5824ad9a.pool size: 1 GiB mode: ssd stats_timestamp: Mon Apr 18 13:26:32 2022 present: true empty: false clean: false allocated: 509 MiB cached: 501 MiB dirty: 338 MiB free: 515 MiB hits_full: 1450 / 61% hits_partial: 0 / 0% misses: 924 hit_bytes: 192 MiB / 66% miss_bytes: 97 MiB", "rbd persistent-cache flush POOL_NAME / IMAGE_NAME", "rbd persistent-cache flush pool1/image1", "rbd persistent-cache invalidate POOL_NAME / IMAGE_NAME", "rbd persistent-cache invalidate pool1/image1", "ceph mgr module ls { \"always_on_modules\": [ \"balancer\", \"crash\", \"devicehealth\", \"orchestrator\", \"pg_autoscaler\", \"progress\", \"rbd_support\", <-- \"status\", \"telemetry\", \"volumes\" }", "[user@mon ~]USD rbd perf image iotop", "[user@mon ~]USD rbd perf image iostat" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/block_device_guide/ceph-block-devices
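As a quick recap of the basic image workflow from the Ceph block devices chapter above, the following sketch creates a pool, initializes it for RBD, creates and inspects an image, and then grows it; pool1, image1, and the sizes in MB are the placeholder values used in the chapter's examples.

Example

# Create and initialize a pool for RBD
ceph osd pool create pool1
ceph osd pool application enable pool1 rbd
rbd pool init -p pool1
# Create a 1 GB image and inspect it
rbd create image1 --size 1024 --pool pool1
rbd --image image1 -p pool1 info
# Grow the image to 2 GB
rbd resize --image pool1/image1 --size 2048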
Chapter 8. Managing user groups using Ansible playbooks
Chapter 8. Managing user groups using Ansible playbooks This section introduces user group management using Ansible playbooks. A user group is a set of users with common privileges, password policies, and other characteristics. A user group in Identity Management (IdM) can include: IdM users other IdM user groups external users, which are users that exist outside of IdM The section includes the following topics: The different group types in IdM Direct and indirect group members Ensuring the presence of IdM groups and group members using Ansible playbooks Using Ansible to enable AD users to administer IdM Ensuring the presence of member managers in IDM user groups using Ansible playbooks Ensuring the absence of member managers in IDM user groups using Ansible playbooks 8.1. The different group types in IdM IdM supports the following types of groups: POSIX groups (the default) POSIX groups support Linux POSIX attributes for their members. Note that groups that interact with Active Directory cannot use POSIX attributes. POSIX attributes identify users as separate entities. Examples of POSIX attributes relevant to users include uidNumber , a user number (UID), and gidNumber , a group number (GID). Non-POSIX groups Non-POSIX groups do not support POSIX attributes. For example, these groups do not have a GID defined. All members of this type of group must belong to the IdM domain. External groups Use external groups to add group members that exist in an identity store outside of the IdM domain, such as: A local system An Active Directory domain A directory service External groups do not support POSIX attributes. For example, these groups do not have a GID defined. Table 8.1. User groups created by default Group name Default group members ipausers All IdM users admins Users with administrative privileges, including the default admin user editors This is a legacy group that no longer has any special privileges trust admins Users with privileges to manage the Active Directory trusts When you add a user to a user group, the user gains the privileges and policies associated with the group. For example, to grant administrative privileges to a user, add the user to the admins group. Warning Do not delete the admins group. As admins is a pre-defined group required by IdM, this operation causes problems with certain commands. In addition, IdM creates user private groups by default whenever a new user is created in IdM. For more information about private groups, see Adding users without a private group . 8.2. Direct and indirect group members User group attributes in IdM apply to both direct and indirect members: when group B is a member of group A, all users in group B are considered indirect members of group A. For example, in the following diagram: User 1 and User 2 are direct members of group A. User 3, User 4, and User 5 are indirect members of group A. Figure 8.1. Direct and Indirect Group Membership If you set a password policy for user group A, the policy also applies to all users in user group B. 8.3. Ensuring the presence of IdM groups and group members using Ansible playbooks The following procedure describes ensuring the presence of IdM groups and group members - both users and user groups - using an Ansible playbook. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The users you want to reference in your Ansible playbook exist in IdM. For details on ensuring the presence of users using Ansible, see Managing user accounts using Ansible playbooks . Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary user and group information: --- - name: Playbook to handle groups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create group ops with gid 1234 ipagroup: ipaadmin_password: "{{ ipaadmin_password }}" name: ops gidnumber: 1234 - name: Create group sysops ipagroup: ipaadmin_password: "{{ ipaadmin_password }}" name: sysops user: - idm_user - name: Create group appops ipagroup: ipaadmin_password: "{{ ipaadmin_password }}" name: appops - name: Add group members sysops and appops to group ops ipagroup: ipaadmin_password: "{{ ipaadmin_password }}" name: ops group: - sysops - appops Run the playbook: Verification You can verify if the ops group contains sysops and appops as direct members and idm_user as an indirect member by using the ipa group-show command: Log into ipaserver as administrator: Display information about ops : The appops and sysops groups - the latter including the idm_user user - exist in IdM. Additional resources See the /usr/share/doc/ansible-freeipa/README-group.md Markdown file. 8.4. Using Ansible to add multiple IdM groups in a single task You can use the ansible-freeipa ipagroup module to add, modify, and delete multiple Identity Management (IdM) user groups with a single Ansible task. For that, use the groups option of the ipagroup module. Using the groups option, you can also specify multiple group variables that only apply to a particular group. Define this group by the name variable, which is the only mandatory variable for the groups option. Complete this procedure to ensure the presence of the sysops and the appops groups in IdM in a single task. Define the sysops group as a nonposix group and the appops group as an external group. Prerequisites On the control node: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server in the ~/ MyPlaybooks / directory. You are using RHEL 9.3 and later. You have stored your ipaadmin_password in the secret.yml Ansible vault. Procedure Create your Ansible playbook file add-nonposix-and-external-groups.yml with the following content: Run the playbook: Additional resources The group module in ansible-freeipa upstream docs 8.5. Using Ansible to enable AD users to administer IdM Follow this procedure to use an Ansible playbook to ensure that a user ID override is present in an Identity Management (IdM) group. The user ID override is the override of an Active Directory (AD) user that you created in the Default Trust View after you established a trust with AD. As a result of running the playbook, an AD user, for example an AD administrator, is able to fully administer IdM without having two different accounts and passwords. 
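Assuming the inventory and vault layout described in the prerequisites, a minimal run of the playbooks from this chapter looks like the following; the file names and placeholder paths are the ones used in the chapter's own examples.

Example

# Ensure the groups and their members are present in IdM
ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-group-members.yml
# Add the AD administrator's user ID override to the admins group
ansible-playbook --vault-password-file=password_file -v -i inventory add-useridoverride-to-group.yml
# Verify the result from an IdM host
ipa group-show ops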
Prerequisites You know the IdM admin password. You have installed a trust with AD . The user ID override of the AD user already exists in IdM. If it does not, create it with the ipa idoverrideuser-add 'default trust view' [email protected] command. The group to which you are adding the user ID override already exists in IdM . You are using the 4.8.7 version of IdM or later. To view the version of IdM you have installed on your server, enter ipa --version . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Create an add-useridoverride-to-group.yml playbook with the following content: In the example: Secret123 is the IdM admin password. admins is the name of the IdM POSIX group to which you are adding the [email protected] ID override. Members of this group have full administrator privileges. [email protected] is the user ID override of an AD administrator. The user is stored in the AD domain with which a trust has been established. Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources ID overrides for AD users /usr/share/doc/ansible-freeipa/README-group.md /usr/share/doc/ansible-freeipa/playbooks/user Using ID views in Active Directory environments Enabling AD users to administer IdM 8.6. Ensuring the presence of member managers in IdM user groups using Ansible playbooks The following procedure describes ensuring the presence of IdM member managers - both users and user groups - using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the user or group you are adding as member managers and the name of the group you want them to manage. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary user and group member management information: Run the playbook: Verification You can verify if the group_a group contains test as a member manager and group_admins is a member manager of group_a by using the ipa group-show command: Log into ipaserver as administrator: Display information about managergroup1 : Additional resources See ipa host-add-member-manager --help . See the ipa man page on your system. 8.7. 
Ensuring the absence of member managers in IdM user groups using Ansible playbooks The following procedure describes ensuring the absence of IdM member managers - both users and user groups - using an Ansible playbook. Prerequisites On the control node: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You must have the name of the existing member manager user or group you are removing and the name of the group they are managing. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the necessary user and group member management information: --- - name: Playbook to handle membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user and group members are absent for group_a ipagroup: ipaadmin_password: "{{ ipaadmin_password }}" name: group_a membermanager_user: test membermanager_group: group_admins action: member state: absent Run the playbook: Verification You can verify if the group_a group does not contain test as a member manager and group_admins as a member manager of group_a by using the ipa group-show command: Log into ipaserver as administrator: Display information about group_a: Additional resources See ipa host-remove-member-manager --help . See the ipa man page on your system.
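Putting the last procedure together with its verification, a sketch of removing the member managers and checking the result could look like this; the placeholder paths, the test user, and the group_admins and group_a groups are the ones used in the chapter's examples.

Example

ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-member-managers-are-absent.yml
# group_a should no longer list test or group_admins as membership managers
ipa group-show group_a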
[ "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle groups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create group ops with gid 1234 ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: ops gidnumber: 1234 - name: Create group sysops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: sysops user: - idm_user - name: Create group appops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: appops - name: Add group members sysops and appops to group ops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: ops group: - sysops - appops", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-group-members.yml", "ssh [email protected] Password: [admin@server /]USD", "ipaserver]USD ipa group-show ops Group name: ops GID: 1234 Member groups: sysops, appops Indirect Member users: idm_user", "--- - name: Playbook to add nonposix and external groups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Add nonposix group sysops and external group appops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" groups: - name: sysops nonposix: true - name: appops external: true", "ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/add-nonposix-and-external-groups.yml", "cd ~/ MyPlaybooks /", "--- - name: Playbook to ensure presence of users in a group hosts: ipaserver - name: Ensure the [email protected] user ID override is a member of the admins group: ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: admins idoverrideuser: - [email protected]", "ansible-playbook --vault-password-file=password_file -v -i inventory add-useridoverride-to-group.yml", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure user test is present for group_a ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_a membermanager_user: test - name: Ensure group_admins is present for group_a ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_a membermanager_group: group_admins", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-member-managers-user-groups.yml", "ssh [email protected] Password: [admin@server /]USD", "ipaserver]USD ipa group-show group_a Group name: group_a GID: 1133400009 Membership managed by groups: group_admins Membership managed by users: test", "[ipaserver] server.idm.example.com", "--- - name: Playbook to handle membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user and group members are absent for group_a ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_a membermanager_user: test membermanager_group: group_admins action: member state: absent", "ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-member-managers-are-absent.yml", "ssh [email protected] Password: [admin@server /]USD", "ipaserver]USD ipa group-show group_a Group name: group_a GID: 1133400009" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_ansible_to_install_and_manage_identity_management/managing-user-groups-using-ansible-playbooks_using-ansible-to-install-and-manage-identity-management
Chapter 3. Creating an IBM Power Virtual Server workspace
Chapter 3. Creating an IBM Power Virtual Server workspace 3.1. Creating an IBM Power Virtual Server workspace Use the following procedure to create an IBM Power(R) Virtual Server workspace. Procedure To create an IBM Power(R) Virtual Server workspace, complete step 1 to step 5 from the IBM Cloud(R) documentation for Creating an IBM Power(R) Virtual Server . After it has finished provisioning, retrieve the 32-character alphanumeric Globally Unique Identifier (GUID) of your new workspace by entering the following command: $ ibmcloud resource service-instance <workspace name> 3.2. Next steps Installing a cluster on IBM Power(R) Virtual Server with customizations
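If you only need the GUID for later use, a small sketch such as the following can capture it; the grep filter is an assumption for illustration and is not part of the documented procedure, and <workspace name> is the name of your own workspace.

Example

# Print only the GUID line from the workspace details (assumes the output labels the field GUID)
ibmcloud resource service-instance <workspace name> | grep -i guid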
[ "ibmcloud resource service-instance <workspace name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_power_virtual_server/creating-ibm-power-vs-workspace
Using source-to-image for OpenShift with Red Hat build of OpenJDK 8
Using source-to-image for OpenShift with Red Hat build of OpenJDK 8 Red Hat build of OpenJDK 8 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_source-to-image_for_openshift_with_red_hat_build_of_openjdk_8/index
Object Gateway with LDAP and AD Guide
Object Gateway with LDAP and AD Guide Red Hat Ceph Storage 4 Configuring Ceph Object Gateway to use LDAP and AD to authenticate object gateway users. Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/object_gateway_with_ldap_and_ad_guide/index
Chapter 22. How is the subscription threshold calculated?
Chapter 22. How is the subscription threshold calculated? In the subscriptions service, the usage and utilization graph for most product pages contains a subscription threshold. This line shows the maximum capacity of similar subscriptions across all of your contracts. Note Some product pages do not show a subscription threshold on the graph. For a product page that includes pay-as-you-go On-Demand subscriptions, that graph does not display a subscription threshold because of the characteristics of that subscription type. For an account that includes any subscription with a unit of measurement (UoM) of "Unlimited" as part of the terms, the graph for any product page that includes this subscription does not display a subscription threshold. If filtering is used to exclude this subscription from the views, the graph will display a subscription threshold for the filtered data. To measure the maximum capacity of an organization's account and plot the subscription threshold line in the graph, the subscriptions service does the following steps: Accesses the Red Hat internal subscription services to gather subscription-related contract data for the account. Analyzes every subscription in the account, including each SKU (stock-keeping unit) that was purchased and the amount of each SKU that was purchased. Determines which products are provided in each SKU that is found. Calculates the maximum amount of technology that is provided by a subscription by multiplying the amount of technology that a SKU allows by the number of that SKU that was purchased in the subscription. The amount of technology that a SKU allows is the unit of measurement for the SKU multiplied by the number of these units (the limit) that the SKU provides. Adds the maximum amount of technology for every subscription to determine the subscription threshold that appears on the graph for every product or product portfolio. Analyzes the available subscription attributes data (also known as system purpose data or subscription settings) to enable filtering of that data with the filters in the subscriptions service.
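As a purely hypothetical illustration of that calculation: suppose an account holds two subscriptions of the same SKU, which allows 16 cores (unit of measurement: cores, limit: 16), purchased in quantities of 10 and 5. The numbers below are invented for illustration only and do not come from any real contract data.

subscription 1: 16 cores x 10 purchased = 160 cores
subscription 2: 16 cores x 5 purchased = 80 cores
subscription threshold plotted on the graph = 160 + 80 = 240 cores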
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_the_subscriptions_service/con-trbl-how-subscription-threshold-calculated_assembly-troubleshooting-common-questions-ctxt
Chapter 11. Dashboard (horizon) Parameters
Chapter 11. Dashboard (horizon) Parameters You can modify the horizon service with dashboard parameters. Parameter Description HorizonAllowedHosts A list of IP/Hostname for the server OpenStack Dashboard (horizon) is running on. Used for header checks. The default value is * . HorizonCustomizationModule OpenStack Dashboard (horizon) has a global overrides mechanism available to perform customizations. HorizonDomainChoices Specifies available domains to choose from. We expect an array of hashes, and the hashes should have two items each (name, display) containing OpenStack Identity (keystone) domain name and a human-readable description of the domain respectively. HorizonHelpURL On top of dashboard there is a Help button. This button could be used to re-direct user to vendor documentation or dedicated help portal. The default value is http://docs.openstack.org . HorizonPasswordValidator Regex for password validation. HorizonPasswordValidatorHelp Help text for password validation. HorizonSecret Secret key for the webserver. HorizonSecureCookies Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in OpenStack Dashboard (horizon). The default value is false . HorizonSessionTimeout Set session timeout for horizon in seconds. The default value is 1800 . HorizonVhostExtraParams Extra parameters for OpenStack Dashboard (horizon) vhost configuration. The default value is {'add_listen': 'true', 'priority': '10', 'access_log_format': '%a %l %u %t \\"%r\\" %>s %b \\"%%{}{Referer}i\\" \\"%%{}{User-Agent}i\\"', 'options': ['FollowSymLinks', 'MultiViews']} . MemcachedIPv6 Enable IPv6 features in Memcached. The default value is false . TimeZone The timezone to be set on the overcloud. The default value is UTC . WebSSOChoices Specifies the list of SSO authentication choices to present. Each item is a list of an SSO choice identifier and a display message. The default value is [['OIDC', 'OpenID Connect']] . WebSSOEnable Enable support for Web Single Sign-On. The default value is false . WebSSOIDPMapping Specifies a mapping from SSO authentication choice to identity provider and protocol. The identity provider and protocol names must match the resources defined in keystone. The default value is {'OIDC': ['myidp', 'openid']} . WebSSOInitialChoice The initial authentication choice to select by default. The default value is OIDC .
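A minimal custom environment file that overrides a few of these defaults might look like the following sketch; only the parameter names come from the table above, and the specific values are illustrative assumptions rather than recommended settings.

parameter_defaults:
  HorizonAllowedHosts: ['dashboard.example.com']
  HorizonSessionTimeout: 3600
  HorizonSecureCookies: true
  HorizonHelpURL: 'https://docs.example.com/help'
  WebSSOEnable: false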
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/overcloud_parameters/ref_dashboard-horizon-parameters_overcloud_parameters
Chapter 8. Assigning a Puppet class to an individual host
Chapter 8. Assigning a Puppet class to an individual host Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Locate the host you want to add the ntp Puppet class to and click Edit . Select the Puppet ENC tab and look for the ntp class. Click the + symbol next to ntp to add the ntp submodule to the list of included classes . Click Submit to save your changes. Tip If the Puppet classes tab of an individual host is empty, check if it is assigned to the proper Puppet environment. Verify the Puppet configuration. Navigate to Hosts > All Hosts and select the host. From the top overflow menu, select Legacy UI . Under Details , click Puppet YAML . This produces output similar to the following: --- parameters: // shortened YAML output classes: ntp: servers: '["0.de.pool.ntp.org","1.de.pool.ntp.org","2.de.pool.ntp.org","3.de.pool.ntp.org"]' environment: production ... Verify the ntp configuration. Connect to your host using SSH and check the content of /etc/ntp.conf . This example assumes your host is running CentOS 7 . Other operating systems may store the ntp config file in a different path. Tip You may need to run the Puppet agent on your host by executing the following command: Running the following command on the host checks which ntp servers are used for clock synchronization: This returns output similar to the following: You now have a working ntp module which you can add to a host or group of hosts to roll out your ntp configuration automatically.
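On the managed host itself, the verification steps above come down to something like the following; ntpq -p is an assumption about how to query the NTP peers on CentOS 7 and is not spelled out in the original procedure.

# Apply the Puppet catalog immediately instead of waiting for the next agent run
puppet agent -t
# Confirm Puppet wrote the expected servers
cat /etc/ntp.conf
# Check which NTP servers the daemon is synchronizing against (assumed command)
ntpq -p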
[ "--- parameters: // shortened YAML output classes: ntp: servers: '[\"0.de.pool.ntp.org\",\"1.de.pool.ntp.org\",\"2.de.pool.ntp.org\",\"3.de.pool.ntp.org\"]' environment: production", "puppet agent -t", "cat /etc/ntp.conf", "ntp.conf: Managed by puppet. server 0.de.pool.ntp.org server 1.de.pool.ntp.org server 2.de.pool.ntp.org server 3.de.pool.ntp.org" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_configurations_using_puppet_integration/assigning-a-puppet-class-to-an-individual-host_managing-configurations-puppet
21.2.3. SELinux Utilities
21.2.3. SELinux Utilities The following are some of the most commonly used SELinux utilities: /usr/bin/setenforce - Modifies in real-time the mode SELinux is running. By executing setenforce 1 , SELinux is put in enforcing mode. By executing setenforce 0 , SELinux is put in permissive mode. To actually disable SELinux, you need to either set the parameter in /etc/sysconfig/selinux or pass the parameter selinux=0 to the kernel, either in /etc/grub.conf or at boot time. /usr/bin/sestatus -v - Gets the detailed status of a system running SELinux. The following example shows an excerpt of sestatus output: /usr/bin/newrole - Runs a new shell in a new context, or role. Policy must allow the transition to the new role. /sbin/restorecon - Sets the security context of one or more files by marking the extended attributes with the appropriate file or security context. /sbin/fixfiles - Checks or corrects the security context database on the file system. Refer to the man page associated with these utilities for more information. For more information on all binary utilities available, refer to the setools or policycoreutils package contents by running rpm -ql <package-name> , where <package-name> is the name of the specific package.
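A short illustrative session with these utilities might look like the following; the path passed to restorecon is only an example, and the -R and -v flags (recursive, verbose) are assumptions based on the utility's common usage rather than options described above.

# Show the detailed SELinux status
/usr/bin/sestatus -v
# Switch to permissive mode, then back to enforcing
/usr/bin/setenforce 0
/usr/bin/setenforce 1
# Reset the security context of a directory tree (example path)
/sbin/restorecon -R -v /var/www
# List the files provided by the policycoreutils package
rpm -ql policycoreutils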
[ "SELinux status: enabled SELinuxfs mount: /selinux Current mode: enforcing Policy version: 18" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-SELinux-files-utils
Chapter 39. Monitoring network activity with SystemTap
Chapter 39. Monitoring network activity with SystemTap You can use helpful example SystemTap scripts available in the /usr/share/systemtap/testsuite/systemtap.examples/ directory, upon installing the systemtap-testsuite package, to monitor and investigate the network activity of your system. 39.1. Profiling network activity with SystemTap You can use the nettop.stp example SystemTap script to profile network activity. The script tracks which processes are generating network traffic on the system, and provides the following information about each process: PID The ID of the listed process. UID User ID. A user ID of 0 refers to the root user. DEV Which ethernet device the process used to send or receive data (for example, eth0, eth1). XMIT_PK The number of packets transmitted by the process. RECV_PK The number of packets received by the process. XMIT_KB The amount of data sent by the process, in kilobytes. RECV_KB The amount of data received by the service, in kilobytes. Prerequisites You have installed SystemTap as described in Installing SystemTap . Procedure Run the nettop.stp script: The nettop.stp script provides network profile sampling every 5 seconds. Output of the nettop.stp script looks similar to the following: 39.2. Tracing functions called in network socket code with SystemTap You can use the socket-trace.stp example SystemTap script to trace functions called from the kernel's net/socket.c file. This helps you identify, in finer detail, how each process interacts with the network at the kernel level. Prerequisites You have installed SystemTap as described in Installing SystemTap . Procedure Run the socket-trace.stp script: A 3-second excerpt of the output of the socket-trace.stp script looks similar to the following: 39.3. Monitoring network packet drops with SystemTap The network stack in Linux can discard packets for various reasons. Some Linux kernels include a tracepoint, kernel.trace("kfree_skb") , which tracks where packets are discarded. The dropwatch.stp SystemTap script uses kernel.trace("kfree_skb") to trace packet discards; the script summarizes what locations discard packets in every 5-second interval. Prerequisites You have installed SystemTap as described in Installing SystemTap . Procedure Run the dropwatch.stp script: Running the dropwatch.stp script for 15 seconds results in output similar to the following: Note To make the location of packet drops more meaningful, see the /boot/System.map-USD(uname -r) file. This file lists the starting addresses for each function, enabling you to map the addresses in the output of the dropwatch.stp script to a specific function name. Given the following snippet of the /boot/System.map-USD(uname -r) file, the address 0xffffffff8024cd0f maps to the function unix_stream_recvmsg and the address 0xffffffff8044b472 maps to the function arp_rcv :
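To turn a drop address reported by dropwatch.stp into a function name without reading /boot/System.map-$(uname -r) by hand, a lookup along the following lines can help; the awk one-liner is an assumption, not part of the example scripts, and it relies on the symbol table being sorted by address with fixed-width lowercase hexadecimal addresses.

# Print the name of the last symbol whose start address is not greater than the drop address
awk -v addr=ffffffff8024cd0f '$1 <= addr { sym = $3 } END { print sym }' /boot/System.map-$(uname -r)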
[ "stap --example nettop.stp", "[...] PID UID DEV XMIT_PK RECV_PK XMIT_KB RECV_KB COMMAND 0 0 eth0 0 5 0 0 swapper 11178 0 eth0 2 0 0 0 synergyc PID UID DEV XMIT_PK RECV_PK XMIT_KB RECV_KB COMMAND 2886 4 eth0 79 0 5 0 cups-polld 11362 0 eth0 0 61 0 5 firefox 0 0 eth0 3 32 0 3 swapper 2886 4 lo 4 4 0 0 cups-polld 11178 0 eth0 3 0 0 0 synergyc PID UID DEV XMIT_PK RECV_PK XMIT_KB RECV_KB COMMAND 0 0 eth0 0 6 0 0 swapper 2886 4 lo 2 2 0 0 cups-polld 11178 0 eth0 3 0 0 0 synergyc 3611 0 eth0 0 1 0 0 Xorg PID UID DEV XMIT_PK RECV_PK XMIT_KB RECV_KB COMMAND 0 0 eth0 3 42 0 2 swapper 11178 0 eth0 43 1 3 0 synergyc 11362 0 eth0 0 7 0 0 firefox 3897 0 eth0 0 1 0 0 multiload-apple", "stap --example socket-trace.stp", "[...] 0 Xorg(3611): -> sock_poll 3 Xorg(3611): <- sock_poll 0 Xorg(3611): -> sock_poll 3 Xorg(3611): <- sock_poll 0 gnome-terminal(11106): -> sock_poll 5 gnome-terminal(11106): <- sock_poll 0 scim-bridge(3883): -> sock_poll 3 scim-bridge(3883): <- sock_poll 0 scim-bridge(3883): -> sys_socketcall 4 scim-bridge(3883): -> sys_recv 8 scim-bridge(3883): -> sys_recvfrom 12 scim-bridge(3883):-> sock_from_file 16 scim-bridge(3883):<- sock_from_file 20 scim-bridge(3883):-> sock_recvmsg 24 scim-bridge(3883):<- sock_recvmsg 28 scim-bridge(3883): <- sys_recvfrom 31 scim-bridge(3883): <- sys_recv 35 scim-bridge(3883): <- sys_socketcall [...]", "stap --example dropwatch.stp", "Monitoring for dropped packets 51 packets dropped at location 0xffffffff8024cd0f 2 packets dropped at location 0xffffffff8044b472 51 packets dropped at location 0xffffffff8024cd0f 1 packets dropped at location 0xffffffff8044b472 97 packets dropped at location 0xffffffff8024cd0f 1 packets dropped at location 0xffffffff8044b472 Stopping dropped packet monitor", "[...] ffffffff8024c5cd T unlock_new_inode ffffffff8024c5da t unix_stream_sendmsg ffffffff8024c920 t unix_stream_recvmsg ffffffff8024cea1 t udp_v4_lookup_longway [...] ffffffff8044addc t arp_process ffffffff8044b360 t arp_rcv ffffffff8044b487 t parp_redo ffffffff8044b48c t arp_solicit [...]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/monitoring-network-activity-with-systemtap_monitoring-and-managing-system-status-and-performance
Chapter 7. Known issues
Chapter 7. Known issues There are no known issues for this release.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_3_release_notes/known_issues
Chapter 7. Deploying a RHOSP hyperconverged infrastructure (HCI) with director Operator
Chapter 7. Deploying a RHOSP hyperconverged infrastructure (HCI) with director Operator You can use director Operator (OSPdO) to deploy an overcloud with hyperconverged infrastructure (HCI). An overcloud with HCI colocates Compute and Red Hat Ceph Storage OSD services on the same nodes. 7.1. Prerequisites Your Compute HCI nodes require extra disks to use as OSDs. You have installed and prepared OSPdO on an operational Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Installing and preparing director Operator . You have created the overcloud networks by using the OpenStackNetConfig custom resource definition (CRD), including the control plane and any isolated networks. For more information, see Creating networks with director Operator . You have created ConfigMaps to store any custom heat templates and environment files for your overcloud. For more information, see Customizing the overcloud with director Operator . You have created a control plane and bare-metal Compute nodes for your overcloud. For more information, see Creating overcloud nodes with director Operator . You have created and applied an OpenStackConfigGenerator cusstom resource to render Ansible playbooks for overcloud configuration. 7.2. Creating a roles_data.yaml file with the Compute HCI role for director Operator To include configuration for the Compute HCI role in your overcloud, you must include the Compute HCI role in the roles_data.yaml file that you include with your overcloud deployment. Note Ensure that you use roles_data.yaml as the file name. Procedure Access the remote shell for openstackclient : Unset the OS_CLOUD environment variable: Change to the cloud-admin directory: Generate a new roles_data.yaml file with the Controller and ComputeHCI roles: Exit the openstackclient pod: Copy the custom roles_data.yaml file from the openstackclient pod to your custom templates directory: Additional resources Creating a roles_data file steps Configuring HCI networking in director Operator 7.3. Configuring HCI networking in director Operator Create directories on your workstation to store your custom templates and environment files, and configure the NIC templates for your Compute HCI role. Procedure Create a directory for your custom templates: Create a custom template file named multiple_nics_vlans_dvr.j2 in your custom_templates directory. Add configuration for the NICs of your bare-metal nodes to your multiple_nics_vlans_dvr.j2 file. For an example NIC configuration file, see Custom NIC heat template for HCI Compute nodes . Create a directory for your custom environment files: Map the NIC template for your overcloud role in the network-environment.yaml environment file in your custom_environment_files directory: Additional resources Custom network interface templates steps Adding custom templates to the overcloud configuration 7.4. Custom NIC heat template for HCI Compute nodes The following example is a heat template that contains NIC configuration for the HCI Compute bare metal nodes. The configuration in the heat template maps the networks to the following bridges and interfaces: Networks Bridge Interface Control Plane, Storage, Internal API N/A nic3 External, Tenant br-ex nic4 To use the following template in your deployment, copy the example to multiple_nics_vlans_dvr.j2 in your custom_templates directory on your workstation. You can modify this configuration for the NIC configuration of your bare-metal nodes. Example 7.5. 
Adding custom templates to the overcloud configuration Director Operator (OSPdO) converts a core set of overcloud heat templates into Ansible playbooks that you apply to provisioned nodes when you are ready to configure the Red Hat OpenStack Platform (RHOSP) software on each node. To add your own custom heat templates and custom roles file into the overcloud deployment, you must archive the template files into a tarball file and include the binary contents of the tarball file in an OpenShift ConfigMap object named tripleo-tarball-config . This tarball file can contain complex directory structures to extend the core set of templates. OSPdO extracts the files and directories from the tarball file into the same directory as the core set of heat templates. If any of your custom templates have the same name as a template in the core collection, the custom template overrides the core template. Note All references in the environment files must be relative to the TripleO heat templates where the tarball is extracted. Prerequisites The custom overcloud templates that you want to apply to provisioned nodes. Procedure Navigate to the location of your custom templates: Archive the templates into a gzipped tarball: Create the tripleo-tarball-config ConfigMap CR and use the tarball as data: Verify that the ConfigMap CR is created: Additional resources Creating and using config maps Understanding heat templates steps Adding custom environment files to the overcloud configuration 7.6. Custom environment file for configuring Hyperconverged Infrastructure (HCI) storage in director Operator The following example is an environment file that contains Red Hat Ceph Storage configuration for the Compute HCI nodes. This configuration maps the OSD nodes to the sdb , sdc , and sdd devices and enables HCI with the is_hci option. Note You can modify this configuration to suit the storage configuration of your bare-metal nodes. Use the "Ceph Placement Groups (PGs) per Pool Calculator" to determine the value for the CephPoolDefaultPgNum parameter. To use this template in your deployment, copy the contents of the example to compute-hci.yaml in your custom_environment_files directory on your workstation. 7.7. Adding custom environment files to the overcloud configuration To enable features or set parameters in the overcloud, you must include environment files with your overcloud deployment. Director Operator (OSPdO) uses a ConfigMap object named heat-env-config to store and retrieve environment files. The ConfigMap object stores the environment files in the following format: For example, the following ConfigMap contains two environment files: Upload a set of custom environment files from a directory to a ConfigMap object that you can include as a part of your overcloud deployment. Prerequisites The custom environment files for your overcloud deployment. Procedure Create the heat-env-config ConfigMap object: Replace <dir_custom_environment_files> with the directory that contains the environment files you want to use in your overcloud deployment. The ConfigMap object stores these as individual data entries. Verify that the heat-env-config ConfigMap object contains all the required environment files: 7.8. Creating HCI Compute nodes and deploying the overcloud Compute nodes provide computing resources to your Red Hat OpenStack Platform (RHOSP) environment. You must have at least one Compute node in your overcloud and you can scale the number of Compute nodes after deployment. 
Define an OpenStackBaremetalSet custom resource (CR) to create Compute nodes from bare-metal machines that the Red Hat OpenShift Container Platform (RHOCP) manages. Tip Use the following commands to view the OpenStackBareMetalSet CRD definition and specification schema: Prerequisites You have used the OpenStackNetConfig CR to create a control plane network and any additional isolated networks. You have created a control plane with the OpenStackControlPlane CRD. Procedure Create a file named openstack-hcicompute.yaml on your workstation. Include the resource specification for the HCI Compute nodes. For example, the specification for 3 HCI Compute nodes is as follows: 1 The name of the HCI Compute node bare metal set, for example, computehci . 2 The OSPdO namespace, for example, openstack . 3 The configuration for the HCI Compute nodes. 4 Optional: The Secret resource that provides root access on each node to users with the password. Save the openstack-hcicompute.yaml file. Create the HCI Compute nodes: Verify that the resource for the HCI Compute nodes is created: To verify the creation of the HCI Compute nodes, view the bare-metal machines that RHOCP manages: Create the Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD. For more information, see Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD . Register the operating system of your overcloud. For more information, see Registering the operating system of your overcloud . Apply the overcloud configuration. For more information, see Applying overcloud configuration with director Operator .
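Before moving on to the configuration-generation and deployment steps listed above, you can optionally wait until the bare-metal hosts finish provisioning. The following is a minimal monitoring sketch and not part of the official procedure: it reuses the computehci name and openstack namespace from the examples in this chapter, and it assumes the .status.provisioning.state field of the metal3 BareMetalHost resource; confirm that field path against your cluster before relying on it.

#!/bin/bash
# Poll the bare-metal hosts until every host reports the "provisioned" state.
# Assumes the OpenStackBaremetalSet from this section is named "computehci".
set -euo pipefail

while true; do
    # List the provisioning state of every BareMetalHost that RHOCP manages.
    states=$(oc get baremetalhosts -n openshift-machine-api \
        -o jsonpath='{range .items[*]}{.status.provisioning.state}{"\n"}{end}')
    echo "Current states:"
    echo "${states}"
    # Stop waiting once no host reports a state other than "provisioned".
    if ! echo "${states}" | grep -qv '^provisioned$'; then
        echo "All bare-metal hosts are provisioned."
        break
    fi
    sleep 30
done

# Show the resulting OpenStackBaremetalSet for a final status check.
oc get openstackbaremetalset/computehci -n openstack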
[ "oc rsh -n openstack openstackclient", "unset OS_CLOUD", "cd /home/cloud-admin/", "openstack overcloud roles generate -o roles_data.yaml Controller ComputeHCI", "exit", "oc cp openstackclient:/home/cloud-admin/roles_data.yaml custom_templates/roles_data.yaml -n openstack", "mkdir custom_templates", "mkdir custom_environment_files", "parameter_defaults: ComputeHCINetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2'", "{% set mtu_list = [ctlplane_mtu] %} {% for network in role_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: BMH provisioning interface used for ctlplane - type: interface name: nic1 mtu: 1500 use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_subnet_cidr }} routes: {{ ctlplane_host_routes }} Disable OCP cluster interface - type: interface name: nic2 mtu: 1500 use_dhcp: false {% for network in networks_all if network not in networks_skip_config|default([]) %} {% if network == 'External' %} - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} dns_servers: {{ ctlplane_dns_nameservers }} use_dhcp: false {% if network in role_networks %} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endif %} members: - type: interface name: nic3 mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} primary: true {% endif %} {% endfor %} - type: ovs_bridge name: br-tenant mtu: {{ min_viable_mtu }} use_dhcp: false members: - type: interface name: nic4 mtu: {{ min_viable_mtu }} use_dhcp: false primary: true {% for network in networks_all if network not in networks_skip_config|default([]) %} {% if network not in [\"External\"] and network in role_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endif %} {% endfor %}", "cd ~/custom_templates", "tar -cvzf custom-config.tar.gz *.yaml", "oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack", "oc get configmap/tripleo-tarball-config -n openstack", "resource_registry: OS::TripleO::Services::CephMgr: deployment/cephadm/ceph-mgr.yaml OS::TripleO::Services::CephMon: deployment/cephadm/ceph-mon.yaml OS::TripleO::Services::CephOSD: deployment/cephadm/ceph-osd.yaml OS::TripleO::Services::CephClient: deployment/cephadm/ceph-client.yaml parameter_defaults: CephDynamicSpec: true CephSpecFqdn: true CephConfigOverrides: rgw_swift_enforce_content_length: true rgw_swift_versioning_enabled: true osd: osd_memory_target_autotune: true osd_numa_auto_affinity: true mgr: mgr/cephadm/autotune_memory_target_ratio: 0.2 CinderEnableIscsiBackend: false CinderEnableRbdBackend: true CinderBackupBackend: ceph CinderEnableNfsBackend: false NovaEnableRbdBackend: true GlanceBackend: rbd CinderRbdPoolName: \"volumes\" NovaRbdPoolName: \"vms\" GlanceRbdPoolName: \"images\" CephPoolDefaultPgNum: 32 CephPoolDefaultSize: 2", "data: <environment_file_name>: |+ <environment_file_contents>", "data: 
network_environment.yaml: |+ parameter_defaults: ComputeNetworkConfigTemplate: 'multiple_nics_vlans_dvr.j2' cloud_name.yaml: |+ parameter_defaults: CloudDomain: ocp4.example.com CloudName: overcloud.ocp4.example.com CloudNameInternal: overcloud.internalapi.ocp4.example.com CloudNameStorage: overcloud.storage.ocp4.example.com CloudNameStorageManagement: overcloud.storagemgmt.ocp4.example.com CloudNameCtlplane: overcloud.ctlplane.ocp4.example.com", "oc create configmap -n openstack heat-env-config --from-file=~/<dir_custom_environment_files>/ --dry-run=client -o yaml | oc apply -f -", "oc get configmap/heat-env-config -n openstack", "oc describe crd openstackbaremetalset oc explain openstackbaremetalset.spec", "apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackBaremetalSet metadata: name: computehci 1 namespace: openstack 2 spec: 3 count: 3 baseImageUrl: http://<source_host>/rhel-9.2-x86_64-kvm.qcow2 deploymentSSHSecret: osp-controlplane-ssh-keys ctlplaneInterface: enp8s0 networks: - ctlplane - internal_api - tenant - storage - storage_mgmt roleName: ComputeHCI passwordSecret: userpassword 4", "oc create -f openstack-hcicompute.yaml -n openstack", "oc get openstackbaremetalset/computehci -n openstack", "oc get baremetalhosts -n openshift-machine-api" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/assembly_deploying-a-RHOSP-hyperconverged-infrastructure-with-director-operator
Appendix C. Job template examples and extensions
Appendix C. Job template examples and extensions Use this section as a reference to help modify, customize, and extend your job templates to suit your requirements. C.1. Customizing job templates When creating a job template, you can include an existing template in the template editor field. This way you can combine templates, or create more specific templates from the general ones. The following template combines default templates to install and start the nginx service on clients: <%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'nginx' %> <%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'nginx' %> The above template specifies parameter values for the rendered template directly. It is also possible to use the input() method to allow users to define input for the rendered template on job execution. For example, you can use the following syntax: <%= render_template 'Package Action - SSH Default', :action => 'install', :package => input("package") %> With the above template, you have to import the parameter definition from the rendered template. To do so, navigate to the Jobs tab, click Add Foreign Input Set , and select the rendered template from the Target template list. You can import all parameters or specify a comma separated list. C.2. Default job template categories Job template category Description Packages Templates for performing package related actions. Install, update, and remove actions are included by default. Puppet Templates for executing Puppet runs on target hosts. Power Templates for performing power related actions. Restart and shutdown actions are included by default. Commands Templates for executing custom commands on remote hosts. Services Templates for performing service related actions. Start, stop, restart, and status actions are included by default. Katello Templates for performing content related actions. These templates are used mainly from different parts of the Satellite web UI (for example bulk actions UI for content hosts), but can be used separately to perform operations such as errata installation. C.3. Example restorecon template This example shows how to create a template called Run Command - restorecon that restores the default SELinux context for all files in the selected directory on target hosts. Procedure In the Satellite web UI, navigate to Hosts > Templates > Job templates . Click New Job Template . Enter Run Command - restorecon in the Name field. Select Default to make the template available to all organizations. Add the following text to the template editor: restorecon -RvF <%= input("directory") %> The <%= input("directory") %> string is replaced by a user-defined directory during job invocation. On the Job tab, set Job category to Commands . Click Add Input to allow job customization. Enter directory to the Name field. The input name must match the value specified in the template editor. Click Required so that the command cannot be executed without the user specified parameter. Select User input from the Input type list. Enter a description to be shown during job invocation, for example Target directory for restorecon . Click Submit . For more information, see Executing a restorecon Template on Multiple Hosts in Managing hosts . C.4. Rendering a restorecon template This example shows how to create a template derived from the Run command - restorecon template created in Example restorecon Template . 
This template does not require user input on job execution; it will restore the SELinux context in all files under the /home/ directory on target hosts. Create a new template as described in Setting up Job Templates , and specify the following string in the template editor: <%= render_template("Run Command - restorecon", :directory => "/home") %> C.5. Executing a restorecon template on multiple hosts This example shows how to run a job based on the template created in Example restorecon Template on multiple hosts. The job restores the SELinux context in all files under the /home/ directory. Procedure In the Satellite web UI, navigate to Monitor > Jobs and click Run job . Select Commands as Job category and Run Command - restorecon as Job template and click Next . Select the hosts on which you want to run the job. If you do not select any hosts, the job will run on all hosts you can see in the current context. In the directory field, provide a directory, for example /home , and click Next . Optional: To configure advanced settings for the job, fill in the Advanced fields . To learn more about advanced settings, see Section 13.23, "Advanced settings in the job wizard" . When you are done entering the advanced settings or if it is not required, click Next . Schedule time for the job. To execute the job immediately, keep the pre-selected Immediate execution . To execute the job at a future time, select Future execution . To execute the job on a regular basis, select Recurring execution . Optional: If you selected future or recurring execution, select the Query type , otherwise click Next . Static query means that the job executes on the exact list of hosts that you provided. Dynamic query means that the list of hosts is evaluated just before the job is executed. If you entered the list of hosts based on some filter, the results can be different from when you first used that filter. Click Next after you have selected the query type. Optional: If you selected future or recurring execution, provide additional details: For Future execution , enter the Starts at date and time. You also have the option to select the Starts before date and time. If the job cannot start before that time, it will be canceled. For Recurring execution , select the start date and time, frequency, and condition for ending the recurring job. You can choose the recurrence to never end, end at a certain time, or end after a given number of repetitions. You can also add Purpose - a special label for tracking the job. There can only be one active job with a given purpose at a time. Click Next after you have entered the required information. Review job details. You have the option to return to any part of the job wizard and edit the information. Click Submit to schedule the job for execution. C.6. Including power actions in templates This example shows how to set up a job template for performing power actions, such as reboot. This procedure prevents Satellite from interpreting the disconnect exception upon reboot as an error, and consequently, remote execution of the job works correctly. Create a new template as described in Setting up Job Templates , and specify the following string in the template editor: <%= render_template("Power Action - SSH Default", :action => "restart") %>
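Before running the restorecon template from Example restorecon template against many hosts, it can help to preview its effect on a single host. The following optional sketch is not part of the appendix; it uses only standard restorecon options (-R recursive, -v verbose, -F force, -n dry run) and the /home directory used in the examples above.

# Preview which file contexts restorecon would change, without modifying anything.
restorecon -RvFn /home

# If the preview looks correct, apply the change. This is the command that the
# rendered "Run Command - restorecon" template executes on each target host.
restorecon -RvF /home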
[ "<%= render_template 'Package Action - SSH Default', :action => 'install', :package => 'nginx' %> <%= render_template 'Service Action - SSH Default', :action => 'start', :service_name => 'nginx' %>", "<%= render_template 'Package Action - SSH Default', :action => 'install', :package => input(\"package\") %>", "restorecon -RvF <%= input(\"directory\") %>", "<%= render_template(\"Run Command - restorecon\", :directory => \"/home\") %>", "<%= render_template(\"Power Action - SSH Default\", :action => \"restart\") %>" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_hosts/job_template_examples_and_extensions_managing-hosts
Chapter 13. Setting Automated Jobs
Chapter 13. Setting Automated Jobs The Certificate System provides a customizable Job Scheduler that supports various mechanisms for scheduling cron jobs. This chapter explains how to configure Certificate System to use specific job plug-in modules for accomplishing jobs. 13.1. About Automated Jobs The Certificate Manager Console includes a Job Scheduler option that can execute specific jobs at specified times. The Job Scheduler is similar to a traditional Unix cron daemon; it takes registered cron jobs and executes them at a pre-configured date and time. If configured, the scheduler checks at specified intervals for jobs waiting to be executed; if the specified execution time has arrived, the scheduler initiates the job automatically. Jobs are implemented as Java TM classes, which are then registered with Certificate System as plug-in modules. One implementation of a job module can be used to configure multiple instances of the job. Each instance must have a unique name (an alphanumeric string with no spaces) and can contain different input parameter values to apply to different jobs. 13.1.1. Setting up Automated Jobs The automated jobs feature is set up by doing the following: Enabling and configuring the Job Scheduler; see Section 13.2, "Setting up the Job Scheduler" for more information. Enabling and configuring the job modules and setting preferences for those job modules; see Section 13.3, "Setting up Specific Jobs" for more information. Customizing the email notification messages sent with these jobs by changing the templates associated with the types of notification. The message contents are composed of both plain text messages and HTML messages; the appearance is modified by changing the HTML templates. See Section 12.3.1, "Customizing CA Notification Messages" for more information. 13.1.2. Types of Automated Jobs The types of automated jobs are RenewalNotificationJob , RequestInQueueJob , PublishCertsJob , and UnpublishExpiredJob . One instance of each job type is created when Certificate System is deployed. 13.1.2.1. certRenewalNotifier (RenewalNotificationJob) The certRenewalNotifier job checks for certificates that are about to expire in the internal database. When it finds one, it automatically emails the certificate's owner and continues sending email reminders for a configured period of time or until the certificate is replaced. The job collects a summary of all renewal notifications and mails the summary to the configured agents or administrators. The job determines the email address to send the notification using an email resolver. By default, the email address is found in the certificate itself or in the certificate's associated enrollment request. 13.1.2.2. requestInQueueNotifier (RequestInQueueJob) The requestInQueueNotifier job checks the status of the request queue at pre-configured time intervals. If any deferred enrollment requests are waiting in the queue, the job constructs an email message summarizing its findings and sends it to the specified agents. 13.1.2.3. publishCerts (PublishCertsJob) The publishCerts job checks for any new certificates that have been added to the publishing directory that have not yet been published. When these new certificates are added, they are automatically published to an LDAP directory or file by the publishCerts job. Note Most of the time, publishers immediately publish any certificates that are created matching their rules to the appropriate publishing directory. 
If a certificate is successfully published when it is created, then the publishCerts job will not re-publish the certificate. Therefore, the new certificate will not be listed in the job summary report, since the summary only lists certificates published by the publishCerts job. 13.1.2.4. unpublishExpiredCerts (UnpublishExpiredJob) Expired certificates are not automatically removed from the publishing directory. If a Certificate Manager is configured to publish certificates to an LDAP directory, over time the directory will contain expired certificates. The unpublishExpiredCerts job checks for certificates that have expired and are still marked as published in the internal database at the configured time interval. The job connects to the publishing directory and deletes those certificates; it then marks those certificates as unpublished in the internal database. The job collects a summary of expired certificates that it deleted and mails the summary to the agents or administrators specified by the configuration. Note This job automates removing expired certificates from the directory. Expired certificates can also be removed manually; for more information on this, see Section 9.12, "Updating Certificates and CRLs in a Directory" .
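As a side note, the interval-based expiry check that certRenewalNotifier performs against the internal database can be illustrated with a small standalone script. The following sketch is not how Certificate System implements the job and is only useful for spot-checking exported PEM certificates; the /etc/pki/exported-certs directory and the 30-day window are arbitrary example values.

#!/bin/bash
# Standalone illustration of the kind of check certRenewalNotifier performs:
# report certificates that expire within the next 30 days.
CERT_DIR=/etc/pki/exported-certs
WINDOW_SECONDS=$((30 * 24 * 3600))

for cert in "${CERT_DIR}"/*.pem; do
    [ -e "${cert}" ] || continue   # skip if the glob matched nothing
    # openssl exits non-zero when the certificate expires within the window.
    if ! openssl x509 -checkend "${WINDOW_SECONDS}" -noout -in "${cert}"; then
        subject=$(openssl x509 -subject -noout -in "${cert}")
        echo "Expiring within 30 days: ${cert} (${subject})"
    fi
done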
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Automated_Jobs
Chapter 1. Searching for vulnerability information
Chapter 1. Searching for vulnerability information You can use the Trusted Profile Analyzer service to find existing Software Bill of Materials (SBOM), Vulnerability Exploitability eXchange (VEX) documents, and common vulnerability and exposure (CVE) information for Red Hat products and packages. Important Trusted Profile Analyzer managed service provides only information for the following Red Hat products: Red Hat Enterprise Linux Universal Base Image (UBI) versions 8 and 9. The Java Quarkus library. Prerequisites A Red Hat user account to access the Red Hat Hybrid Cloud Console . Procedure Open a web browser. Go to the Application and Data Services home page on the Hybrid Cloud Console. If prompted, log in to the Hybrid Cloud Console with your credentials. On the navigation menu, click Trusted Profile Analyzer . On the Trusted Profile Analyzer home page, click the Subscribe and launch button. A new web browser window opens to the Trusted Profile Analyzer console home page. Note By subscribing, your registered email address goes onto the product mailing list, so you can receive information about new product developments. On the Home page, in the search field, enter your search criteria and click Search . On the search results page, you can filter the results by Red Hat products, download SBOM files, view package vulnerability information, and view any possible remediations. Note The number shown on the Advisories tab is how many times your search criteria made a match. On the Products and containers tab, the number in the Product advisories column shows the number of advisories for that specific product.
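After downloading an SBOM file from the search results, you can inspect it locally from the command line. The following optional sketch assumes the downloaded file is in CycloneDX JSON format (a components array with name, version, and purl fields) and was saved as ubi9-sbom.json, which is an example file name; adjust the queries if your SBOM uses SPDX instead.

# List the packages recorded in a downloaded CycloneDX SBOM.
jq -r '.components[] | "\(.name) \(.version // "n/a") \(.purl // "n/a")"' ubi9-sbom.json

# Count how many components the SBOM describes.
jq '.components | length' ubi9-sbom.json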
null
https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1/html/quick_start_guide/searching-for-vulnerability-information_qsg
Chapter 1. What is Red Hat OpenShift Cluster Manager?
Chapter 1. What is Red Hat OpenShift Cluster Manager? Red Hat OpenShift Cluster Manager is a managed service on the Red Hat Hybrid Cloud Console where you can create, operate, and upgrade your Red Hat OpenShift 4 clusters. OpenShift Cluster Manager provides links and steps to install Red Hat OpenShift Container Platform clusters and tools to create Red Hat OpenShift Dedicated and Red Hat OpenShift Service on AWS (ROSA) clusters. From OpenShift Cluster Manager, you can work with all of your organization's OpenShift Container Platform and OpenShift cloud services clusters from a single dashboard. You can gain insights and recommendations for managing your clusters, and you can complete the following tasks: View high level cluster information. Create new clusters. Configure Red Hat subscription services on your clusters. Manage your clusters using other services on the Red Hat Hybrid Cloud Console . Monitor clusters for problems. Access the OpenShift cluster admin console Find information about the latest OpenShift versions. Download tools to use with your clusters. Get support for your clusters and manage your Red Hat support cases. Additional resources OpenShift Container Platform documentation OpenShift Dedicated documentation ROSA documentation Subscriptions documentation Red Hat Insights for OpenShift (Remote health monitoring with connected clusters) documentation 1.1. Getting started with OpenShift Cluster Manager You can use Red Hat OpenShift Cluster Manager to work with your Red Hat OpenShift cloud services and Red Hat OpenShift Container Platform clusters. With OpenShift Cluster Manager you can create, subscribe, and manage different types of OpenShift clusters from a single user interface. Prerequisites You have a Red Hat account. Procedure Enter the following URL in a web browser: Note For details about web browser requirements, see the Browser Support link at the bottom of the Red Hat Hybrid Cloud Console landing page. 1.2. What is the difference between OpenShift Container Platform and OpenShift Dedicated? Red Hat OpenShift Container Platform clusters are self-managed and run on-premises or on a cloud provider. OpenShift Dedicated clusters are managed by Red Hat and run on a cloud provider. OpenShift Container Platform is a self-managed hybrid cloud platform. With OpenShift Container Platform, you can create your clusters on any private or public cloud or bare metal, using your own infrastructure. Red Hat OpenShift Dedicated is a fully managed service for Red Hat OpenShift, which uses Amazon Web Services (AWS) or Google Cloud Platform (GCP). With OpenShift Dedicated, you can run your clusters on the Red Hat managed cloud account, or on your own AWS or GCP cloud provider account. You can use OpenShift Cluster Manager to create and manage your OpenShift Container Platform and OpenShift Dedicated clusters from one dashboard. Additional resources See the Red Hat OpenShift product page to learn more about OpenShift products. See OpenShift deployment methods for more information about the different types of OpenShift deployments. 1.3. OpenShift Cluster Manager with OpenShift Container Platform You can use the OpenShift Cluster Manager user interface to create OpenShift Container Platform clusters and subscribe the clusters to Red Hat for support. OpenShift Cluster Manager provides the installer and instructions to create self-managed clusters on each supported environment for OpenShift Container Platform. 
You can then view and manage your OpenShift Container Platform clusters in OpenShift Cluster Manager, or log in to the OpenShift Container Platform web console to access and configure your clusters. You can find information about the latest OpenShift Container Platform release versions available, as well as update channels for your clusters from the Releases menu in OpenShift Cluster Manager. For insights about your clusters, use the integrated services within the Red Hat Hybrid Cloud Console such as Red Hat Insights Advisor, Subscriptions, and Cost Management. Additional resources For more information about using OpenShift Container Platform, see the OpenShift Container Platform documentation . 1.4. OpenShift Cluster Manager with OpenShift Dedicated You can use the OpenShift Cluster Manager user interface to create, view, and manage your OpenShift Dedicated clusters. OpenShift Dedicated clusters are managed by Red Hat and are known as managed clusters . You can create OpenShift Dedicated clusters on AWS or Google Cloud Platform, using either the Red Hat managed cloud account or your own cloud provider account. When you use your own cloud provider account, this billing model is referred to as Customer Cloud Subscription (CCS) in OpenShift Cluster Manager. Additional resources For more information about using OpenShift Dedicated and accessing your clusters, see the OpenShift Dedicated documentation . 1.5. OpenShift Cluster Manager with Red Hat OpenShift Service on AWS You can use the OpenShift Cluster Manager user interface to create, view and manage your Red Hat OpenShift Service on AWS (ROSA) clusters. ROSA is a fully-managed OpenShift service, jointly managed and supported by Red Hat and Amazon Web Services (AWS). This service is procured directly from your AWS account. ROSA pricing is consumption based and is billed directly to your AWS account. You can quickly deploy ROSA from OpenShift Cluster Manager or the ROSA CLI. In OpenShift Cluster Manager, you can manage your ROSA cluster and any add-on services for the cluster. Additional resources For more information about working with ROSA clusters, see the ROSA documentation . 1.6. OpenShift Cluster Manager and the Red Hat Hybrid Cloud Console OpenShift Cluster Manager is integrated with the following services hosted on the Red Hat Hybrid Cloud Console , which you can use to gain deeper understanding and manage your OpenShift clusters Insights Advisor for OpenShift Container Platform monitors the health of your OpenShift Container Platform clusters and helps you identify, prioritize, and resolve risks to service availability, fault tolerance, performance, and security. Subscriptions is a service that you can use to monitor your usage and subscription information for your OpenShift clusters. Cost management aggregates and displays the costs of your OpenShift deployment and infrastructure across bare-metal servers, virtual machines, private clouds, and public cloud infrastructure, including Amazon Web Services and Microsoft Azure. You need a Red Hat account to access OpenShift Cluster Manager and the Red Hat Hybrid Cloud Console . You can then deploy an OpenShift cluster in OpenShift Cluster Manager. For greater security, you can use two-factor authentication (2FA) to access OpenShift Cluster Manager and the Red Hat Hybrid Cloud Console . You must enable 2FA in your Red Hat account to use 2FA to access OpenShift Cluster Manager. 
Organization Administrators can enable 2FA for all users in their organization, or individual users can configure 2FA for their own Red Hat account. To enable 2FA in your Red Hat account or learn more, see the Using Two-Factor Authentication guide. Additional resources See Remote health monitoring with connected clusters for information about Red Hat Insights Advisor for OpenShift Container Platform. See the Subscriptions documentation to learn more about using the subscriptions service in the Red Hat Hybrid Cloud Console . See the Cost management documentation to learn more about simplifying the management of your OpenShift costs. See the Red Hat Hybrid Cloud Console documentation for more information about using the Red Hat Hybrid Cloud Console and its services. Sign up for a free Red Hat account on the Create a Red Hat Login page. 1.7. Add-on services with your OpenShift cloud services clusters Add-ons are additional services that you can install to your existing Red Hat OpenShift Dedicated and Red Hat OpenShift Service on AWS (ROSA) clusters to enhance cluster capabilities. You can install and manage add-on services from a cluster's Add-ons tab in OpenShift Cluster Manager . Depending on the add-on service, you might need additional Red Hat subscriptions or quota to use it. See the documentation for the add-on to learn more about the requirements and for instructions for using the add-on. To learn about add-ons for OpenShift Dedicated, see Add-on services available for OpenShift Dedicated . To learn about add-ons for ROSA, see Add-on services available for Red Hat OpenShift Service on AWS . Additional resources See Managing your add-on services for more information about managing your add-ons. 1.8. OpenShift cluster notifications By default, you will receive email notifications about OpenShift cluster events. Note Not all options are available for all services. You cannot disable OpenShift notifications on this page because your cluster is managed by Red Hat. These notifications are the primary way that Red Hat Site Reliability Engineering (SRE) will contact you to inform you about cluster problems and request actions you must take to resolve them. Cluster owners cannot unsubscribe from email notifications. If you are not a cluster owner and you do not want to receive notification emails, you can ask your cluster owner or administrator to remove you from the list of cluster notification contacts as described in Removing notification contacts from your cluster . In addition to email notifications, Red Hat Hybrid Cloud Console Organization Administrators, Cloud Administrators, or users with Notifications Administrator permissions can configure cluster event notifications through third-party products such as Slack, Google Chat, and Microsoft Teams. For more information about Hybrid Cloud Console notifications integrations, see Integrating the Red Hat Hybrid Cloud Console with third-party applications .
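As a supplement to the web console workflows described in this chapter, the ROSA clusters mentioned in the Red Hat OpenShift Service on AWS section can also be created and inspected from the rosa command-line interface. The following is a minimal sketch of such a session; the cluster name my-rosa-cluster is an example, the token placeholder must come from your Red Hat account, and the exact flags should be confirmed against the ROSA CLI documentation for your version.

# Authenticate the CLI with the offline token from the OpenShift Cluster Manager token page.
rosa login --token="<api_token>"

# Confirm that the linked AWS account has the required quota.
rosa verify quota

# Create a cluster, then list and describe the clusters visible to your account.
rosa create cluster --cluster-name my-rosa-cluster
rosa list clusters
rosa describe cluster --cluster my-rosa-cluster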
[ "https://console.redhat.com/openshift/" ]
https://docs.redhat.com/en/documentation/openshift_cluster_manager/1-latest/html/managing_clusters/assembly-what-is-ocm
Chapter 12. VLAN-aware instances
Chapter 12. VLAN-aware instances 12.1. Overview of VLAN-aware instances Instances can send and receive VLAN-tagged traffic over a single vNIC. This is particularly useful for NFV applications (VNFs) that expect VLAN-tagged traffic, allowing a single vNIC to serve multiple customers or services. For example, the project data network can use VLANs, or tunneling (VXLAN/GRE) segmentation, while the instances see the traffic tagged with VLAN IDs. As a result, network packets are tagged just before they are injected to the instance and do not need to be tagged throughout the entire network. To implement VLAN-tagged traffic, create a parent port and attach the new port to an existing neutron network. When you attach the new port, OpenStack Networking adds a trunk connection to the parent port you created. Next, create subports. These subports connect VLANs to instances, which allow connectivity to the trunk. Within the instance operating system, you must also create a sub-interface that tags traffic for the VLAN associated with the subport. 12.2. Reviewing the trunk plug-in During a Red Hat OpenStack deployment, the trunk plug-in is enabled by default. You can review the configuration on the controller nodes: On the controller node, confirm that the trunk plug-in is enabled in the /var/lib/config-data/neutron/etc/neutron/neutron.conf file: 12.3. Creating a trunk connection Identify the network that requires the trunk port connection. This would be the network that will contain the instance that requires access to the trunked VLANs. In this example, this is the public network: Create the parent trunk port, and attach it to the network that the instance connects to. In this example, create a neutron port named parent-trunk-port on the public network. This trunk is the parent port, as you can use it to create subports . Create a trunk using the port that you created in step 2. In this example the trunk is named parent-trunk . View the trunk connection: View the details of the trunk connection: 12.4. Adding subports to the trunk Create a neutron port. This port is a subport connection to the trunk. You must also specify the MAC address that you assigned to the parent port: Note If you receive the error HttpException: Conflict , confirm that you are creating the subport on a different network to the one that has the parent trunk port. This example uses the public network for the parent trunk port, and private for the subport. Associate the port with the trunk ( parent-trunk ), and specify the VLAN ID ( 55 ): 12.5. Configuring an instance to use a trunk You must configure the instance operating system to use the MAC address that neutron assigned to the subport. You can also configure the subport to use a specific MAC address during the subport creation step. Review the configuration of your network trunk: Create an instance that uses the parent port-id as its vNIC: 12.6. Understanding trunk states ACTIVE : The trunk is working as expected and there are no current requests. DOWN : The virtual and physical resources for the trunk are not in sync. This can be a temporary state during negotiation. BUILD : There has been a request and the resources are being provisioned. After successful completion the trunk returns to ACTIVE . DEGRADED : The provisioning request did not complete, so the trunk has only been partially provisioned. It is recommended to remove the subports and try again. ERROR : The provisioning request was unsuccessful.
Remove the resource that caused the error to return the trunk to a healthier state. Do not add more subports while in the ERROR state, as this can cause more issues.
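As noted in the overview, you must also create a sub-interface inside the instance operating system that tags traffic for the VLAN associated with the subport. The exact steps depend on the guest image; the following is a sketch for a Linux guest using iproute2, where eth0 is an assumed interface name and VLAN ID 55 and the 10.0.0.11/24 address match the subport example in this chapter. Substitute the values for your environment and make the configuration persistent with your distribution's network tooling.

# Inside the instance: create a VLAN sub-interface that tags traffic with ID 55,
# matching the segmentation ID of the subport attached to the trunk.
sudo ip link add link eth0 name eth0.55 type vlan id 55

# Assign the address that neutron allocated to the subport, then bring it up.
sudo ip addr add 10.0.0.11/24 dev eth0.55
sudo ip link set dev eth0.55 up

# Verify that the VLAN interface exists and is tagged correctly.
ip -d link show eth0.55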
[ "service_plugins=router,qos,trunk", "openstack network list +--------------------------------------+---------+--------------------------------------+ | ID | Name | Subnets | +--------------------------------------+---------+--------------------------------------+ | 82845092-4701-4004-add7-838837837621 | private | 434c7982-cd96-4c41-a8c9-b93adbdcb197 | | 8d8bc6d6-5b28-4e00-b99e-157516ff0050 | public | 3fd811b4-c104-44b5-8ff8-7a86af5e332c | +--------------------------------------+---------+--------------------------------------+", "openstack port create --network public parent-trunk-port +-----------------------+-----------------------------------------------------------------------------+ | Field | Value | +-----------------------+-----------------------------------------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2016-10-20T02:02:33Z | | description | | | device_id | | | device_owner | | | extra_dhcp_opts | | | fixed_ips | ip_address='172.24.4.230', subnet_id='dc608964-9af3-4fed-9f06-6d3844fb9b9b' | | headers | | | id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | mac_address | fa:16:3e:33:c4:75 | | name | parent-trunk-port | | network_id | 871a6bd8-4193-45d7-a300-dcb2420e7cc3 | | project_id | 745d33000ac74d30a77539f8920555e7 | | project_id | 745d33000ac74d30a77539f8920555e7 | | revision_number | 4 | | security_groups | 59e2af18-93c6-4201-861b-19a8a8b79b23 | | status | DOWN | | updated_at | 2016-10-20T02:02:33Z | +-----------------------+-----------------------------------------------------------------------------+", "openstack network trunk create --parent-port parent-trunk-port parent-trunk +-----------------+--------------------------------------+ | Field | Value | +-----------------+--------------------------------------+ | admin_state_up | UP | | created_at | 2016-10-20T02:05:17Z | | description | | | id | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 | | name | parent-trunk | | port_id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | revision_number | 1 | | status | DOWN | | sub_ports | | | tenant_id | 745d33000ac74d30a77539f8920555e7 | | updated_at | 2016-10-20T02:05:17Z | +-----------------+--------------------------------------+", "openstack network trunk list +--------------------------------------+--------------+--------------------------------------+-------------+ | ID | Name | Parent Port | Description | +--------------------------------------+--------------+--------------------------------------+-------------+ | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 | parent-trunk | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | +--------------------------------------+--------------+--------------------------------------+-------------+", "openstack network trunk show parent-trunk +-----------------+--------------------------------------+ | Field | Value | +-----------------+--------------------------------------+ | admin_state_up | UP | | created_at | 2016-10-20T02:05:17Z | | description | | | id | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 | | name | parent-trunk | | port_id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | revision_number | 1 | | status | DOWN | | sub_ports | | | tenant_id | 745d33000ac74d30a77539f8920555e7 | | updated_at | 2016-10-20T02:05:17Z | +-----------------+--------------------------------------+", "openstack port create --network private --mac-address fa:16:3e:33:c4:75 subport-trunk-port 
+-----------------------+--------------------------------------------------------------------------+ | Field | Value | +-----------------------+--------------------------------------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2016-10-20T02:08:14Z | | description | | | device_id | | | device_owner | | | extra_dhcp_opts | | | fixed_ips | ip_address='10.0.0.11', subnet_id='1a299780-56df-4c0b-a4c0-c5a612cef2e8' | | headers | | | id | 479d742e-dd00-4c24-8dd6-b7297fab3ee9 | | mac_address | fa:16:3e:33:c4:75 | | name | subport-trunk-port | | network_id | 3fe6b758-8613-4b17-901e-9ba30a7c4b51 | | project_id | 745d33000ac74d30a77539f8920555e7 | | project_id | 745d33000ac74d30a77539f8920555e7 | | revision_number | 4 | | security_groups | 59e2af18-93c6-4201-861b-19a8a8b79b23 | | status | DOWN | | updated_at | 2016-10-20T02:08:15Z | +-----------------------+--------------------------------------------------------------------------+", "openstack network trunk set --subport port=subport-trunk-port,segmentation-type=vlan,segmentation-id=55 parent-trunk", "openstack network trunk list +--------------------------------------+--------------+--------------------------------------+-------------+ | ID | Name | Parent Port | Description | +--------------------------------------+--------------+--------------------------------------+-------------+ | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 | parent-trunk | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | +--------------------------------------+--------------+--------------------------------------+-------------+ openstack network trunk show parent-trunk +-----------------+------------------------------------------------------------------------------------------------+ | Field | Value | +-----------------+------------------------------------------------------------------------------------------------+ | admin_state_up | UP | | created_at | 2016-10-20T02:05:17Z | | description | | | id | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 | | name | parent-trunk | | port_id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 | | revision_number | 2 | | status | DOWN | | sub_ports | port_id='479d742e-dd00-4c24-8dd6-b7297fab3ee9', segmentation_id='55', segmentation_type='vlan' | | tenant_id | 745d33000ac74d30a77539f8920555e7 | | updated_at | 2016-10-20T02:10:06Z | +-----------------+------------------------------------------------------------------------------------------------+", "nova boot --image cirros --flavor m1.tiny testInstance --security-groups default --key-name sshaccess --nic port-id=20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 +--------------------------------------+-----------------------------------------------+ | Property | Value | +--------------------------------------+-----------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hostname | testinstance | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | | | OS-EXT-SRV-ATTR:kernel_id | | | OS-EXT-SRV-ATTR:launch_index | 0 | | OS-EXT-SRV-ATTR:ramdisk_id | | | OS-EXT-SRV-ATTR:reservation_id | r-juqco0el | | OS-EXT-SRV-ATTR:root_device_name | - | | OS-EXT-SRV-ATTR:user_data | - | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | 
OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | uMyL8PnZRBwQ | | config_drive | | | created | 2016-10-20T03:02:51Z | | description | - | | flavor | m1.tiny (1) | | hostId | | | host_status | | | id | 88b7aede-1305-4d91-a180-67e7eac8b70d | | image | cirros (568372f7-15df-4e61-a05f-10954f79a3c4) | | key_name | sshaccess | | locked | False | | metadata | {} | | name | testInstance | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | security_groups | default | | status | BUILD | | tags | [] | | tenant_id | 745d33000ac74d30a77539f8920555e7 | | updated | 2016-10-20T03:02:51Z | | user_id | 8c4aea738d774967b4ef388eb41fef5e | +--------------------------------------+-----------------------------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/networking_guide/sec-trunk-vlan
Chapter 10. Adding more RHEL compute machines to an OpenShift Container Platform cluster
Chapter 10. Adding more RHEL compute machines to an OpenShift Container Platform cluster If your OpenShift Container Platform cluster already includes Red Hat Enterprise Linux (RHEL) compute machines, which are also known as worker machines, you can add more RHEL compute machines to it. 10.1. About adding RHEL compute nodes to a cluster In OpenShift Container Platform 4.12, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines in your cluster if you use a user-provisioned or installer-provisioned infrastructure installation on the x86_64 architecture. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane machines in your cluster. If you choose to use RHEL compute machines in your cluster, you are responsible for all operating system life cycle management and maintenance. You must perform system updates, apply patches, and complete all other required tasks. For installer-provisioned infrastructure clusters, you must manually add RHEL compute machines because automatic scaling in installer-provisioned infrastructure clusters adds Red Hat Enterprise Linux CoreOS (RHCOS) compute machines by default. Important Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster. Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines. 10.2. System requirements for RHEL compute nodes The Red Hat Enterprise Linux (RHEL) compute machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements: You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information. Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10% for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity. Each system must meet the following hardware requirements: Physical or virtual system, or an instance running on a public or private IaaS. Base OS: RHEL 8.6 and later with "Minimal" installation option. Important Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is not supported. If you have RHEL 7 compute machines that were previously supported in a past OpenShift Container Platform version, you cannot upgrade them to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. See the "Deleting nodes" section for more information. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. If you deployed OpenShift Container Platform in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See Installing a RHEL 8 system with FIPS mode enabled in the RHEL 8 documentation. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 
The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. NetworkManager 1.0 or later. 1 vCPU. Minimum 8 GB RAM. Minimum 15 GB hard disk space for the file system containing /var/ . Minimum 1 GB hard disk space for the file system containing /usr/local/bin/ . Minimum 1 GB hard disk space for the file system containing its temporary directory. The temporary system directory is determined according to the rules defined in the tempfile module in the Python standard library. Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=true attribute must be set. Each system must be able to access the cluster's API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow system access to the cluster's API service endpoints. Additional resources Deleting nodes 10.2.1. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 10.3. Preparing an image for your cloud Amazon Machine Images (AMI) are required since various image formats cannot be used directly by AWS. You may use the AMIs that Red Hat has provided, or you can manually import your own images. The AMI must exist before the EC2 instance can be provisioned. You must list the AMI IDs so that the correct RHEL version needed for the compute machines is selected. 10.3.1. Listing latest available RHEL images on AWS AMI IDs correspond to native boot images for AWS. Because an AMI must exist before the EC2 instance is provisioned, you will need to know the AMI ID before configuration. The AWS Command Line Interface (CLI) is used to list the available Red Hat Enterprise Linux (RHEL) image IDs. Prerequisites You have installed the AWS CLI. Procedure Use this command to list RHEL 8.4 Amazon Machine Images (AMI): USD aws ec2 describe-images --owners 309956199498 \ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \ 2 --filters "Name=name,Values=RHEL-8.4*" \ 3 --region us-east-1 \ 4 --output table 5 1 The --owners command option shows Red Hat images based on the account ID 309956199498 . Important This account ID is required to display AMI IDs for images that are provided by Red Hat. 2 The --query command option sets how the images are sorted with the parameters 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' . In this case, the images are sorted by the creation date, and the table is structured to show the creation date, the name of the image, and the AMI IDs. 3 The --filter command option sets which version of RHEL is shown. In this example, since the filter is set by "Name=name,Values=RHEL-8.4*" , then RHEL 8.4 AMIs are shown. 
4 The --region command option sets where the region where an AMI is stored. 5 The --output command option sets how the results are displayed. Note When creating a RHEL compute machine for AWS, ensure that the AMI is RHEL 8.4 or 8.5. Example output ------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.4.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.4.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.4.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+ Additional resources You may also manually import RHEL images to AWS . 10.4. Preparing a RHEL compute node Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories. Ensure NetworkManager is enabled and configured to control all interfaces on the host. On each host, register with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Disable all yum repositories: Disable all the enabled RHSM repositories: # subscription-manager repos --disable="*" List the remaining yum repositories and note their names under repo id , if any: # yum repolist Use yum-config-manager to disable the remaining yum repositories: # yum-config-manager --disable <repo_id> Alternatively, disable all repositories: # yum-config-manager --disable \* Note that this might take a few minutes if you have a large number of available repositories Enable only the repositories required by OpenShift Container Platform 4.12: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.12-for-rhel-8-x86_64-rpms" \ --enable="fast-datapath-for-rhel-8-x86_64-rpms" Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker. 10.5. Attaching the role permissions to RHEL instance in AWS Using the Amazon IAM console in your browser, you may select the needed roles and assign them to a worker node. Procedure From the AWS IAM console, create your desired IAM role . Attach the IAM role to the desired worker node. Additional resources See Required AWS permissions for IAM roles . 10.6. Tagging a RHEL worker node as owned or shared A cluster uses the value of the kubernetes.io/cluster/<clusterid>,Value=(owned|shared) tag to determine the lifetime of the resources related to the AWS cluster. 
The owned tag value should be added if the resource should be destroyed as part of destroying the cluster. The shared tag value should be added if the resource continues to exist after the cluster has been destroyed. This tagging denotes that the cluster uses this resource, but there is a separate owner for the resource. Procedure With RHEL compute machines, the RHEL worker instance must be tagged with kubernetes.io/cluster/<clusterid>=owned or kubernetes.io/cluster/<cluster-id>=shared . Note Do not tag all existing security groups with the kubernetes.io/cluster/<name>,Value=<clusterid> tag, or the Elastic Load Balancing (ELB) will not be able to create a load balancer. 10.7. Adding more RHEL compute machines to your cluster You can add more compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.12 cluster. Prerequisites Your OpenShift Container Platform cluster already contains RHEL compute nodes. The hosts file that you used to add the first RHEL compute machines to your cluster is on the machine that you use to run the playbook. The machine that you run the playbook on must be able to access all of the RHEL hosts. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN. The kubeconfig file for the cluster and the installation program that you used to install the cluster are on the machine that you use to run the playbook. You must prepare the RHEL hosts for installation. Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts. If you use SSH key-based authentication, you must manage the key with an SSH agent. Install the OpenShift CLI ( oc ) on the machine that you run the playbook on. Procedure Open the Ansible inventory file at /<path>/inventory/hosts that defines your compute machine hosts and required variables. Rename the [new_workers] section of the file to [workers] . Add a [new_workers] section to the file and define the fully-qualified domain names for each new host. The file resembles the following example: In this example, the mycluster-rhel8-0.example.com and mycluster-rhel8-1.example.com machines are in the cluster and you add the mycluster-rhel8-2.example.com and mycluster-rhel8-3.example.com machines. Navigate to the Ansible playbook directory: USD cd /usr/share/ansible/openshift-ansible Run the scaleup playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. 10.8. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 10.9. Required parameters for the Ansible hosts file You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster. Parameter Description Values ansible_user The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. A user name on the system. The default value is root . ansible_become If the value of ansible_user is not root, you must set ansible_become to True , and the user that you specify as the ansible_user must be configured for passwordless sudo access. True . If the value is not True , do not specify or define this parameter. openshift_kubeconfig_path Specifies the path and file name of the kubeconfig file for your cluster on the local machine. The path and name of the configuration file.
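To illustrate how these parameters fit together, the following is a minimal sketch of the [all:vars] block of an inventory file. The user name ansible and the kubeconfig path shown here are placeholder assumptions, not required values:

[all:vars]
# SSH user that Ansible connects as on the RHEL hosts (placeholder value)
ansible_user=ansible
# Required because ansible_user is not root; the user needs passwordless sudo
ansible_become=True
# Location of the cluster kubeconfig on the machine that runs the playbook
openshift_kubeconfig_path="~/.kube/config"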
[ "aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.4*\" \\ 3 --region us-east-1 \\ 4 --output table 5", "------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.4.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.4.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.4.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --disable=\"*\"", "yum repolist", "yum-config-manager --disable <repo_id>", "yum-config-manager --disable \\*", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.12-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"", "systemctl disable --now firewalld.service", "[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com [new_workers] mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/machine_management/more-rhel-compute
Appendix D. The sysconfig Directory
Appendix D. The sysconfig Directory This appendix outlines some of the files and directories found in the /etc/sysconfig/ directory, their function, and their contents. The information in this appendix is not intended to be complete, as many of these files have a variety of options that are only used in very specific or rare circumstances. Note The actual content of your /etc/sysconfig/ directory depends on the programs you have installed on your machine. To find the name of the package the configuration file belongs to, type the following at a shell prompt: See Section 8.2.4, "Installing Packages" for more information on how to install new packages in Red Hat Enterprise Linux. D.1. Files in the /etc/sysconfig/ Directory The following sections offer descriptions of files normally found in the /etc/sysconfig/ directory. D.1.1. /etc/sysconfig/arpwatch The /etc/sysconfig/arpwatch file is used to pass arguments to the arpwatch daemon at boot time. By default, it contains the following option: OPTIONS= value Additional options to be passed to the arpwatch daemon. For example: D.1.2. /etc/sysconfig/authconfig The /etc/sysconfig/authconfig file sets the authorization to be used on the host. By default, it contains the following options: USEMKHOMEDIR= boolean A Boolean to enable ( yes ) or disable ( no ) creating a home directory for a user on the first login. For example: USEPAMACCESS= boolean A Boolean to enable ( yes ) or disable ( no ) the PAM authentication. For example: USESSSDAUTH= boolean A Boolean to enable ( yes ) or disable ( no ) the SSSD authentication. For example: USESHADOW= boolean A Boolean to enable ( yes ) or disable ( no ) shadow passwords. For example: USEWINBIND= boolean A Boolean to enable ( yes ) or disable ( no ) using Winbind for user account configuration. For example: USEDB= boolean A Boolean to enable ( yes ) or disable ( no ) the FAS authentication. For example: USEFPRINTD= boolean A Boolean to enable ( yes ) or disable ( no ) the fingerprint authentication. For example: FORCESMARTCARD= boolean A Boolean to enable ( yes ) or disable ( no ) enforcing the smart card authentication. For example: PASSWDALGORITHM= value The password algorithm. The value can be bigcrypt , descrypt , md5 , sha256 , or sha512 . For example: USELDAPAUTH= boolean A Boolean to enable ( yes ) or disable ( no ) the LDAP authentication. For example: USELOCAUTHORIZE= boolean A Boolean to enable ( yes ) or disable ( no ) the local authorization for local users. For example: USECRACKLIB= boolean A Boolean to enable ( yes ) or disable ( no ) using the CrackLib. For example: USEWINBINDAUTH= boolean A Boolean to enable ( yes ) or disable ( no ) the Winbind authentication. For example: USESMARTCARD= boolean A Boolean to enable ( yes ) or disable ( no ) the smart card authentication. For example: USELDAP= boolean A Boolean to enable ( yes ) or disable ( no ) using LDAP for user account configuration. For example: USENIS= boolean A Boolean to enable ( yes ) or disable ( no ) using NIS for user account configuration. For example: USEKERBEROS= boolean A Boolean to enable ( yes ) or disable ( no ) the Kerberos authentication. For example: USESYSNETAUTH= boolean A Boolean to enable ( yes ) or disable ( no ) authenticating system accounts with network services. For example: USESMBAUTH= boolean A Boolean to enable ( yes ) or disable ( no ) the SMB authentication. For example: USESSSD= boolean A Boolean to enable ( yes ) or disable ( no ) using SSSD for obtaining user information. 
For example: USEHESIOD= boolean A Boolean to enable ( yes ) or disable ( no ) using the Hesoid name service. For example: See Chapter 13, Configuring Authentication for more information on this topic. D.1.3. /etc/sysconfig/autofs The /etc/sysconfig/autofs file defines custom options for the automatic mounting of devices. This file controls the operation of the automount daemons, which automatically mount file systems when you use them and unmount them after a period of inactivity. File systems can include network file systems, CD-ROM drives, diskettes, and other media. By default, it contains the following options: MASTER_MAP_NAME= value The default name for the master map. For example: TIMEOUT= value The default mount timeout. For example: NEGATIVE_TIMEOUT= value The default negative timeout for unsuccessful mount attempts. For example: MOUNT_WAIT= value The time to wait for a response from mount . For example: UMOUNT_WAIT= value The time to wait for a response from umount . For example: BROWSE_MODE= boolean A Boolean to enable ( yes ) or disable ( no ) browsing the maps. For example: MOUNT_NFS_DEFAULT_PROTOCOL= value The default protocol to be used by mount.nfs . For example: APPEND_OPTIONS= boolean A Boolean to enable ( yes ) or disable ( no ) appending the global options instead of replacing them. For example: LOGGING= value The default logging level. The value has to be either none , verbose , or debug . For example: LDAP_URI= value A space-separated list of server URIs in the form of protocol :// server . For example: LDAP_TIMEOUT= value The synchronous API calls timeout. For example: LDAP_NETWORK_TIMEOUT= value The network response timeout. For example: SEARCH_BASE= value The base Distinguished Name (DN) for the map search. For example: AUTH_CONF_FILE= value The default location of the SASL authentication configuration file. For example: MAP_HASH_TABLE_SIZE= value The hash table size for the map cache. For example: USE_MISC_DEVICE= boolean A Boolean to enable ( yes ) or disable ( no ) using the autofs miscellaneous device. For example: OPTIONS= value Additional options to be passed to the LDAP daemon. For example: D.1.4. /etc/sysconfig/clock The /etc/sysconfig/clock file controls the interpretation of values read from the system hardware clock. It is used by the Date/Time Properties tool, and should not be edited by hand. By default, it contains the following option: ZONE= value The time zone file under /usr/share/zoneinfo that /etc/localtime is a copy of. For example: See Section 2.1, "Date/Time Properties Tool" for more information on the Date/Time Properties tool and its usage. D.1.5. /etc/sysconfig/dhcpd The /etc/sysconfig/dhcpd file is used to pass arguments to the dhcpd daemon at boot time. By default, it contains the following options: DHCPDARGS= value Additional options to be passed to the dhcpd daemon. For example: See Chapter 16, DHCP Servers for more information on DHCP and its usage. D.1.6. /etc/sysconfig/firstboot The /etc/sysconfig/firstboot file defines whether to run the firstboot utility. By default, it contains the following option: RUN_FIRSTBOOT= boolean A Boolean to enable ( YES ) or disable ( NO ) running the firstboot program. For example: The first time the system boots, the init program calls the /etc/rc.d/init.d/firstboot script, which looks for the /etc/sysconfig/firstboot file. If this file does not contain the RUN_FIRSTBOOT=NO option, the firstboot program is run, guiding a user through the initial configuration of the system. 
Note To start the firstboot program the next time the system boots, change the value of the RUN_FIRSTBOOT option to YES , and type the following at a shell prompt: D.1.7. /etc/sysconfig/i18n The /etc/sysconfig/i18n configuration file defines the default language, any supported languages, and the default system font. By default, it contains the following options: LANG= value The default language. For example: SUPPORTED= value A colon-separated list of supported languages. For example: SYSFONT= value The default system font. For example: D.1.8. /etc/sysconfig/init The /etc/sysconfig/init file controls how the system appears and functions during the boot process. By default, it contains the following options: BOOTUP= value The bootup style. The value has to be either color (the standard color boot display), verbose (an old style display which provides more information), or anything else for the new style display, but without ANSI formatting. For example: RES_COL= value The number of the column in which the status labels start. For example: MOVE_TO_COL= value The terminal sequence to move the cursor to the column specified in RES_COL (see above). For example: SETCOLOR_SUCCESS= value The terminal sequence to set the success color. For example: SETCOLOR_FAILURE= value The terminal sequence to set the failure color. For example: SETCOLOR_WARNING= value The terminal sequence to set the warning color. For example: SETCOLOR_NORMAL= value The terminal sequence to set the default color. For example: LOGLEVEL= value The initial console logging level. The value has to be in the range from 1 (kernel panics only) to 8 (everything, including the debugging information). For example: PROMPT= boolean A Boolean to enable ( yes ) or disable ( no ) the hotkey interactive startup. For example: AUTOSWAP= boolean A Boolean to enable ( yes ) or disable ( no ) probing for devices with swap signatures. For example: ACTIVE_CONSOLES= value The list of active consoles. For example: SINGLE= value The single-user mode type. The value has to be either /sbin/sulogin (a user will be prompted for a password to log in), or /sbin/sushell (the user will be logged in directly). For example: D.1.9. /etc/sysconfig/ip6tables-config The /etc/sysconfig/ip6tables-config file stores information used by the kernel to set up IPv6 packet filtering at boot time or whenever the ip6tables service is started. Note that you should not modify it unless you are familiar with ip6tables rules. By default, it contains the following options: IP6TABLES_MODULES= value A space-separated list of helpers to be loaded after the firewall rules are applied. For example: IP6TABLES_MODULES_UNLOAD= boolean A Boolean to enable ( yes ) or disable ( no ) module unloading when the firewall is stopped or restarted. For example: IP6TABLES_SAVE_ON_STOP= boolean A Boolean to enable ( yes ) or disable ( no ) saving the current firewall rules when the firewall is stopped. For example: IP6TABLES_SAVE_ON_RESTART= boolean A Boolean to enable ( yes ) or disable ( no ) saving the current firewall rules when the firewall is restarted. For example: IP6TABLES_SAVE_COUNTER= boolean A Boolean to enable ( yes ) or disable ( no ) saving the rule and chain counters. For example: IP6TABLES_STATUS_NUMERIC= boolean A Boolean to enable ( yes ) or disable ( no ) printing IP addresses and port numbers in a numeric format in the status output.
For example: IP6TABLES_STATUS_VERBOSE= boolean A Boolean to enable ( yes ) or disable ( no ) printing information about the number of packets and bytes in the status output. For example: IP6TABLES_STATUS_LINENUMBERS= boolean A Boolean to enable ( yes ) or disable ( no ) printing line numbers in the status output. For example: Note You can create the rules manually using the ip6tables command. Once created, type the following at a shell prompt: This will add the rules to /etc/sysconfig/ip6tables . Once this file exists, any firewall rules saved in it persist through a system reboot or a service restart. D.1.10. /etc/sysconfig/kernel The /etc/sysconfig/kernel configuration file controls the kernel selection at boot by using these two options: UPDATEDEFAULT=yes This option makes a newly installed kernel as the default in the boot entry selection. DEFAULTKERNEL=kernel This option specifies what package type will be used as the default. D.1.10.1. Keeping an old kernel version as the default To keep an old kernel version as the default in the boot entry selection: Comment out the UPDATEDEFAULT option in /etc/sysconfig/kernel as follows: D.1.10.2. Setting a kernel debugger as the default kernel To set kernel debugger as the default kernel in boot entry selection: Edit the /etc/sysconfig/kernel configuration file as follows: D.1.11. /etc/sysconfig/keyboard The /etc/sysconfig/keyboard file controls the behavior of the keyboard. By default, it contains the following options: KEYTABLE= value The name of a keytable file. The files that can be used as keytables start in the /lib/kbd/keymaps/i386/ directory, and branch into different keyboard layouts from there, all labeled value .kmap.gz . The first file name that matches the KEYTABLE setting is used. For example: MODEL= value The keyboard model. For example: LAYOUT= value The keyboard layout. For example: KEYBOARDTYPE= value The keyboard type. Allowed values are pc (a PS/2 keyboard), or sun (a Sun keyboard). For example: D.1.12. /etc/sysconfig/ldap The /etc/sysconfig/ldap file holds the basic configuration for the LDAP server. By default, it contains the following options: SLAPD_OPTIONS= value Additional options to be passed to the slapd daemon. For example: SLURPD_OPTIONS= value Additional options to be passed to the slurpd daemon. For example: SLAPD_LDAP= boolean A Boolean to enable ( yes ) or disable ( no ) using the LDAP over TCP (that is, ldap:/// ). For example: SLAPD_LDAPI= boolean A Boolean to enable ( yes ) or disable ( no ) using the LDAP over IPC (that is, ldapi:/// ). For example: SLAPD_LDAPS= boolean A Boolean to enable ( yes ) or disable ( no ) using the LDAP over TLS (that is, ldaps:/// ). For example: SLAPD_URLS= value A space-separated list of URLs. For example: SLAPD_SHUTDOWN_TIMEOUT= value The time to wait for slapd to shut down. For example: SLAPD_ULIMIT_SETTINGS= value The parameters to be passed to ulimit before the slapd daemon is started. For example: See Section 20.1, "OpenLDAP" for more information on LDAP and its configuration. D.1.13. /etc/sysconfig/named The /etc/sysconfig/named file is used to pass arguments to the named daemon at boot time. By default, it contains the following options: ROOTDIR= value The chroot environment under which the named daemon runs. The value has to be a full directory path. For example: Note that the chroot environment has to be configured first (type info chroot at a shell prompt for more information). OPTIONS= value Additional options to be passed to named . 
For example: Note that you should not use the -t option. Instead, use ROOTDIR as described above. KEYTAB_FILE= value The keytab file name. For example: See Section 17.2, "BIND" for more information on the BIND DNS server and its configuration. D.1.14. /etc/sysconfig/network The /etc/sysconfig/network file is used to specify information about the desired network configuration. By default, it contains the following options: NETWORKING= boolean A Boolean to enable ( yes ) or disable ( no ) networking. For example: HOSTNAME= value The host name of the machine. For example: The file may also contain some of the following options: GATEWAY= value The IP address of the network's gateway. For example: This is used as the default gateway when there is no GATEWAY directive in an interface's ifcfg file. NM_BOND_VLAN_ENABLED= boolean A Boolean to allow ( yes ) or disallow ( no ) the NetworkManager application from detecting and managing bonding, bridging, and VLAN interfaces. For example: The NM_CONTROLLED directive is dependent on this option. Note If you want to completely disable IPv6, you should add these lines to /etc/sysctl.conf: In addition, adding ipv6.disable=1 to the kernel command line will disable the kernel module net-pf-10 which implements IPv6. Warning Do not use custom init scripts to configure network settings. When performing a post-boot network service restart, custom init scripts configuring network settings that are run outside of the network init script lead to unpredictable results. D.1.15. /etc/sysconfig/ntpd The /etc/sysconfig/ntpd file is used to pass arguments to the ntpd daemon at boot time. By default, it contains the following option: OPTIONS= value Additional options to be passed to ntpd . For example: See Section 2.1.2, "Network Time Protocol Properties" or Section 2.2.2, "Network Time Protocol Setup" for more information on how to configure the ntpd daemon. D.1.16. /etc/sysconfig/quagga The /etc/sysconfig/quagga file holds the basic configuration for Quagga daemons. By default, it contains the following options: QCONFDIR= value The directory with the configuration files for Quagga daemons. For example: BGPD_OPTS= value Additional options to be passed to the bgpd daemon. For example: OSPF6D_OPTS= value Additional options to be passed to the ospf6d daemon. For example: OSPFD_OPTS= value Additional options to be passed to the ospfd daemon. For example: RIPD_OPTS= value Additional options to be passed to the ripd daemon. For example: RIPNGD_OPTS= value Additional options to be passed to the ripngd daemon. For example: ZEBRA_OPTS= value Additional options to be passed to the zebra daemon. For example: ISISD_OPTS= value Additional options to be passed to the isisd daemon. For example: WATCH_OPTS= value Additional options to be passed to the watchquagga daemon. For example: WATCH_DAEMONS= value A space separated list of monitored daemons. For example: D.1.17. /etc/sysconfig/radvd The /etc/sysconfig/radvd file is used to pass arguments to the radvd daemon at boot time. By default, it contains the following option: OPTIONS= value Additional options to be passed to the radvd daemon. For example: D.1.18. /etc/sysconfig/samba The /etc/sysconfig/samba file is used to pass arguments to the Samba daemons at boot time. By default, it contains the following options: SMBDOPTIONS= value Additional options to be passed to smbd . For example: NMBDOPTIONS= value Additional options to be passed to nmbd . For example: WINBINDOPTIONS= value Additional options to be passed to winbindd . 
For example: See Section 21.1, "Samba" for more information on Samba and its configuration. D.1.19. /etc/sysconfig/saslauthd The /etc/sysconfig/saslauthd file is used to control which arguments are passed to saslauthd , the SASL authentication server. By default, it contains the following options: SOCKETDIR= value The directory for the saslauthd 's listening socket. For example: MECH= value The authentication mechanism to use to verify user passwords. For example: DAEMONOPTS= value Options to be passed to the daemon() function that is used by the /etc/rc.d/init.d/saslauthd init script to start the saslauthd service. For example: FLAGS= value Additional options to be passed to the saslauthd service. For example: D.1.20. /etc/sysconfig/selinux The /etc/sysconfig/selinux file contains the basic configuration options for SELinux. It is a symbolic link to /etc/selinux/config , and by default, it contains the following options: SELINUX= value The security policy. The value can be either enforcing (the security policy is always enforced), permissive (instead of enforcing the policy, appropriate warnings are displayed), or disabled (no policy is used). For example: SELINUXTYPE= value The protection type. The value can be either targeted (the targeted processes are protected), or mls (the Multi Level Security protection). For example: D.1.21. /etc/sysconfig/sendmail The /etc/sysconfig/sendmail is used to set the default values for the Sendmail application. By default, it contains the following values: DAEMON= boolean A Boolean to enable ( yes ) or disable ( no ) running sendmail as a daemon. For example: QUEUE= value The interval at which the messages are to be processed. For example: See Section 19.3.2, "Sendmail" for more information on Sendmail and its configuration. D.1.22. /etc/sysconfig/spamassassin The /etc/sysconfig/spamassassin file is used to pass arguments to the spamd daemon (a daemonized version of Spamassassin ) at boot time. By default, it contains the following option: SPAMDOPTIONS= value Additional options to be passed to the spamd daemon. For example: See Section 19.4.2.6, "Spam Filters" for more information on Spamassassin and its configuration. D.1.23. /etc/sysconfig/squid The /etc/sysconfig/squid file is used to pass arguments to the squid daemon at boot time. By default, it contains the following options: SQUID_OPTS= value Additional options to be passed to the squid daemon. For example: SQUID_SHUTDOWN_TIMEOUT= value The time to wait for squid daemon to shut down. For example: SQUID_CONF= value The default configuration file. For example: D.1.24. /etc/sysconfig/system-config-users The /etc/sysconfig/system-config-users file is the configuration file for the User Manager utility, and should not be edited by hand. By default, it contains the following options: FILTER= boolean A Boolean to enable ( true ) or disable ( false ) filtering of system users. For example: ASSIGN_HIGHEST_UID= boolean A Boolean to enable ( true ) or disable ( false ) assigning the highest available UID to newly added users. For example: ASSIGN_HIGHEST_GID= boolean A Boolean to enable ( true ) or disable ( false ) assigning the highest available GID to newly added groups. For example: PREFER_SAME_UID_GID= boolean A Boolean to enable ( true ) or disable ( false ) using the same UID and GID for newly added users when possible. For example: See Section 3.2, "Managing Users via the User Manager Application" for more information on User Manager and its usage. D.1.25. 
/etc/sysconfig/vncservers The /etc/sysconfig/vncservers file configures the way the Virtual Network Computing ( VNC ) server starts up. By default, it contains the following options: VNCSERVERS= value A list of space separated display : username pairs. For example: VNCSERVERARGS[ display ]= value Additional arguments to be passed to the VNC server running on the specified display . For example: D.1.26. /etc/sysconfig/xinetd The /etc/sysconfig/xinetd file is used to pass arguments to the xinetd daemon at boot time. By default, it contains the following options: EXTRAOPTIONS= value Additional options to be passed to xinetd . For example: XINETD_LANG= value The locale information to be passed to every service started by xinetd . Note that to remove locale information from the xinetd environment, you can use an empty string ( "" ) or none . For example: See Chapter 12, Services and Daemons for more information on how to configure the xinetd services.
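As a brief illustration of how to inspect one of these files, the following commands use /etc/sysconfig/ntpd purely as an example; any other file in the directory can be substituted:

# Find the package that provides the configuration file (example file only)
yum provides /etc/sysconfig/ntpd
# Or query the owner of the installed file directly
rpm -qf /etc/sysconfig/ntpd
# Show the options that are currently set, ignoring comment lines
grep -v '^#' /etc/sysconfig/ntpd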
[ "~]USD yum provides /etc/sysconfig/ filename", "OPTIONS=\"-u arpwatch -e root -s 'root (Arpwatch)'\"", "USEMKHOMEDIR=no", "USEPAMACCESS=no", "USESSSDAUTH=no", "USESHADOW=yes", "USEWINBIND=no", "USEDB=no", "USEFPRINTD=yes", "FORCESMARTCARD=no", "PASSWDALGORITHM=sha512", "USELDAPAUTH=no", "USELOCAUTHORIZE=yes", "USECRACKLIB=yes", "USEWINBINDAUTH=no", "USESMARTCARD=no", "USELDAP=no", "USENIS=no", "USEKERBEROS=no", "USESYSNETAUTH=no", "USESMBAUTH=no", "USESSSD=no", "USEHESIOD=no", "MASTER_MAP_NAME=\"auto.master\"", "TIMEOUT=300", "NEGATIVE_TIMEOUT=60", "MOUNT_WAIT=-1", "UMOUNT_WAIT=12", "BROWSE_MODE=\"no\"", "MOUNT_NFS_DEFAULT_PROTOCOL=4", "APPEND_OPTIONS=\"yes\"", "LOGGING=\"none\"", "LDAP_URI=\"ldaps://ldap.example.com/\"", "LDAP_TIMEOUT=-1", "LDAP_NETWORK_TIMEOUT=8", "SEARCH_BASE=\"\"", "AUTH_CONF_FILE=\"/etc/autofs_ldap_auth.conf\"", "MAP_HASH_TABLE_SIZE=1024", "USE_MISC_DEVICE=\"yes\"", "OPTIONS=\"\"", "ZONE=\"Europe/Prague\"", "DHCPDARGS=", "RUN_FIRSTBOOT=NO", "~]# chkconfig firstboot on", "LANG=\"en_US.UTF-8\"", "SUPPORTED=\"en_US.UTF-8:en_US:en\"", "SYSFONT=\"latarcyrheb-sun16\"", "BOOTUP=color", "RES_COL=60", "MOVE_TO_COL=\"echo -en \\\\033[USD{RES_COL}G\"", "SETCOLOR_SUCCESS=\"echo -en \\\\033[0;32m\"", "SETCOLOR_FAILURE=\"echo -en \\\\033[0;31m\"", "SETCOLOR_WARNING=\"echo -en \\\\033[0;33m\"", "SETCOLOR_NORMAL=\"echo -en \\\\033[0;39m\"", "LOGLEVEL=3", "PROMPT=yes", "AUTOSWAP=no", "ACTIVE_CONSOLES=/dev/tty[1-6]", "SINGLE=/sbin/sushell", "IP6TABLES_MODULES=\"ip_nat_ftp ip_nat_irc\"", "IP6TABLES_MODULES_UNLOAD=\"yes\"", "IP6TABLES_SAVE_ON_STOP=\"no\"", "IP6TABLES_SAVE_ON_RESTART=\"no\"", "IP6TABLES_SAVE_COUNTER=\"no\"", "IP6TABLES_STATUS_NUMERIC=\"yes\"", "IP6TABLES_STATUS_VERBOSE=\"no\"", "IP6TABLES_STATUS_LINENUMBERS=\"yes\"", "~]# service ip6tables save", "UPDATEDEFAULT=yes", "DEFAULTKERNEL=kernel-debug", "KEYTABLE=\"us\"", "MODEL=\"pc105+inet\"", "LAYOUT=\"us\"", "KEYBOARDTYPE=\"pc\"", "SLAPD_OPTIONS=\"-4\"", "SLURPD_OPTIONS=\"\"", "SLAPD_LDAP=\"yes\"", "SLAPD_LDAPI=\"no\"", "SLAPD_LDAPS=\"no\"", "SLAPD_URLS=\"ldapi:///var/lib/ldap_root/ldapi ldapi:/// ldaps:///\"", "SLAPD_SHUTDOWN_TIMEOUT=3", "SLAPD_ULIMIT_SETTINGS=\"\"", "ROOTDIR=\"/var/named/chroot\"", "OPTIONS=\"-6\"", "KEYTAB_FILE=\"/etc/named.keytab\"", "NETWORKING=yes", "HOSTNAME=penguin.example.com", "GATEWAY=192.168.1.1", "NM_BOND_VLAN_ENABLED=yes", "net.ipv6.conf.all.disable_ipv6=1", "net.ipv6.conf.default.disable_ipv6=1", "OPTIONS=\"-u ntp:ntp -p /var/run/ntpd.pid -g\"", "QCONFDIR=\"/etc/quagga\"", "BGPD_OPTS=\"-A 127.0.0.1 -f USD{QCONFDIR}/bgpd.conf\"", "OSPF6D_OPTS=\"-A ::1 -f USD{QCONFDIR}/ospf6d.conf\"", "OSPFD_OPTS=\"-A 127.0.0.1 -f USD{QCONFDIR}/ospfd.conf\"", "RIPD_OPTS=\"-A 127.0.0.1 -f USD{QCONFDIR}/ripd.conf\"", "RIPNGD_OPTS=\"-A ::1 -f USD{QCONFDIR}/ripngd.conf\"", "ZEBRA_OPTS=\"-A 127.0.0.1 -f USD{QCONFDIR}/zebra.conf\"", "ISISD_OPTS=\"-A ::1 -f USD{QCONFDIR}/isisd.conf\"", "WATCH_OPTS=\"-Az -b_ -r/sbin/service_%s_restart -s/sbin/service_%s_start -k/sbin/service_%s_stop\"", "WATCH_DAEMONS=\"zebra bgpd ospfd ospf6d ripd ripngd\"", "OPTIONS=\"-u radvd\"", "SMBDOPTIONS=\"-D\"", "NMBDOPTIONS=\"-D\"", "WINBINDOPTIONS=\"\"", "SOCKETDIR=/var/run/saslauthd", "MECH=pam", "DAEMONOPTS=\"--user saslauth\"", "FLAGS=", "SELINUX=enforcing", "SELINUXTYPE=targeted", "DAEMON=yes", "QUEUE=1h", "SPAMDOPTIONS=\"-d -c -m5 -H\"", "SQUID_OPTS=\"\"", "SQUID_SHUTDOWN_TIMEOUT=100", "SQUID_CONF=\"/etc/squid/squid.conf\"", "FILTER=true", "ASSIGN_HIGHEST_UID=true", "ASSIGN_HIGHEST_GID=true", "PREFER_SAME_UID_GID=true", 
"VNCSERVERS=\"2:myusername\"", "VNCSERVERARGS[2]=\"-geometry 800x600 -nolisten tcp -localhost\"", "EXTRAOPTIONS=\"\"", "XINETD_LANG=\"en_US\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-The_sysconfig_Directory
Chapter 8. Troubleshooting Builds
Chapter 8. Troubleshooting Builds The builder instances started by the build manager are ephemeral. This means that they either get shut down by Red Hat Quay on timeout or failure, or garbage collected by the control plane (EC2/K8s). To obtain the build logs, you must do so while the builds are running. 8.1. DEBUG config flag The DEBUG flag can be set to true in order to prevent the builder instances from getting cleaned up after completion or failure. For example: EXECUTORS: - EXECUTOR: ec2 DEBUG: true ... - EXECUTOR: kubernetes DEBUG: true ... When set to true , the debug feature prevents the build nodes from shutting down after the quay-builder service is done or fails. It also prevents the build manager from cleaning up the instances by terminating EC2 instances or deleting Kubernetes jobs. This allows you to debug builder node issues. Do not enable debugging in a production environment. The lifetime service still exists; for example, the instance still shuts down after approximately two hours. When this happens, EC2 instances are terminated and Kubernetes jobs are completed. Enabling debug also affects ALLOWED_WORKER_COUNT , because the unterminated instances and jobs still count toward the total number of running workers. As a result, if ALLOWED_WORKER_COUNT is reached, you must manually delete the existing builder workers before new builds can be scheduled. 8.2. Troubleshooting OpenShift Container Platform and Kubernetes Builds Use the following procedure to troubleshoot builds on OpenShift Container Platform and Kubernetes. Procedure Create a port-forwarding tunnel between your local machine and a builder pod running on either an OpenShift Container Platform cluster or a Kubernetes cluster by entering the following command: USD oc port-forward <builder_pod> 9999:2222 Establish an SSH connection to the remote host using a specified SSH key and port, for example: USD ssh -i /path/to/ssh/key/set/in/ssh_authorized_keys -p 9999 core@localhost Obtain the quay-builder service logs by entering the following commands: USD systemctl status quay-builder USD journalctl -f -u quay-builder
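Because the builder instances are ephemeral, it can also be helpful to capture the quay-builder journal to a local file while the build is still running. The following is a small sketch under the same assumptions as the procedure above (port 9999 forwarded to the builder pod and SSH access as the core user); the output file name is arbitrary:

# Save the full quay-builder journal from the builder to a local file (file name is an example)
ssh -i /path/to/ssh/key/set/in/ssh_authorized_keys -p 9999 core@localhost 'journalctl -u quay-builder --no-pager' > quay-builder-build.log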
[ "EXECUTORS: - EXECUTOR: ec2 DEBUG: true - EXECUTOR: kubernetes DEBUG: true", "oc port-forward <builder_pod> 9999:2222", "ssh -i /path/to/ssh/key/set/in/ssh_authorized_keys -p 9999 core@localhost", "systemctl status quay-builder", "journalctl -f -u quay-builder" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/builders_and_image_automation/troubleshooting-builds
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/7.1_release_notes/making-open-source-more-inclusive
Chapter 19. Creating a performance profile
Chapter 19. Creating a performance profile Learn about the Performance Profile Creator (PPC) and how you can use it to create a performance profile. 19.1. About the Performance Profile Creator The Performance Profile Creator (PPC) is a command-line tool, delivered with the Node Tuning Operator, used to create the performance profile. The tool consumes must-gather data from the cluster and several user-supplied profile arguments. The PPC generates a performance profile that is appropriate for your hardware and topology. The tool is run by one of the following methods: Invoking podman Calling a wrapper script 19.1.1. Gathering data about your cluster using the must-gather command The Performance Profile Creator (PPC) tool requires must-gather data. As a cluster administrator, run the must-gather command to capture information about your cluster. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator provided automatic, low latency performance tuning for applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. However, you must still use the performance-addon-operator-must-gather image when running the must-gather command. Prerequisites Access to the cluster as a user with the cluster-admin role. Access to the Performance Addon Operator must gather image. The OpenShift CLI ( oc ) installed. Procedure Optional: Verify that a matching machine config pool exists with a label: USD oc describe mcp/worker-rt Example output Name: worker-rt Namespace: Labels: machineconfiguration.openshift.io/role=worker-rt If a matching label does not exist add a label for a machine config pool (MCP) that matches with the MCP name: USD oc label mcp <mcp_name> <mcp_name>="" Navigate to the directory where you want to store the must-gather data. Run must-gather on your cluster: USD oc adm must-gather --image=<PAO_must_gather_image> --dest-dir=<dir> Note The must-gather command must be run with the performance-addon-operator-must-gather image. The output can optionally be compressed. Compressed output is required if you are running the Performance Profile Creator wrapper script. Example USD oc adm must-gather --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.11 --dest-dir=<path_to_must-gather>/must-gather Create a compressed file from the must-gather directory: USD tar cvaf must-gather.tar.gz must-gather/ 19.1.2. Running the Performance Profile Creator using podman As a cluster administrator, you can run podman and the Performance Profile Creator to create a performance profile. Prerequisites Access to the cluster as a user with the cluster-admin role. A cluster installed on bare-metal hardware. A node with podman and OpenShift CLI ( oc ) installed. Access to the Node Tuning Operator image. 
Procedure Check the machine config pool: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h Use Podman to authenticate to registry.redhat.io : USD podman login registry.redhat.io Username: <username> Password: <password> Optional: Display help for the PPC tool: USD podman run --rm --entrypoint performance-profile-creator registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.11 -h Example output A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default "log") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default "must-gather") --offlined-cpu-count int Number of offlined CPUs --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default "default") --profile-name string Name of the performance profile to be created (default "performance") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default "restricted") --user-level-networking Run with User level Networking(DPDK) enabled Run the Performance Profile Creator tool in discovery mode: Note Discovery mode inspects your cluster using the output from must-gather . The output produced includes information on: The NUMA cell partitioning with the allocated CPU ids Whether hyperthreading is enabled Using this information you can set appropriate values for some of the arguments supplied to the Performance Profile Creator tool. USD podman run --entrypoint performance-profile-creator -v <path_to_must-gather>/must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.11 --info log --must-gather-dir-path /must-gather Note This command uses the performance profile creator as a new entry point to podman . It maps the must-gather data for the host into the container image and invokes the required user-supplied profile arguments to produce the my-performance-profile.yaml file. The -v option can be the path to either: The must-gather output directory An existing directory containing the must-gather decompressed tarball The info option requires a value which specifies the output format. Possible values are log and JSON. The JSON format is reserved for debugging. 
Run podman : USD podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.11 --mcp-name=worker-cnf --reserved-cpu-count=4 --rt-kernel=true --split-reserved-cpus-across-numa=false --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency --offlined-cpu-count=6 > my-performance-profile.yaml Note The Performance Profile Creator arguments are shown in the Performance Profile Creator arguments table. The following arguments are required: reserved-cpu-count mcp-name rt-kernel The mcp-name argument in this example is set to worker-cnf based on the output of the command oc get mcp . For single-node OpenShift use --mcp-name=master . Review the created YAML file: USD cat my-performance-profile.yaml Example output apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 2-39,48-79 offlined: 42-47 reserved: 0-1,40-41 machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-cnf nodeSelector: node-role.kubernetes.io/worker-cnf: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true workloadHints: highPowerConsumption: true realTime: true Apply the generated profile: USD oc apply -f my-performance-profile.yaml 19.1.2.1. How to run podman to create a performance profile The following example illustrates how to run podman to create a performance profile with 20 reserved CPUs that are to be split across the NUMA nodes. Node hardware configuration: 80 CPUs Hyperthreading enabled Two NUMA nodes Even numbered CPUs run on NUMA node 0 and odd numbered CPUs run on NUMA node 1 Run podman to create the performance profile: USD podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.11 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=true --must-gather-dir-path /must-gather > my-performance-profile.yaml The created profile is described in the following YAML: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 10-39,50-79 reserved: 0-9,40-49 nodeSelector: node-role.kubernetes.io/worker-cnf: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true Note In this case, 10 CPUs are reserved on NUMA node 0 and 10 are reserved on NUMA node 1. 19.1.3. Running the Performance Profile Creator wrapper script The performance profile wrapper script simplifies the running of the Performance Profile Creator (PPC) tool. It hides the complexities associated with running podman and specifying the mapping directories and it enables the creation of the performance profile. Prerequisites Access to the Node Tuning Operator image. Access to the must-gather tarball. 
Procedure Create a file on your local machine named, for example, run-perf-profile-creator.sh : USD vi run-perf-profile-creator.sh Paste the following code into the file: #!/bin/bash readonly CONTAINER_RUNTIME=USD{CONTAINER_RUNTIME:-podman} readonly CURRENT_SCRIPT=USD(basename "USD0") readonly CMD="USD{CONTAINER_RUNTIME} run --entrypoint performance-profile-creator" readonly IMG_EXISTS_CMD="USD{CONTAINER_RUNTIME} image exists" readonly IMG_PULL_CMD="USD{CONTAINER_RUNTIME} image pull" readonly MUST_GATHER_VOL="/must-gather" NTO_IMG="registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.11" MG_TARBALL="" DATA_DIR="" usage() { print "Wrapper usage:" print " USD{CURRENT_SCRIPT} [-h] [-p image][-t path] -- [performance-profile-creator flags]" print "" print "Options:" print " -h help for USD{CURRENT_SCRIPT}" print " -p Node Tuning Operator image" print " -t path to a must-gather tarball" USD{IMG_EXISTS_CMD} "USD{NTO_IMG}" && USD{CMD} "USD{NTO_IMG}" -h } function cleanup { [ -d "USD{DATA_DIR}" ] && rm -rf "USD{DATA_DIR}" } trap cleanup EXIT exit_error() { print "error: USD*" usage exit 1 } print() { echo "USD*" >&2 } check_requirements() { USD{IMG_EXISTS_CMD} "USD{NTO_IMG}" || USD{IMG_PULL_CMD} "USD{NTO_IMG}" || \ exit_error "Node Tuning Operator image not found" [ -n "USD{MG_TARBALL}" ] || exit_error "Must-gather tarball file path is mandatory" [ -f "USD{MG_TARBALL}" ] || exit_error "Must-gather tarball file not found" DATA_DIR=USD(mktemp -d -t "USD{CURRENT_SCRIPT}XXXX") || exit_error "Cannot create the data directory" tar -zxf "USD{MG_TARBALL}" --directory "USD{DATA_DIR}" || exit_error "Cannot decompress the must-gather tarball" chmod a+rx "USD{DATA_DIR}" return 0 } main() { while getopts ':hp:t:' OPT; do case "USD{OPT}" in h) usage exit 0 ;; p) NTO_IMG="USD{OPTARG}" ;; t) MG_TARBALL="USD{OPTARG}" ;; ?) exit_error "invalid argument: USD{OPTARG}" ;; esac done shift USD((OPTIND - 1)) check_requirements || exit 1 USD{CMD} -v "USD{DATA_DIR}:USD{MUST_GATHER_VOL}:z" "USD{NTO_IMG}" "USD@" --must-gather-dir-path "USD{MUST_GATHER_VOL}" echo "" 1>&2 } main "USD@" Add execute permissions for everyone on this script: USD chmod a+x run-perf-profile-creator.sh Optional: Display the run-perf-profile-creator.sh command usage: USD ./run-perf-profile-creator.sh -h Expected output Wrapper usage: run-perf-profile-creator.sh [-h] [-p image][-t path] -- [performance-profile-creator flags] Options: -h help for run-perf-profile-creator.sh -p Node Tuning Operator image 1 -t path to a must-gather tarball 2 A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default "log") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default "must-gather") --offlined-cpu-count int Number of offlined CPUs --power-consumption-mode string The power consumption mode. 
[Valid values: default, low-latency, ultra-low-latency] (default "default") --profile-name string Name of the performance profile to be created (default "performance") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default "restricted") --user-level-networking Run with User level Networking(DPDK) enabled Note There are two types of arguments: wrapper arguments, namely -h , -p , and -t , and PPC arguments 1 Optional: Specify the Node Tuning Operator image. If not set, the default upstream image is used: registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.11 . 2 -t is a required wrapper script argument and specifies the path to a must-gather tarball. Run the performance profile creator tool in discovery mode: Note Discovery mode inspects your cluster using the output from must-gather . The output produced includes information on: The NUMA cell partitioning with the allocated CPU IDs Whether hyperthreading is enabled Using this information you can set appropriate values for some of the arguments supplied to the Performance Profile Creator tool. USD ./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --info=log Note The info option requires a value which specifies the output format. Possible values are log and JSON. The JSON format is reserved for debugging. Check the machine config pool: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h Create a performance profile: USD ./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=2 --rt-kernel=true > my-performance-profile.yaml Note The Performance Profile Creator arguments are shown in the Performance Profile Creator arguments table. The following arguments are required: reserved-cpu-count mcp-name rt-kernel The mcp-name argument in this example is set to worker-cnf based on the output of the command oc get mcp . For single-node OpenShift, use --mcp-name=master . Review the created YAML file: USD cat my-performance-profile.yaml Example output apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 1-39,41-79 reserved: 0,40 nodeSelector: node-role.kubernetes.io/worker-cnf: "" numa: topologyPolicy: restricted realTimeKernel: enabled: false Apply the generated profile: Note Install the Node Tuning Operator before applying the profile. USD oc apply -f my-performance-profile.yaml 19.1.4. Performance Profile Creator arguments Table 19.1. Performance Profile Creator arguments Argument Description disable-ht Disable hyperthreading. Possible values: true or false . Default: false . Warning If this argument is set to true , you should not disable hyperthreading in the BIOS. Disabling hyperthreading is accomplished with a kernel command line argument. info This captures cluster information and is used in discovery mode only. Discovery mode also requires the must-gather-dir-path argument. If any other arguments are set they are ignored.
Possible values: log JSON Note These options define the output format with the JSON format being reserved for debugging. Default: log . mcp-name MCP name for example worker-cnf corresponding to the target machines. This parameter is required. must-gather-dir-path Must gather directory path. This parameter is required. When the user runs the tool with the wrapper script must-gather is supplied by the script itself and the user must not specify it. offlined-cpu-count Number of offlined CPUs. Note This must be a natural number greater than 0. If not enough logical processors are offlined then error messages are logged. The messages are: Error: failed to compute the reserved and isolated CPUs: please ensure that reserved-cpu-count plus offlined-cpu-count should be in the range [0,1] Error: failed to compute the reserved and isolated CPUs: please specify the offlined CPU count in the range [0,1] power-consumption-mode The power consumption mode. Possible values: default : CPU partitioning with enabled power management and basic low-latency. low-latency : Enhanced measures to improve latency figures. ultra-low-latency : Priority given to optimal latency, at the expense of power management. Default: default . profile-name Name of the performance profile to create. Default: performance . reserved-cpu-count Number of reserved CPUs. This parameter is required. Note This must be a natural number. A value of 0 is not allowed. rt-kernel Enable real-time kernel. This parameter is required. Possible values: true or false . split-reserved-cpus-across-numa Split the reserved CPUs across NUMA nodes. Possible values: true or false . Default: false . topology-manager-policy Kubelet Topology Manager policy of the performance profile to be created. Possible values: single-numa-node best-effort restricted Default: restricted . user-level-networking Run with user level networking (DPDK) enabled. Possible values: true or false . Default: false . 19.2. Reference performance profiles 19.2.1. A performance profile template for clusters that use OVS-DPDK on OpenStack To maximize machine performance in a cluster that uses Open vSwitch with the Data Plane Development Kit (OVS-DPDK) on Red Hat OpenStack Platform (RHOSP), you can use a performance profile. You can use the following performance profile template to create a profile for your deployment. A performance profile template for clusters that use OVS-DPDK apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: cnf-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - default_hugepagesz=1GB - hugepagesz=1G - intel_iommu=on cpu: isolated: <CPU_ISOLATED> reserved: <CPU_RESERVED> hugepages: defaultHugepagesSize: 1G pages: - count: <HUGEPAGES_COUNT> node: 0 size: 1G nodeSelector: node-role.kubernetes.io/worker: '' realTimeKernel: enabled: false globallyDisableIrqLoadBalancing: true Insert values that are appropriate for your configuration for the CPU_ISOLATED , CPU_RESERVED , and HUGEPAGES_COUNT keys. To learn how to create and use performance profiles, see the "Creating a performance profile" page in the "Scalability and performance" section of the OpenShift Container Platform documentation. 19.3. Additional resources For more information about the must-gather tool, see Gathering data about your cluster .
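For illustration of the template in section 19.2.1, the following shows the same profile with the placeholders filled in for a hypothetical worker with 40 CPUs, where CPUs 0-3 are reserved, CPUs 4-39 are isolated, and 16 huge pages of 1 GB are allocated on NUMA node 0. These values are assumptions for the example only and must be replaced with numbers that match your own hardware:

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: cnf-performanceprofile
spec:
  additionalKernelArgs:
    - nmi_watchdog=0
    - audit=0
    - mce=off
    - processor.max_cstate=1
    - idle=poll
    - intel_idle.max_cstate=0
    - default_hugepagesz=1GB
    - hugepagesz=1G
    - intel_iommu=on
  cpu:
    isolated: 4-39    # CPU_ISOLATED, assumed value
    reserved: 0-3     # CPU_RESERVED, assumed value
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - count: 16     # HUGEPAGES_COUNT, assumed value
        node: 0
        size: 1G
  nodeSelector:
    node-role.kubernetes.io/worker: ''
  realTimeKernel:
    enabled: false
  globallyDisableIrqLoadBalancing: true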
[ "oc describe mcp/worker-rt", "Name: worker-rt Namespace: Labels: machineconfiguration.openshift.io/role=worker-rt", "oc label mcp <mcp_name> <mcp_name>=\"\"", "oc adm must-gather --image=<PAO_must_gather_image> --dest-dir=<dir>", "oc adm must-gather --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.11 --dest-dir=<path_to_must-gather>/must-gather", "tar cvaf must-gather.tar.gz must-gather/", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h", "podman login registry.redhat.io", "Username: <username> Password: <password>", "podman run --rm --entrypoint performance-profile-creator registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.11 -h", "A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --offlined-cpu-count int Number of offlined CPUs --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. 
[Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled", "podman run --entrypoint performance-profile-creator -v <path_to_must-gather>/must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.11 --info log --must-gather-dir-path /must-gather", "podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.11 --mcp-name=worker-cnf --reserved-cpu-count=4 --rt-kernel=true --split-reserved-cpus-across-numa=false --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency --offlined-cpu-count=6 > my-performance-profile.yaml", "cat my-performance-profile.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 2-39,48-79 offlined: 42-47 reserved: 0-1,40-41 machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-cnf nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true workloadHints: highPowerConsumption: true realTime: true", "oc apply -f my-performance-profile.yaml", "podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.11 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=true --must-gather-dir-path /must-gather > my-performance-profile.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 10-39,50-79 reserved: 0-9,40-49 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true", "vi run-perf-profile-creator.sh", "#!/bin/bash readonly CONTAINER_RUNTIME=USD{CONTAINER_RUNTIME:-podman} readonly CURRENT_SCRIPT=USD(basename \"USD0\") readonly CMD=\"USD{CONTAINER_RUNTIME} run --entrypoint performance-profile-creator\" readonly IMG_EXISTS_CMD=\"USD{CONTAINER_RUNTIME} image exists\" readonly IMG_PULL_CMD=\"USD{CONTAINER_RUNTIME} image pull\" readonly MUST_GATHER_VOL=\"/must-gather\" NTO_IMG=\"registry.redhat.io/openshift4/ose-cluster-node-tuning-operator:v4.11\" MG_TARBALL=\"\" DATA_DIR=\"\" usage() { print \"Wrapper usage:\" print \" USD{CURRENT_SCRIPT} [-h] [-p image][-t path] -- [performance-profile-creator flags]\" print \"\" print \"Options:\" print \" -h help for USD{CURRENT_SCRIPT}\" print \" -p Node Tuning Operator image\" print \" -t path to a must-gather tarball\" USD{IMG_EXISTS_CMD} \"USD{NTO_IMG}\" && USD{CMD} \"USD{NTO_IMG}\" -h } function cleanup { [ -d \"USD{DATA_DIR}\" ] && rm -rf \"USD{DATA_DIR}\" } trap cleanup EXIT exit_error() { print \"error: USD*\" usage exit 1 } print() { echo \"USD*\" >&2 } check_requirements() { USD{IMG_EXISTS_CMD} \"USD{NTO_IMG}\" || USD{IMG_PULL_CMD} \"USD{NTO_IMG}\" || exit_error \"Node Tuning Operator image not found\" [ -n \"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file path is mandatory\" [ -f \"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file not found\" DATA_DIR=USD(mktemp -d -t \"USD{CURRENT_SCRIPT}XXXX\") || exit_error \"Cannot create the data directory\" tar -zxf \"USD{MG_TARBALL}\" --directory \"USD{DATA_DIR}\" || exit_error \"Cannot decompress the must-gather tarball\" chmod a+rx \"USD{DATA_DIR}\" return 0 } main() { while getopts ':hp:t:' OPT; do case \"USD{OPT}\" in h) usage 
exit 0 ;; p) NTO_IMG=\"USD{OPTARG}\" ;; t) MG_TARBALL=\"USD{OPTARG}\" ;; ?) exit_error \"invalid argument: USD{OPTARG}\" ;; esac done shift USD((OPTIND - 1)) check_requirements || exit 1 USD{CMD} -v \"USD{DATA_DIR}:USD{MUST_GATHER_VOL}:z\" \"USD{NTO_IMG}\" \"USD@\" --must-gather-dir-path \"USD{MUST_GATHER_VOL}\" echo \"\" 1>&2 } main \"USD@\"", "chmod a+x run-perf-profile-creator.sh", "./run-perf-profile-creator.sh -h", "Wrapper usage: run-perf-profile-creator.sh [-h] [-p image][-t path] -- [performance-profile-creator flags] Options: -h help for run-perf-profile-creator.sh -p Node Tuning Operator image 1 -t path to a must-gather tarball 2 A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --offlined-cpu-count int Number of offlined CPUs --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled", "./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --info=log", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h", "./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=2 --rt-kernel=true > my-performance-profile.yaml", "cat my-performance-profile.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 1-39,41-79 reserved: 0,40 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: false", "oc apply -f my-performance-profile.yaml", "Error: failed to compute the reserved and isolated CPUs: please ensure that reserved-cpu-count plus offlined-cpu-count should be in the range [0,1]", "Error: failed to compute the reserved and isolated CPUs: please specify the offlined CPU count in the range [0,1]", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: cnf-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - default_hugepagesz=1GB - hugepagesz=1G - intel_iommu=on cpu: isolated: <CPU_ISOLATED> reserved: <CPU_RESERVED> hugepages: defaultHugepagesSize: 1G pages: - count: <HUGEPAGES_COUNT> node: 0 size: 1G nodeSelector: node-role.kubernetes.io/worker: '' realTimeKernel: enabled: false 
globallyDisableIrqLoadBalancing: true" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/scalability_and_performance/cnf-create-performance-profiles
Chapter 5. Customizing developer environments
Chapter 5. Customizing developer environments Red Hat CodeReady Workspaces is an extensible and customizable developer-workspaces platform. You can extend Red Hat CodeReady Workspaces in three different ways: Alternative IDEs provide specialized tools for Red Hat CodeReady Workspaces. For example, a Jupyter notebook for data analysis. Alternate IDEs can be based on Eclipse Theia or any other IDE (web or desktop based). The default IDE in Red Hat CodeReady Workspaces is Che-Theia. Che-Theia plug-ins add capabilities to the Che-Theia IDE. They rely on plug-in APIs that are compatible with Visual Studio Code. The plug-ins are isolated from the IDE itself. They can be packaged as files or as containers to provide their own dependencies. Stacks are pre-configured CodeReady Workspaces workspaces with a dedicated set of tools, which cover different developer personas. For example, it is possible to pre-configure a workbench for a tester with only the tools needed for their purposes. Figure 5.1. CodeReady Workspaces extensibility A user can extend CodeReady Workspaces by using self-hosted mode, which CodeReady Workspaces provides by default. Section 5.1, "What is a Che-Theia plug-in" Section 5.6, "Using alternative IDEs in CodeReady Workspaces" Section 5.2, "Adding a Visual Studio Code extension to a workspace" Section 5.10, "Using private container registries" 5.1. What is a Che-Theia plug-in A Che-Theia plug-in is an extension of the development environment isolated from the IDE. Plug-ins can be packaged as files or containers to provide their own dependencies. Extending Che-Theia using plug-ins can enable the following capabilities: Language support: Extend the supported languages by relying on the Language Server Protocol . Debuggers: Extend debugging capabilities with the Debug Adapter Protocol . Development Tools: Integrate your favorite linters, and as testing and performance tools. Menus, panels, and commands: Add your own items to the IDE components. Themes: Build custom themes, extend the UI, or customize icon themes. Snippets, code formatting, and syntax highlighting: Enhance comfort of use with supported programming languages. Keybindings: Add new keyboard mapping and popular keybindings to make the environment feel natural. 5.1.1. Features and benefits of Che-Theia plug-ins Features Description Benefits Fast Loading Plug-ins are loaded at runtime and are already compiled. IDE is loading the plug-in code. Avoid any compilation time. Avoid post-installation steps. Secure Loading Plug-ins are loaded separately from the IDE. The IDE stays always in a usable state. Plug-ins do not break the whole IDE if it has bugs. Handle network issue. Tools Dependencies Dependencies for the plug-in are packaged with the plug-in in its own container. No-installation for tools. Dependencies running into container. Code Isolation Guarantee that plug-ins cannot block the main functions of the IDE like opening a file or typing Plug-ins are running into separate threads. Avoid dependencies mismatch. Visual Studio Code Extension Compatibility Extend the capabilities of the IDE with existing Visual Studio Code Extensions. Target multiple platform. Allow easy discovery of Visual Studio Code Extension with required installation. 5.1.2. Che-Theia plug-in concept in detail Red Hat CodeReady Workspaces provides a default web IDE for workspaces: Che-Theia. It is based on Eclipse Theia. 
It is a slightly different version than the plain Eclipse Theia one because there are functionalities that have been added based on the nature of the Red Hat CodeReady Workspaces workspaces. This version of Eclipse Theia for CodeReady Workspaces is called Che-Theia . You can extend the IDE provided with Red Hat CodeReady Workspaces by building a Che-Theia plug-in . Che-Theia plug-ins are compatible with any other Eclipse Theia-based IDE. 5.1.2.1. Client-side and server-side Che-Theia plug-ins The Che-Theia editor plug-ins let you add languages, debuggers, and tools to your installation to support your development workflow. Plug-ins run when the editor completes loading. If a Che-Theia plug-in fails, the main Che-Theia editor continues to work. Che-Theia plug-ins run either on the client side or on the server side. This is a scheme of the client and server-side plug-in concept: Figure 5.2. Client and server-side Che-Theia plug-ins The same Che-Theia plug-in API is exposed to plug-ins running on the client side (Web Worker) or the server side (Node.js). 5.1.2.2. Che-Theia plug-in APIs To provide tool isolation and easy extensibility in Red Hat CodeReady Workspaces, the Che-Theia IDE has a set of plug-in APIs. The APIs are compatible with Visual Studio Code extension APIs. Usually, Che-Theia can run Visual Studio Code extensions as its own plug-ins. When developing a plug-in that depends on or interacts with components of CodeReady Workspaces workspaces (containers, preferences, factories), use the CodeReady Workspaces APIs embedded in Che-Theia. 5.1.2.3. Che-Theia plug-in capabilities Che-Theia plug-ins have the following capabilities: Plug-in Description Repository CodeReady Workspaces Extended Tasks Handles the CodeReady Workspaces commands and provides the ability to start those into a specific container of the workspace. Task plug-in CodeReady Workspaces Extended Terminal Allows to provide terminal for any of the containers of the workspace. Extended Terminal extension CodeReady Workspaces Factory Handles the Red Hat CodeReady Workspaces Factories Workspace plug-in CodeReady Workspaces Container Provides a container view that shows all the containers that are running in the workspace and allows to interact with them. Containers plug-in Dashboard Integrates the IDE with the Dashboard and facilitate the navigation. Che-Theia Dashbord extension CodeReady Workspaces APIs Extends the IDE APIs to allow interacting with CodeReady Workspaces-specific components (workspaces, preferences). Che-Theia API extension 5.1.2.4. Visual Studio Code extensions and Eclipse Theia plug-ins A Che-Theia plug-in can be based on a Visual Studio Code extension or an Eclipse Theia plug-in. A Visual Studio Code extension To repackage a Visual Studio Code extension as a Che-Theia plug-in with its own set of dependencies, package the dependencies into a container. This ensures that Red Hat CodeReady Workspaces users do not need to install the dependencies when using the extension. See Section 5.2, "Adding a Visual Studio Code extension to a workspace" . An Eclipse Theia plug-in You can build a Che-Theia plug-in by implementing an Eclipse Theia plug-in and packaging it to Red Hat CodeReady Workspaces. Additional resources Section 5.1.5, "Embedded and remote Che-Theia plug-ins" 5.1.3. Che-Theia plug-in metadata Che-Theia plug-in metadata is information about individual plug-ins for the plug-in registry. The Che-Theia plug-in metadata, for each specific plug-in, is defined in a meta.yaml file. 
These files can be referenced in a devfile to include Che-Theia plug-ins in a workspace. Here is an overview of all fields that can be available in plug-in meta YAML files. This document represents the plugin meta YAML structure (version 3) . Table 5.1. meta.yml apiVersion Version 2 and higher where version is 1 supported for backwards compatibility category Available: Category must be set to one of the followings: Editor , Debugger , Formatter , Language , Linter , Snippet , Theme , Other description Short description of the plug-in purpose displayName Name shown in user dashboard deprecate Optional; section for deprecating plug-ins in favor of others * autoMigrate - boolean * migrateTo - new org/plugin-id/version , for example redhat/vscode-apache-camel/latest firstPublicationDate Not required to be in YAML; if it is not included, the plug-in registry dockerimage build generates it latestUpdateDate Not required to be in YAML; if it is not included, the plug-in registry dockerimage build generates it icon URL of an SVG or PNG icon name Name (no spaces allowed), must match [-a-z0-9] publisher Name of the publisher, must match [-a-z0-9] repository URL for plug-in repository, for example, GitHub title Plug-in title (long) type Che Plugin , Visual Studio Code extension version Version information, for example: 7.5.1, [-.a-z0-9] spec Specifications (see below) Table 5.2. spec attributes endpoints Optional; plug-in endpoint containers Optional; sidecar containers for the plug-in. Che Plug-in and Visual Studio Code extension supports only one container initContainers Optional; sidecar init containers for the plug-in workspaceEnv Optional; environment variables for the workspace extensions Optional; Attribute that is required for Visual Studio Code and Che-Theia plug-ins in a form list of URLs to plug-in artefacts, such as .vsix or .theia files Table 5.3. spec.containers. Notice: spec.initContainers has absolutely the same container definition. name Sidecar container name image Absolute or relative container image URL memoryLimit OpenShift memory limit string, for example 512Mi memoryRequest OpenShift memory request string, for example 512Mi cpuLimit OpenShift CPU limit string, for example 2500m cpuRequest OpenShift CPU request string, for example 125m env List of environment variables to set in the sidecar command String array definition of the root process command in the container args String array arguments for the root process command in the container volumes Volumes required by the plug-in ports Ports exposed by the plug-in (on the container) commands Development commands available to the plug-in container mountSources Boolean flag to bound volume with source code /projects to the plug-in container initContainers Optional; init containers for sidecar plug-in Lifecycle Container lifecycle hooks. See lifecycle description Table 5.4. spec.containers.env and spec.initContainers.env attributes. Notice: workspaceEnv has absolutely the same attributes name Environment variable name value Environment variable value Table 5.5. spec.containers.volumes and spec.initContainers.volumes attributes mountPath Path to the volume in the container name Volume name ephemeral If true, the volume is ephemeral, otherwise the volume is persisted Table 5.6. spec.containers.ports and spec.initContainers.ports attributes exposedPort Exposed port Table 5.7. 
spec.containers.commands and spec.initContainers.commands attributes name Command name workingDir Command working directory command String array that defines the development command Table 5.8. spec.endpoints attributes name Name (no spaces allowed), must match [-a-z0-9] public true , false targetPort Target port attributes Endpoint attributes Table 5.9. spec.endpoints.attributes attributes protocol Protocol, example: ws type ide , ide-dev discoverable true , false secure true , false . If true , then the endpoint is assumed to listen solely on 127.0.0.1 and is exposed using a JWT proxy cookiesAuthEnabled true , false requireSubdomain true , false . If true , the endpoint is exposed on subdomain in single-host mode. Table 5.10. spec.containers.lifecycle and spec.initContainers.lifecycle attributes postStart The postStart event that runs immediately after a Container is started. See postStart and preStop handlers * exec : Executes a specific command, resources consumed by the command are counted against the Container * command : ["/bin/sh", "-c", "/bin/post-start.sh"] preStop The preStop event that runs before a Container is terminated. See postStart and preStop handlers * exec : Executes a specific command, resources consumed by the command are counted against the Container * command : ["/bin/sh", "-c", "/bin/post-start.sh"] Example meta.yaml for a Che-Theia plug-in: the CodeReady Workspaces machine-exec Service apiVersion: v2 publisher: eclipse name: che-machine-exec-plugin version: 7.9.2 type: Che Plugin displayName: CodeReady Workspaces machine-exec Service title: Che machine-exec Service Plugin description: CodeReady Workspaces Plug-in with che-machine-exec service to provide creation terminal or tasks for Eclipse CHE workspace containers. icon: https://www.eclipse.org/che/images/logo-eclipseche.svg repository: https://github.com/eclipse-che/che-machine-exec/ firstPublicationDate: "2020-03-18" category: Other spec: endpoints: - name: "che-machine-exec" public: true targetPort: 4444 attributes: protocol: ws type: terminal discoverable: false secure: true cookiesAuthEnabled: true containers: - name: che-machine-exec image: "quay.io/eclipse/che-machine-exec:7.9.2" ports: - exposedPort: 4444 command: ['/go/bin/che-machine-exec', '--static', '/cloud-shell', '--url', '127.0.0.1:4444'] Example meta.yaml for a Visual Studio Code extension: the AsciiDoc support extension apiVersion: v2 category: Language description: This extension provides a live preview, syntax highlighting and snippets for the AsciiDoc format using Asciidoctor flavor displayName: AsciiDoc support firstPublicationDate: "2019-12-02" icon: https://www.eclipse.org/che/images/logo-eclipseche.svg name: vscode-asciidoctor publisher: joaompinto repository: https://github.com/asciidoctor/asciidoctor-vscode title: AsciiDoctor Plug-in type: Visual Studio Code extension version: 2.7.7 spec: extensions: - https://github.com/asciidoctor/asciidoctor-vscode/releases/download/v2.7.7/asciidoctor-vscode-2.7.7.vsix 5.1.4. Che-Theia plug-in lifecycle Every time a user starts a Che workspace, a Che-Theia plug-in life cycle process starts. The steps of this process are as follows: CodeReady Workspaces server checks for plug-ins to start from the workspace definition. CodeReady Workspaces server retrieves plug-in metadata, recognizes each plug-in type, and stores them in memory. CodeReady Workspaces server selects a broker according to the plug-in type. The broker processes the installation and deployment of the plug-in. 
The installation process of the plug-in differs for each specific broker. Note Plug-ins exist in various types. A broker ensures the success of a plug-in deployment by meeting all installation requirements. Figure 5.3. Che-Theia plug-in lifecycle Before a CodeReady Workspaces workspace is launched, CodeReady Workspaces server starts the workspace containers: The Che-Theia plug-in broker extracts the information about sidecar containers that a particular plug-in needs from the .theia file. The broker sends the appropriate container information to CodeReady Workspaces server. The broker copies the Che-Theia plug-in to a volume to have it available for the Che-Theia editor container. CodeReady Workspaces server then starts all the containers of the workspace. Che-Theia starts in its container and checks the correct folder to load the plug-ins. A user experience with Che-Theia plug-in lifecycle When a user opens a browser tab with Che-Theia, Che-Theia starts a new plug-in session with: Web Worker for frontend Node.js for backend Che-Theia notifies all Che-Theia plug-ins of the start of the new session by calling the start() function for each triggered plug-in. A Che-Theia plug-in session runs and interacts with the Che-Theia backend and frontend. When the user closes the Che-Theia browser tab, or the session ends because of a timeout, Che-Theia notifies all plug-ins by calling the stop() function for each triggered plug-in. 5.1.5. Embedded and remote Che-Theia plug-ins Developer workspaces in Red Hat CodeReady Workspaces provide all dependencies needed to work on a project. The application includes the dependencies needed by all the tools and plug-ins used. Based on the required dependencies, a Che-Theia plug-in can run as: Embedded, also known as local Remote 5.1.5.1. Embedded (local) plug-ins Embedded plug-ins are plug-ins without specific dependencies that are injected into the Che-Theia IDE. These plug-ins use the Node.js runtime, which runs in the IDE container. Examples: Code linting New set of commands New UI components To include a Che-Theia plug-in or Visual Studio Code extension, define a URL to the plug-in .theia archive binary in the meta.yaml file. See Section 5.2, "Adding a Visual Studio Code extension to a workspace" . When starting a workspace, CodeReady Workspaces downloads and unpacks the plug-in binaries and includes them in the Che-Theia editor container. The Che-Theia editor initializes the plug-ins when it starts. 5.1.5.2. Remote plug-ins A remote plug-in relies on dependencies or has a back end. It runs in its own sidecar container, and all dependencies are packaged in the container. A remote Che-Theia plug-in consists of two parts: Che-Theia plug-in or Visual Studio Code extension binaries. The definition in the meta.yaml file is the same as for embedded plug-ins. Container image definition, for example, eclipse/che-theia-dev:nightly . From this image, CodeReady Workspaces creates a separate container inside a workspace. Examples: Java Language Server Python Language Server When starting a workspace, CodeReady Workspaces creates a container from the plug-in image, downloads and unpacks the plug-in binaries, and includes them in the created container. The Che-Theia editor connects to the remote plug-ins when it starts. 5.1.5.3. Comparison matrix Embedded plug-ins are those Che-Theia plug-ins or Visual Studio Code extensions that do not require extra dependencies inside their container. Remote plug-ins are containers that contain a plug-in with all required dependencies. Table 5.11. 
Che-Theia plug-in comparison matrix: embedded compared to remote Configure RAM per plug-in Environment dependencies Create separated container Remote TRUE Plug-in uses dependencies defined in the remote container. TRUE Embedded FALSE (users can configure RAM for the whole editor container, but not per plug-in) Plug-in uses dependencies from the editor container; if container does not include these dependencies, the plug-in fails or does not function as expected. FALSE Depending on your use case and the capabilities provided by your plug-in, select one of the described running modes. 5.1.6. Remote plug-in endpoint Red Hat CodeReady Workspaces has a remote plug-in endpoint service to start Visual Studio Code Extensions and Che-Theia plug-ins in separate containers. Red Hat CodeReady Workspaces injects the remote plug-in endpoint binaries into each remote plug-in container. This service starts remote extensions and plug-ins defined in the plug-in meta.yaml file and connects them to the Che-Theia editor container. The remote plug-in endpoint creates a plug-in API proxy between the remote plug-in container and the Che-Theia editor container. The remote plug-in endpoint is also an interceptor for some plug-in API parts, which it launches inside a remote sidecar container rather than an editor container. Examples: terminal API, debug API. The remote plug-in endpoint executable command is stored in the environment variable of the remote plug-in container: PLUGIN_REMOTE_ENDPOINT_EXECUTABLE . Red Hat CodeReady Workspaces provides two ways to start the remote plug-in endpoint with a sidecar image: Defining a launch remote plug-in endpoint using a Dockerfile. To use this method, patch an original image and rebuild it. Defining a launch remote plug-in endpoint in the plug-in meta.yaml file. Use this method to avoid patching an original image. 5.1.6.1. Defining a launch remote plug-in endpoint using Dockerfile To start a remote plug-in endpoint, set the PLUGIN_REMOTE_ENDPOINT_EXECUTABLE environment variable in the Dockerfile. Procedure Start a remote plug-in endpoint using the CMD command in the Dockerfile: Dockerfile example Start a remote plug-in endpoint using the ENTRYPOINT command in the Dockerfile: Dockerfile example 5.1.6.1.1. Using a wrapper script Some images use a wrapper script to configure permissions inside the container. The Dockertfile ENTRYPOINT command defines this script, which executes the main process defined in the CMD command of the Dockerfile. CodeReady Workspaces uses images with a wrapper script to provide permission configurations to different infrastructures protected by advanced security. OpenShift Container Platform is an example of such an infrastructure. Example of a wrapper script: #!/bin/sh set -e export USER_ID=USD(id -u) export GROUP_ID=USD(id -g) if ! whoami >/dev/null 2>&1; then echo "USD{USER_NAME:-user}:x:USD{USER_ID}:0:USD{USER_NAME:-user} user:USD{HOME}:/bin/sh" >> /etc/passwd fi # Grant access to projects volume in case of non root user with sudo rights if [ "USD{USER_ID}" -ne 0 ] && command -v sudo >/dev/null 2>&1 && sudo -n true > /dev/null 2>&1; then sudo chown "USD{USER_ID}:USD{GROUP_ID}" /projects fi exec "USD@" Example of a Dockerfile with a wrapper script: Dockerfile example Explanation: The container launches the /entrypoint.sh script defined in the ENTRYPOINT command of the Dockerfile. The script configures the permissions and executes the command using exec USD@ . 
CMD is the argument for ENTRYPOINT , and the exec USD@ command calls USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE} . The remote plug-in endpoint then starts in the container after permission configuration. 5.1.6.2. Defining a launch remote plug-in endpoint in a meta.yaml file Use this method to re-use images for starting a remote plug-in endpoint without any modifications. Procedure Modify the plug-in meta.yaml file properties command and args : command - CodeReady Workspaces uses the command properties to override the Dockerfile#ENTRYPOINT value. args - CodeReady Workspaces uses uses the args properties to override the Dockerfile#CMD value. Example of a YAML file with the command and args properties modified: apiVersion: v2 category: Language description: "Typescript language features" displayName: Typescript firstPublicationDate: "2019-10-28" icon: "https://www.eclipse.org/che/images/logo-eclipseche.svg" name: typescript publisher: che-incubator repository: "https://github.com/Microsoft/vscode" title: "Typescript language features" type: "Visual Studio Code extension" version: remote-bin-with-override-entrypoint spec: containers: - image: "example/fedora-for-ts-remote-plugin-without-endpoint:latest" memoryLimit: 512Mi name: vscode-typescript command: - sh - -c args: - USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE} extensions: - "https://github.com/che-incubator/ms-code.typescript/releases/download/v1.35.1/che-typescript-language-1.35.1.vsix" Modify args instead of command to use an image with a wrapper script pattern and to keep a call of the entrypoint.sh script: apiVersion: v2 category: Language description: "Typescript language features" displayName: Typescript firstPublicationDate: "2019-10-28" icon: "https://www.eclipse.org/che/images/logo-eclipseche.svg" name: typescript publisher: che-incubator repository: "https://github.com/Microsoft/vscode" title: "Typescript language features" type: "Visual Studio Code extension" version: remote-bin-with-override-entrypoint spec: containers: - image: "example/fedora-for-ts-remote-plugin-without-endpoint:latest" memoryLimit: 512Mi name: vscode-typescript args: - sh - -c - USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE} extensions: - "https://github.com/che-incubator/ms-code.typescript/releases/download/v1.35.1/che-typescript-language-1.35.1.vsix" Red Hat CodeReady Workspaces calls the entrypoint.sh wrapper script defined in the ENTRYPOINT command of the Dockerfile. The script executes [ 'sh', '-c", ' USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE}' ] using the exec "USD@" command. Note By modifying the command and args properties of the meta.yaml file, a user can: Execute a service at a container start Start a remote plug-in endpoint To make these actions run at the same time: Start the service. Detach the process. Start the remote plug-in endpoint. 5.2. Adding a Visual Studio Code extension to a workspace This section describes how to add a Visual Studio Code extension to a workspace using the workspace configuration. Prerequisites The Visual Studio Code extension is available in the CodeReady Workspaces plug-in registry, or metadata for the Visual Studio Code extension are available. See Section 5.4, "Publishing metadata for a Visual Studio Code extension" . 5.2.1. Adding a Visual Studio Code extension using the workspace configuration Prerequisites A running instance of CodeReady Workspaces. To install an instance of CodeReady Workspaces, see Installing CodeReady Workspaces . An existing workspace defined on this instance of CodeReady Workspaces. 
The Visual Studio Code extension is available in the CodeReady Workspaces plug-in registry, or metadata for the Visual Studio Code extension are available. See Section 5.4, "Publishing metadata for a Visual Studio Code extension" . Procedure To add a Visual Studio Code extension using the workspace configuration: Click the Workspaces tab on the Dashboard and select the plug-in destination workspace. The Workspace <workspace-name> window is opened showing the details of the workspace. Click the devfile tab. Locate the components section, and add a new entry with the following structure: - type: chePlugin id: 1 1 ID format: <publisher>/<plug-inName>/<plug-inVersion> CodeReady Workspaces automatically adds the other fields to the new component. Alternatively, you can link to a meta.yaml file hosted on GitHub, using the dedicated reference field. - type: chePlugin reference: 1 1 https://raw.githubusercontent.com/ <username> / <registryRepository> /v3/plugins/ <publisher> / <plug-inName> / <plug-inVersion> /meta.yaml Restart the workspace for the changes to take effect. 5.2.2. Adding a Visual Studio Code extension using recommendations Prerequisites A running instance of CodeReady Workspaces. To install an instance of CodeReady Workspaces, see Installing CodeReady Workspaces . Featured Visual Studio Code extensions are available in the CodeReady Workspaces plug-in registry. Procedure Open a workspace without any existing devfile using the CodeReady Workspaces dashboard : The recommendations plug-in will scan files, discover languages and install Visual Studio Code extensions matching these languages. Disable this feature by setting extensions.ignoreRecommendations to true in the devfile attributes. The recommendations plug-in can suggest Visual Studio Code extensions to install when opening files. It suggests extensions based on the workspace content, allowing the user to work with the given files. Enable this feature by setting extensions.openFileRecommendations to true in the devfile attributes. 5.3. Adding a Visual Studio Code extension to the CodeReady Workspaces plug-ins registry To use a Visual Studio Code extension in a CodeReady Workspaces workspace, CodeReady Workspaces need to consume metadata describing the extension. The CodeReady Workspaces plug-ins registry is a static website publishing metadata for common Visual Studio Code extensions. Visual Studio Code extension metadata for the CodeReady Workspaces plug-ins registry is generated from a central file named che-theia-plugins.yaml . To add or modify an extension in the CodeReady Workspaces plug-ins registry, edit the che-theia-plugins.yaml file and add relevant metadata. Note This article describes the steps needed to build the plug-ins registry with a custom plug-in definition. If you are looking to create a custom meta.yaml file that can be directly referenced in a devfile, see Section 5.4, "Publishing metadata for a Visual Studio Code extension" . Prerequisite A working knowledge of customizing the registries, see https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/administration_guide/index#customizing-the-registries.adoc A link to a sidecar container image, should the Visual Studio Code extension require one. Procedure Edit the che-theia-plugins.yaml file and create a new entry. 
- id: publisher/my-vscode-ext 1 repository: 2 url: https://github.com/publisher/my-vscode-ext 3 revision: 1.7.2 4 aliases: 5 - publisher/my-vscode-ext-revised preferences: 6 asciidoc.use_asciidoctorpdf: true shellcheck.executablePath: /bin/shellcheck solargraph.bundlerPath: /usr/local/bin/bundle solargraph.commandPath: /usr/local/bundle/bin/solargraph sidecar: 7 image: quay.io/repository/eclipse/che-plugin-sidecar:sonarlint-2fcf341 8 name: my-vscode-ext-sidecar 9 memoryLimit: "1500Mi" 10 memoryRequest: "1000Mi" 11 cpuLimit: "500m" 12 cpuRequest: "125m" 13 command: 14 - /bin/sh args: 15 - "-c" - "./entrypoint.sh" volumeMounts: 16 - name: vscode-ext-volume 17 path: "/home/theia/my-vscode-ext" 18 endpoints: 19 - name: "configuration-endpoint" 20 public: true 21 targetPort: 61436 22 attributes: 23 protocol: http extension: https://github.com/redhat-developer/vscode-yaml/releases/download/0.4.0/redhat.vscode-yaml-0.4.0.vsix 24 skipDependencies: 25 - id-of/extension1 - id-of/extension2 extraDependencies: 26 - id-of/extension1 - id-of/extension2 metaYaml: skipIndex: <true|false> 27 skipDependencies: 28 - id-of/extension1 - id-of/extension2 extraDependencies: 29 - id-of/extension1 - id-of/extension2 1 (OPTIONAL) The ID of the plug-in, useful if a plug-in has multiple entries for one repository. For example, Java 8 and Java 11. 2 Repository information about the plug-in. If ID is specified, then this field is not a list element. 3 The URL to the Git repository of the extension. 4 Tag or SHA1 ID of the upstream repository that hosts the extension, corresponding to a version, snapshot, or release. 5 (OPTIONAL) An alias for this plug-in. For anything listed here, a meta.yaml file is generated. 6 (OPTIONAL) Plug-in preferences in freeform format. 7 (OPTIONAL) If the plug-in runs in a sidecar container, then the sidecar information is specified here. 8 A location of a container image to be used as the plug-in sidecar. This line cannot be specified concurrently with directory . See above. 9 (OPTIONAL) The name of the sidecar container. 10 (OPTIONAL) The memory limit of the sidecar container. 11 (OPTIONAL) The memory request of the sidecar container. 12 (OPTIONAL) The CPU limit of the sidecar container. 13 (OPTIONAL) The CPU request of the sidecar container. 14 (OPTIONAL) Definitions of root process commands inside the container. 15 (OPTIONAL) Arguments for root process commands inside the container. 16 (OPTIONAL) Any volume mounting information for the sidecar container. 17 The name of the mount. 18 The path to the mount. 19 (OPTIONAL) Any endpoint information for the sidecar container. 20 Endpoint name. 21 A Boolean value determining whether the endpoint is exposed publicly. 22 The port number. 23 Attributes relating to the endpoint. 24 Direct link or links to the vsix files included with the plug-in. The vsix built by the repository specified, such as the main extension, must be listed first. 25 # TODO # 26 (OPTIONAL) Extra dependencies in addition to the one listed in extensionDependencies field of package.json. 27 (OPTIONAL) Do not include this plug-in in index.json if true. Useful in case of dependencies that you do not want to expose as standalone plug-ins. 28 (OPTIONAL) Do not examine specified dependencies from extensionDependencies field of package.json (only for meta.yaml generation). 29 (OPTIONAL) Extra dependencies in addition to the one listed in extensionDependencies field of package.json (only for meta.yaml generation). Run the build.sh script with the options of your choosing. 
The build process will generate meta.yaml files automatically, based on the entries in the che-theia-plugins.yaml file. Use the resulting plug-ins registry image in CodeReady Workspaces, or copy the meta.yaml file out of the registry container and reference it directly as an HTTP resource. 5.4. Publishing metadata for a Visual Studio Code extension To use a Visual Studio Code extension in a CodeReady Workspaces workspace, CodeReady Workspaces needs to consume metadata describing the extension. The CodeReady Workspaces plug-ins registry is a static website publishing metadata for common Visual Studio Code extensions. This article describes how to publish metadata for an additional extension, not available in the CodeReady Workspaces plug-ins registry, by using the extension configuration meta.yaml file. For details on adding a plug-in to an existing plug-in registry, see Section 5.3, "Adding a Visual Studio Code extension to the CodeReady Workspaces plug-ins registry" Prerequisite If the Visual Studio Code extension requires it, the required associated container image is available. Procedure Create a meta.yaml file. Edit the meta.yaml file and provide the necessary information. The file must have the following structure: apiVersion: v2 1 publisher: myorg 2 name: my-vscode-ext 3 version: 1.7.2 4 type: value 5 displayName: 6 title: 7 description: 8 icon: https://www.eclipse.org/che/images/logo-eclipseche.svg 9 repository: 10 category: 11 spec: containers: 12 - image: 13 memoryLimit: 14 memoryRequest: 15 cpuLimit: 16 cpuRequest: 17 extensions: 18 - https://github.com/redhat-developer/vscode-yaml/releases/download/0.4.0/redhat.vscode-yaml-0.4.0.vsix - https://github.com/SonarSource/sonarlint-vscode/releases/download/1.16.0/sonarlint-vscode-1.16.0.vsix 1 Version of the file structure. 2 Name of the plug-in publisher. Must be the same as the publisher in the path. 3 Name of the plug-in. Must be the same as in path. 4 Version of the plug-in. Must be the same as in path. 5 Type of the plug-in. Possible values: Che Plugin , Che Editor , Theia plugin , Visual Studio Code extension . 6 A short name of the plug-in. 7 Title of the plug-in. 8 A brief explanation of the plug-in and what it does. 9 The link to the plug-in logo. 10 Optional. The link to the source-code repository of the plug-in. 11 Defines the category that this plug-in belongs to. Should be one of the following: Editor , Debugger , Formatter , Language , Linter , Snippet , Theme , or Other . 12 If this section is omitted, the Visual Studio Code extension is added into the Che-Theia IDE container. 13 The Docker image from which the sidecar container will be started. Example: codeready-workspaces/theia-endpoint-rhel8 . 14 The maximum RAM which is available for the sidecar container. Example: "512Mi". This value might be overridden by the user in the component configuration. 15 The RAM which is given for the sidecar container by default. Example: "256Mi". This value might be overridden by the user in the component configuration. 16 The maximum CPU amount in cores or millicores (suffixed with "m") which is available for the sidecar container. Examples: "500m", "2". This value might be overridden by the user in the component configuration. 17 The CPU amount in cores or millicores (suffixed with "m") which is given for the sidecar container by default. Example: "125m". This value might be overridden by the user in the component configuration. 18 A list of Visual Studio Code extensions run in this sidecar container. 
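As an illustration, the template above can be filled in for an extension that needs no sidecar container and therefore runs in the Che-Theia editor container (the spec.containers section is omitted). This is only a sketch: the publisher, name, version, description, and download URL reuse the vscode-yaml values already shown in the template and are placeholders to adapt to your own extension.
apiVersion: v2
publisher: redhat
name: vscode-yaml
version: 0.4.0
type: Visual Studio Code extension
displayName: YAML
title: YAML Support
description: YAML language support with schema validation
icon: https://www.eclipse.org/che/images/logo-eclipseche.svg
repository: https://github.com/redhat-developer/vscode-yaml
category: Language
spec:
  extensions:
    - https://github.com/redhat-developer/vscode-yaml/releases/download/0.4.0/redhat.vscode-yaml-0.4.0.vsix
Because spec.containers is omitted, CodeReady Workspaces adds the extension to the Che-Theia IDE container, as described in the field notes above.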
Publish the meta.yaml file as an HTTP resource by creating a gist on GitHub or GitLab with a file content published there. 5.5. Testing a Visual Studio Code extension in CodeReady Workspaces Visual Studio Code (Visual Studio Code) extensions work in a workspace. Visual Studio Code extensions can run in the Che-Theia editor container, or in their own isolated and preconfigured containers with their prerequisites. This section describes how to test a Visual Studio Code extension in CodeReady Workspaces with workspaces and how to review the compatibility of Visual Studio Code extensions to check whether a specific API is available. Note The extension-hosting sidecar container and the use of the extension in a devfile are optional. 5.5.1. Testing a Visual Studio Code extension using GitHub gist Each workspace can have its own set of plug-ins. The list of plug-ins and the list of projects to clone are defined in the devfile.yaml file. For example, to enable an AsciiDoc plug-in from the Red Hat CodeReady Workspaces dashboard, add the following snippet to the devfile: components: - id: joaopinto/vscode-asciidoctor/latest type: chePlugin To add a plug-in that is not in the default plug-in registry, build a custom plug-in registry. See https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/administration_guide/index#customizing-the-registries.adoc , or, alternatively, use GitHub and the gist service. Prerequisites A running instance of CodeReady Workspaces. To install an instance of CodeReady Workspaces, see Installing CodeReady Workspaces . A GitHub account. Procedure Go to the gist webpage and create a README.md file with the following description: Try Bracket Pair Colorizer extension in Red Hat CodeReady Workspaces and content: Example Visual Studio Code extension . ( Bracket Pair Colorizer is a popular Visual Studio Code extension.) Click the Create secret gist button. Clone the gist repository by using the URL from the navigation bar of the browser: Example of the output of the git clone command 1 Each gist has a unique ID. Change the directory: 1 Directory name matching the gist ID. Download the plug-in from the Visual Studio Code marketplace or from its GitHub page , and store the plug-in file in the cloned directory. Create a plugin.yaml file in the cloned directory to add the definition of this plug-in. Example of the plugin.yaml file referencing the .vsix binary file extension apiVersion: v2 publisher: CoenraadS name: bracket-pair-colorizer version: 1.0.61 type: Visual Studio Code extension displayName: Bracket Pair Colorizer title: Bracket Pair Colorizer description: Bracket Pair Colorizer icon: https://raw.githubusercontent.com/redhat-developer/codeready-workspaces/crw-2-rhel-8/dependencies/che-plugin-registry/resources/images/default.svg?sanitize=true repository: https://github.com/CoenraadS/BracketPair category: Language firstPublicationDate: '2020-07-30' spec: 1 extensions: - "{{REPOSITORY}}/CoenraadS.bracket-pair-colorizer-1.0.61.vsix" 2 latestUpdateDate: "2020-07-30" 1 This extension requires a basic Node.js runtime, so it is not necessary to add a custom runtime image in plugin.yaml . 2 {{REPOSITORY}} is a macro for a pre-commit hook. 
Define a memory limit and volumes: spec: containers: - image: "quay.io/eclipse/che-sidecar-java:8-0cfbacb" name: vscode-java memoryLimit: "1500Mi" volumes: - mountPath: "/home/theia/.m2" name: m2 Create a devfile.yaml that references the plugin.yaml file: apiVersion: 1.0.0 metadata: generateName: java-maven- projects: - name: console-java-simple source: type: git location: "https://github.com/che-samples/console-java-simple.git" branch: java1.11 components: - type: chePlugin id: redhat/java11/latest - type: chePlugin 1 reference: "{{REPOSITORY}}/plugin.yaml" - type: dockerimage alias: maven image: quay.io/eclipse/che-java11-maven:nightly memoryLimit: 512Mi mountSources: true volumes: - name: m2 containerPath: /home/user/.m2 commands: - name: maven build actions: - type: exec component: maven command: "mvn clean install" workdir: USD{CHE_PROJECTS_ROOT}/console-java-simple - name: maven build and run actions: - type: exec component: maven command: "mvn clean install && java -jar ./target/*.jar" workdir: USD{CHE_PROJECTS_ROOT}/console-java-simple 1 Any other devfile definition is also accepted. The important information in this devfile are the lines defining this external component. It means that an external reference defines the plug-in and not an ID, which pointing to a definition in the default plug-in registry. Verify there are 4 files in the current Git directory: Before committing the files, add a pre-commit hook to update the {{REPOSITORY}} variable to the public external raw gist link: Create a .git/hooks/pre-commit file with this content: #!/bin/sh # get modified files FILES=USD(git diff --cached --name-only --diff-filter=ACMR "*.yaml" | sed 's| |\\ |g') # exit fast if no files found [ -z "USDFILES" ] && exit 0 # grab remote origin origin=USD(git config --get remote.origin.url) url="USD{origin}/raw" # iterate on files and add the good prefix pattern for FILE in USD{FILES}; do sed -e "s#{{REPOSITORY}}#USD{url}#g" "USD{FILE}" > "USD{FILE}.back" mv "USD{FILE}.back" "USD{FILE}" done # Add back to staging echo "USDFILES" | xargs git add exit 0 The hook replaces the {{REPOSITORY}} macro and adds the external raw link to the gist. Make the script executable: Commit and push the files: Visit the gist website and verify that all links have the correct public URL and do not contain any {{REPOSITORY}} variables. To reach the devfile: or: 5.5.2. Verifying the Visual Studio Code extension API compatibility level Che-Theia does not fully support the Visual Studio Code extensions API. The vscode-theia-comparator is used to analyze the compatibility between the Che-Theia plug-in API and the Visual Studio Code extension API. This tool runs nightly, and the results are published on the vscode-theia-comparator GitHub page. Prerequisites Personal GitHub access token. See Creating a personal access token for the command line . A GitHub access token is required to increase the GitHub download limit for your IP address. Procedure To run the vscode-theia comparator manually: Clone the vscode-theia-comparator repository, and build it using the yarn command. Set the GITHUB_TOKEN environment variable to your token. Execute the yarn run generate command to generate a report. Open the out/status.html file to view the report. 5.6. Using alternative IDEs in CodeReady Workspaces Red Hat CodeReady Workspaces provides a default web IDE to use in the developer workspaces. To use another editor, see: Section 5.7, "Configuring a workspace to use an IDE based on the IntelliJ Platform" Section 5.8, "Theia-based IDEs" 5.7. 
Configuring a workspace to use an IDE based on the IntelliJ Platform This section describes how to configure a workspace to use an IDE based on the IntelliJ Platform . No initial repository checkout when running the CodeReady Workspaces Server workspaces engine When the CodeReady Workspaces instance is running the CodeReady Workspaces Server workspaces engine, the workspace starts without an initial checkout of the code repositories referenced in the devfile. Workarounds In the IDE, click Get from VCS to checkout a repository. To enable the automatic initial checkout of the code repositories in the devfile, use the Dev Workspace operator. 5.7.1. Configuring a workspace to use IntelliJ IDEA Community This section describes how to configure a workspace devfile to use IntelliJ IDEA Community. Procedure Add the following component to the workspace devfile: components: - type: cheEditor id: registry.redhat.io/codeready-workspaces/idea-rhel8:2.15 Remove the plugins or commands defined for the Theia IDE from the workspace devfile. Restart the workspace. 5.7.2. Configuring a workspace to use PyCharm Community This section describes how to configure a workspace devfile to use PyCharm Community. Procedure Add the following component to the workspace devfile: components: - type: cheEditor reference: https://raw.githubusercontent.com/che-incubator/jetbrains-editor-images/meta/che-pycharm/latest.meta.yaml Remove the plugins or commands defined for the Theia IDE from the workspace devfile. Restart the workspace. 5.7.3. Configuring a workspace to use a custom image with an IDE based on the IntelliJ Platform This section describes how to configure a workspace to use an IDE based on the IntelliJ Platform. Prerequisites CodeReady Workspaces has access to metadata and image with the desired IDE based on the IntelliJ Platform. See Section 5.7.4, "Building images for IDEs based on the IntelliJ Platform" . Procedure Add the following component to the workspace devfile: components: - type: cheEditor reference: " <URL_to_meta.yaml> " 1 1 <URL_to_meta.yaml> : HTTPS resource defining the IDE metadata, see Section 5.7.4, "Building images for IDEs based on the IntelliJ Platform" . Remove the plugins or commands defined for the Theia IDE from the workspace devfile. Restart the workspace. 5.7.4. Building images for IDEs based on the IntelliJ Platform This section describes how to build images for IDEs based on the IntelliJ Platform version 2020.3 . 5.7.4.1. Building an image for IntelliJ IDEA Community or PyCharm Community This procedure describes how to build an image for IntelliJ IDEA Community or PyCharm Community. Prerequisites The build host has at least 2 GB of available RAM. The following tools are installed on the build host: Docker version 18.09 or greater, supporting BuildKit Git GNU getopt GNU wget Java Development Kit (JDK) version 11 jq Procedure Get a local copy of the JetBrains Projector Editor Images repository . Run the build script and select the IDE package and package version: To test the image, run it locally and go to http://localhost:8887 to access the IDE. Publish the image to a registry accessible by CodeReady Workspaces, and remember the location: <registry>/<image>:<tag> . 
Create a meta.yaml file with the following content: apiVersion: v2 publisher: <publisher> 1 name: intellij-ide version: latest type: Che Editor displayName: IntelliJ Platform IDE title: IDE based on the IntelliJ Platform description: IDE based on the IntelliJ Platform running using Projector icon: https://www.jetbrains.com/apple-touch-icon.png category: Editor repository: https://github.com/che-incubator/jetbrains-editor-images firstPublicationDate: "2021-04-10" spec: endpoints: - name: intellij public: true targetPort: 8887 attributes: protocol: http type: ide path: /projector-client/index.html?backgroundColor=434343&wss containers: - name: intellij-ide image: <registry>/<image>:<tag> 2 mountSources: true volumes: - mountPath: "/home/projector-user" name: projector-user ports: - exposedPort: 8887 memoryLimit: "4096M" 1 <publisher> : Your publisher name. 2 <registry>/<image>:<tag> : Location of the IDE image in a registry accessible by CodeReady Workspaces. Publish the meta.yaml file to an HTTPS resource accessible by CodeReady Workspaces and copy the resulting URL for use as <URL_to_meta.yaml> when configuring a workspace to use this IDE. steps Section 5.7, "Configuring a workspace to use an IDE based on the IntelliJ Platform" 5.7.4.2. Building an image for an IDE based on the IntelliJ Platform This procedure describes how to build an image for an IDE based on the IntelliJ Platform version 2020.3 . For JetBrains IDEs, the IDE version number corresponds to the version of the IntelliJ Platform. See the list of compatible IDEs . Prerequisites The build host has at least 2 GB of available RAM. The following tools are installed on the build host: Docker version 18.09 or greater, supporting BuildKit Git GNU getopt GNU wget Java Development Kit (JDK) version 11 jq Procedure Get a local copy of the JetBrains Projector Editor Images repository . Run the build script with parameters: --tag <tag> The name and tag to apply to the image after build in name:tag format. --url <url> The URL pointing to an archive of the IDE based on the IntelliJ Platform version 2020.3 . The archive must target the Linux platform, be in tar.gz format, and include JetBrains Runtime (JBR). Example 5.1. Building the image with IntelliJ IDEA Community 2020.3.3 Example 5.2. Building the image with PyCharm Community 2020.3.5 Example 5.3. Building the image with WebStorm 2020.3.3 Example 5.4. Building the image with IntelliJ IDEA Ultimate 2020.2.2 Example 5.5. Building the image with Android Studio 4.2.0.22 To test the image, run it locally and go to http://localhost:8887 to access the IDE. Example 5.6. Testing the image with IntelliJ IDEA Community 2020.3.3 Example 5.7. Testing the image with PyCharm 2020.3.5 Example 5.8. Testing the image with WebStorm 2020.3.3 Example 5.9. Testing the image with IntelliJ IDEA Ultimate 2020.2.2 Example 5.10. Testing the image with Android Studio 4.2.0.22 Publish the image to a registry accessible by CodeReady Workspaces, and remember the location: <registry>/<image>:<tag> . 
Create a meta.yaml file containing the IDE metadata for CodeReady Workspaces: apiVersion: v2 publisher: <publisher> 1 name: intellij-ide version: latest type: Che Editor displayName: IntelliJ Platform IDE title: IDE based on the IntelliJ Platform description: IDE based on the IntelliJ Platform running using Projector icon: https://www.jetbrains.com/apple-touch-icon.png category: Editor repository: https://github.com/che-incubator/jetbrains-editor-images firstPublicationDate: "2021-04-10" spec: endpoints: - name: intellij public: true targetPort: 8887 attributes: protocol: http type: ide path: /projector-client/index.html?backgroundColor=434343&wss containers: - name: intellij-ide image: <registry>/<image>:<tag> 2 mountSources: true volumes: - mountPath: "/home/projector-user" name: projector-user ports: - exposedPort: 8887 memoryLimit: "4096M" 1 <publisher> : Your publisher name. 2 <registry>/<image>:<tag> : Location of the IDE image in a registry accessible by CodeReady Workspaces. Publish the meta.yaml file to an HTTPS resource accessible by CodeReady Workspaces and copy the resulting URL for use as <URL_to_meta.yaml> when configuring a workspace to use this IDE. steps Section 5.7, "Configuring a workspace to use an IDE based on the IntelliJ Platform" 5.7.5. Provisioning the JetBrains offline activation code Some editions of JetBrains IDEs require a paid subscription beyond the evaluation period, which means buying a license from JetBrains. To register a license, you need to provision to CodeReady Workspaces the JetBrains activation code for offline usage. When you renew your subscription, you will need to generate and provision a new offline activation code. Prerequisites An active JetBrains subscription associated to an active JetBrains account . The OpenSSL and oc tools are installed. An image containing the IDE. See Section 5.7.4, "Building images for IDEs based on the IntelliJ Platform" . A workspace running with the IDE. See Section 5.7, "Configuring a workspace to use an IDE based on the IntelliJ Platform" . Procedure Log in to your JetBrains account , choose the desired subscription, and click on the Download activation code for offline usage link. Extract from the downloaded zip archive the file named <License ID> - for 2018.1 or later.txt . Convert the activation code to a base64 encoded single line for use in the step as <base64_encoded_activation_code> . Create a secret.yaml file defining the OpenShift Secret to provision the activation code to CodeReady Workspaces. apiVersion: v1 kind: Secret metadata: name: jetbrains-offline-activation-code labels: app.kubernetes.io/component: workspace-secret app.kubernetes.io/part-of: che.eclipse.org annotations: che.eclipse.org/automount-workspace-secret: 'false' 1 che.eclipse.org/mount-path: /tmp/ che.eclipse.org/mount-as: file data: idea.key: <base64_encoded_activation_code> pycharm.key: <base64_encoded_activation_code> webstorm.key: <base64_encoded_activation_code> phpstorm.key: <base64_encoded_activation_code> goland.key: <base64_encoded_activation_code> 1 che.eclipse.org/automount-workspace-secret: 'false' : disables the mounting process until a workspace component explicitly requests it with the automountWorkspaceSecrets: true property. Apply the OpenShift Secret to the OpenShift project running the workspace. To mount the activation codes into a workspace, update the workspace devfile configuration to set automountWorkspaceSecrets: true . 
components: - type: cheEditor automountWorkspaceSecrets: true reference: " <URL_to_meta.yaml> " Restart the workspace. 5.8. Theia-based IDEs This section describes how to provide a custom IDE, based on Eclipse Theia framework. To use a Theia-based IDE in Red Hat CodeReady Workspaces as an editor, you need to prepare two main components: a Docker image containing your IDE the Che editor descriptor file - meta.yaml Procedure Describe the IDE with an editor descriptor - meta.yaml file: version: 1.0.0 editors: - id: eclipse/che-theia/ title: Eclipse Theia development version. displayName: theia-ide description: Eclipse Theia, get the latest release each day. icon: https://raw.githubusercontent.com/theia-ide/theia/master/logo/theia-logo-no-text-black.svg?sanitize=true repository: https://github.com/eclipse-che/che-theia firstPublicationDate: "2021-01-01" endpoints: - name: "theia" public: true targetPort: 3100 attributes: protocol: http type: ide secure: true cookiesAuthEnabled: true discoverable: false containers: - name: theia-ide image: "<your-ide-image>" mountSources: true ports: - exposedPort: 3100 memoryLimit: "512M" targetPort and exposedPort must be the same as the Theia-based IDE running inside the container. Replace <your-ide-image> with the name of the IDE image. The meta.yaml file should be publicly accessible through an HTTP(S) link. Add your editor to a Devfile: apiVersion: 1.0.0 metadata: name: che-theia-based-ide components: - type: cheEditor reference: '<meta.yaml URL>' <meta.yaml URL> should point to the publicly hosted meta.yaml file described in the step. 5.9. Adding tools to CodeReady Workspaces after creating a workspace When installed in a workspace, CodeReady Workspaces plug-ins bring new capabilities to CodeReady Workspaces. Plug-ins consist of a Che-Theia plug-in, metadata, and a hosting container. These plug-ins may provide the following capabilities: Integrating with other systems, including OpenShift. Automating some developer tasks, such as formatting, refactoring, and running automated tests. Communicating with multiple databases directly from the IDE. Enhanced code navigation, auto-completion, and error highlighting. This chapter provides basic information about installing, enabling, and using CodeReady Workspaces plug-ins in workspaces. Section 5.9.1, "Additional tools in the CodeReady Workspaces workspace" Section 5.9.2, "Adding a language support plug-in to a CodeReady Workspaces workspace" 5.9.1. Additional tools in the CodeReady Workspaces workspace CodeReady Workspaces plug-ins are extensions to the Che-Theia IDE that come bundled with container images. These images contain the native prerequisites of their respective extensions. For example, the OpenShift command-line tool is bundled with a command to install it, which ensures the proper functionality of the OpenShift Connector plug-in, all available in the dedicated image. Plug-ins can also include metadata to define a description, categorization tags, and an icon. CodeReady Workspaces provides a registry of plug-ins available for installation into the user's workspace. The Che-Theia IDE is generally compatible with the Visual Studio Code extensions API and Visual Studio Code extensions are automatically compatible with Che-Theia. These extensions are possible to package as CodeReady Workspaces plug-ins by combining them with their dependencies. By default, CodeReady Workspaces includes a plug-in registry containing common plug-ins. 
Adding a plug-in Using the Dashboard: Add a plug-in directly into a devfile using the Devfile tab. The devfile can also extend the plug-in configuration, for example by defining memory or CPU consumption. Using the Che-Theia IDE: By pressing Ctrl + Shift + J or by navigating to View Plugins . Additional resources Adding components to a devfile 5.9.2. Adding a language support plug-in to a CodeReady Workspaces workspace This procedure describes adding a tool to a created workspace by enabling a dedicated plug-in from the Dashboard. Edit the workspace devfile from the Dashboard Devfile tab. Prerequisites A running instance of CodeReady Workspaces. See Installing CodeReady Workspaces . A created workspace that is defined in this instance of Red Hat CodeReady Workspaces. See Creating a workspace from a code sample . The workspace must be in a stopped state. To stop a workspace: Navigate to the CodeReady Workspaces Dashboard, as explained in Section 1.1, "Navigating CodeReady Workspaces using the Dashboard" . In the Dashboard , click the Workspaces menu to open the workspaces list and locate the workspace. On the same row with the displayed workspace, on the right side of the screen, click Stop to stop the workspace. Wait a few seconds for the workspace to stop, and then configure the workspace by selecting it. Procedure To add a plug-in from the plug-in registry to a created CodeReady Workspaces workspace, install the plug-in by adding content to the devfile as follows: Navigate to the Devfile tab, where the devfile YAML is displayed. In the components section of the devfile, add the id and type of the plug-in. Example: Adding the Java 8 language plug-in - id: redhat/java8/latest type: chePlugin Example: The end result components: - id: redhat/php/latest memoryLimit: 1Gi type: chePlugin - id: redhat/php-debugger/latest memoryLimit: 256Mi type: chePlugin - mountSources: true endpoints: - name: 8080/tcp port: 8080 memoryLimit: 512Mi type: dockerimage volumes: - name: composer containerPath: {prod-home}/.composer - name: symfony containerPath: {prod-home}/.symfony alias: php image: 'quay.io/eclipse/che-php-7:nightly' - id: redhat/java8/latest type: chePlugin Click Save to save the changes. Restart the workspace. Verify that the workspace includes the new plug-in. Additional resources Devfile specifications 5.10. Using private container registries This section describes the necessary steps to use container images from private container registries. Prerequisites A running instance of CodeReady Workspaces. See https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#installing-che.adoc . Procedure Navigate to the CodeReady Workspaces Dashboard. See Section 1.1, "Navigating CodeReady Workspaces using the Dashboard" . Navigate to User Preferences : click your username in the top right corner, and then click the User Preferences tab. Click the Add Container Registry button in the Container Registries tab and perform the following actions: Enter the container registry domain name in the Registry field. Optionally, enter the username of your account at this registry in the Username field. Enter the password in the Password field to authenticate with the container registry. Click the Add button. Verification Verify that there is a new entry in the Container Registries tab. Create a workspace that uses a container image from the specified container registry. See Section 4.2, "Authoring a devfile 2" .
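For illustration only, the following is a minimal sketch of a devfile component that pulls its image from a private registry. The registry domain, image path, and alias are placeholder values rather than anything defined by this documentation; once matching credentials are saved in the Container Registries tab, a workspace built from such a devfile can pull the image.

components:
  - type: dockerimage
    alias: private-tools
    image: 'registry.example.com/devteam/custom-tooling:latest'
    mountSources: true
    memoryLimit: 512Mi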
Additional resources Kubernetes documentation: Pull an Image from a Private Registry
[ "apiVersion: v2 publisher: eclipse name: che-machine-exec-plugin version: 7.9.2 type: Che Plugin displayName: CodeReady Workspaces machine-exec Service title: Che machine-exec Service Plugin description: CodeReady Workspaces Plug-in with che-machine-exec service to provide creation terminal or tasks for Eclipse CHE workspace containers. icon: https://www.eclipse.org/che/images/logo-eclipseche.svg repository: https://github.com/eclipse-che/che-machine-exec/ firstPublicationDate: \"2020-03-18\" category: Other spec: endpoints: - name: \"che-machine-exec\" public: true targetPort: 4444 attributes: protocol: ws type: terminal discoverable: false secure: true cookiesAuthEnabled: true containers: - name: che-machine-exec image: \"quay.io/eclipse/che-machine-exec:7.9.2\" ports: - exposedPort: 4444 command: ['/go/bin/che-machine-exec', '--static', '/cloud-shell', '--url', '127.0.0.1:4444']", "apiVersion: v2 category: Language description: This extension provides a live preview, syntax highlighting and snippets for the AsciiDoc format using Asciidoctor flavor displayName: AsciiDoc support firstPublicationDate: \"2019-12-02\" icon: https://www.eclipse.org/che/images/logo-eclipseche.svg name: vscode-asciidoctor publisher: joaompinto repository: https://github.com/asciidoctor/asciidoctor-vscode title: AsciiDoctor Plug-in type: Visual Studio Code extension version: 2.7.7 spec: extensions: - https://github.com/asciidoctor/asciidoctor-vscode/releases/download/v2.7.7/asciidoctor-vscode-2.7.7.vsix", "FROM fedora:30 RUN dnf update -y && dnf install -y nodejs htop && node -v RUN mkdir /home/jboss ENV HOME=/home/jboss RUN mkdir /projects && chmod -R g+rwX /projects && chmod -R g+rwX \"USD{HOME}\" CMD USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE}", "FROM fedora:30 RUN dnf update -y && dnf install -y nodejs htop && node -v RUN mkdir /home/jboss ENV HOME=/home/jboss RUN mkdir /projects && chmod -R g+rwX /projects && chmod -R g+rwX \"USD{HOME}\" ENTRYPOINT USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE}", "#!/bin/sh set -e export USER_ID=USD(id -u) export GROUP_ID=USD(id -g) if ! 
whoami >/dev/null 2>&1; then echo \"USD{USER_NAME:-user}:x:USD{USER_ID}:0:USD{USER_NAME:-user} user:USD{HOME}:/bin/sh\" >> /etc/passwd fi Grant access to projects volume in case of non root user with sudo rights if [ \"USD{USER_ID}\" -ne 0 ] && command -v sudo >/dev/null 2>&1 && sudo -n true > /dev/null 2>&1; then sudo chown \"USD{USER_ID}:USD{GROUP_ID}\" /projects fi exec \"USD@\"", "FROM alpine:3.10.2 ENV HOME=/home/theia RUN mkdir /projects USD{HOME} && # Change permissions to let any arbitrary user for f in \"USD{HOME}\" \"/etc/passwd\" \"/projects\"; do echo \"Changing permissions on USD{f}\" && chgrp -R 0 USD{f} && chmod -R g+rwX USD{f}; done ADD entrypoint.sh /entrypoint.sh ENTRYPOINT [ \"/entrypoint.sh\" ] CMD USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE}", "apiVersion: v2 category: Language description: \"Typescript language features\" displayName: Typescript firstPublicationDate: \"2019-10-28\" icon: \"https://www.eclipse.org/che/images/logo-eclipseche.svg\" name: typescript publisher: che-incubator repository: \"https://github.com/Microsoft/vscode\" title: \"Typescript language features\" type: \"Visual Studio Code extension\" version: remote-bin-with-override-entrypoint spec: containers: - image: \"example/fedora-for-ts-remote-plugin-without-endpoint:latest\" memoryLimit: 512Mi name: vscode-typescript command: - sh - -c args: - USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE} extensions: - \"https://github.com/che-incubator/ms-code.typescript/releases/download/v1.35.1/che-typescript-language-1.35.1.vsix\"", "apiVersion: v2 category: Language description: \"Typescript language features\" displayName: Typescript firstPublicationDate: \"2019-10-28\" icon: \"https://www.eclipse.org/che/images/logo-eclipseche.svg\" name: typescript publisher: che-incubator repository: \"https://github.com/Microsoft/vscode\" title: \"Typescript language features\" type: \"Visual Studio Code extension\" version: remote-bin-with-override-entrypoint spec: containers: - image: \"example/fedora-for-ts-remote-plugin-without-endpoint:latest\" memoryLimit: 512Mi name: vscode-typescript args: - sh - -c - USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE} extensions: - \"https://github.com/che-incubator/ms-code.typescript/releases/download/v1.35.1/che-typescript-language-1.35.1.vsix\"", "- type: chePlugin id: 1", "- type: chePlugin reference: 1", "- id: publisher/my-vscode-ext 1 repository: 2 url: https://github.com/publisher/my-vscode-ext 3 revision: 1.7.2 4 aliases: 5 - publisher/my-vscode-ext-revised preferences: 6 asciidoc.use_asciidoctorpdf: true shellcheck.executablePath: /bin/shellcheck solargraph.bundlerPath: /usr/local/bin/bundle solargraph.commandPath: /usr/local/bundle/bin/solargraph sidecar: 7 image: quay.io/repository/eclipse/che-plugin-sidecar:sonarlint-2fcf341 8 name: my-vscode-ext-sidecar 9 memoryLimit: \"1500Mi\" 10 memoryRequest: \"1000Mi\" 11 cpuLimit: \"500m\" 12 cpuRequest: \"125m\" 13 command: 14 - /bin/sh args: 15 - \"-c\" - \"./entrypoint.sh\" volumeMounts: 16 - name: vscode-ext-volume 17 path: \"/home/theia/my-vscode-ext\" 18 endpoints: 19 - name: \"configuration-endpoint\" 20 public: true 21 targetPort: 61436 22 attributes: 23 protocol: http extension: https://github.com/redhat-developer/vscode-yaml/releases/download/0.4.0/redhat.vscode-yaml-0.4.0.vsix 24 skipDependencies: 25 - id-of/extension1 - id-of/extension2 extraDependencies: 26 - id-of/extension1 - id-of/extension2 metaYaml: skipIndex: <true|false> 27 skipDependencies: 28 - id-of/extension1 - id-of/extension2 extraDependencies: 29 - id-of/extension1 - 
id-of/extension2", "apiVersion: v2 1 publisher: myorg 2 name: my-vscode-ext 3 version: 1.7.2 4 type: value 5 displayName: 6 title: 7 description: 8 icon: https://www.eclipse.org/che/images/logo-eclipseche.svg 9 repository: 10 category: 11 spec: containers: 12 - image: 13 memoryLimit: 14 memoryRequest: 15 cpuLimit: 16 cpuRequest: 17 extensions: 18 - https://github.com/redhat-developer/vscode-yaml/releases/download/0.4.0/redhat.vscode-yaml-0.4.0.vsix - https://github.com/SonarSource/sonarlint-vscode/releases/download/1.16.0/sonarlint-vscode-1.16.0.vsix", "components: - id: joaopinto/vscode-asciidoctor/latest type: chePlugin", "git clone https://gist.github.com/ <your-github-username> / <gist-id>", "git clone https://gist.github.com/benoitf/85c60c8c439177ac50141d527729b9d9 1 Cloning into '85c60c8c439177ac50141d527729b9d9' remote: Enumerating objects: 3, done. remote: Counting objects: 100% (3/3), done. remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0 Unpacking objects: 100% (3/3), done.", "cd <gist-directory-name> 1", "apiVersion: v2 publisher: CoenraadS name: bracket-pair-colorizer version: 1.0.61 type: Visual Studio Code extension displayName: Bracket Pair Colorizer title: Bracket Pair Colorizer description: Bracket Pair Colorizer icon: https://raw.githubusercontent.com/redhat-developer/codeready-workspaces/crw-2-rhel-8/dependencies/che-plugin-registry/resources/images/default.svg?sanitize=true repository: https://github.com/CoenraadS/BracketPair category: Language firstPublicationDate: '2020-07-30' spec: 1 extensions: - \"{{REPOSITORY}}/CoenraadS.bracket-pair-colorizer-1.0.61.vsix\" 2 latestUpdateDate: \"2020-07-30\"", "spec: containers: - image: \"quay.io/eclipse/che-sidecar-java:8-0cfbacb\" name: vscode-java memoryLimit: \"1500Mi\" volumes: - mountPath: \"/home/theia/.m2\" name: m2", "apiVersion: 1.0.0 metadata: generateName: java-maven- projects: - name: console-java-simple source: type: git location: \"https://github.com/che-samples/console-java-simple.git\" branch: java1.11 components: - type: chePlugin id: redhat/java11/latest - type: chePlugin 1 reference: \"{{REPOSITORY}}/plugin.yaml\" - type: dockerimage alias: maven image: quay.io/eclipse/che-java11-maven:nightly memoryLimit: 512Mi mountSources: true volumes: - name: m2 containerPath: /home/user/.m2 commands: - name: maven build actions: - type: exec component: maven command: \"mvn clean install\" workdir: USD{CHE_PROJECTS_ROOT}/console-java-simple - name: maven build and run actions: - type: exec component: maven command: \"mvn clean install && java -jar ./target/*.jar\" workdir: USD{CHE_PROJECTS_ROOT}/console-java-simple", "ls -la .git CoenraadS.bracket-pair-colorizer-1.0.61.vsix README.md devfile.yaml plugin.yaml", "#!/bin/sh get modified files FILES=USD(git diff --cached --name-only --diff-filter=ACMR \"*.yaml\" | sed 's| |\\\\ |g') exit fast if no files found [ -z \"USDFILES\" ] && exit 0 grab remote origin origin=USD(git config --get remote.origin.url) url=\"USD{origin}/raw\" iterate on files and add the good prefix pattern for FILE in USD{FILES}; do sed -e \"s#{{REPOSITORY}}#USD{url}#g\" \"USD{FILE}\" > \"USD{FILE}.back\" mv \"USD{FILE}.back\" \"USD{FILE}\" done Add back to staging echo \"USDFILES\" | xargs git add exit 0", "chmod u+x .git/hooks/pre-commit", "Add files git add * Commit git commit -m \"Initial Commit for the test of our extension\" [main 98dd370] Initial Commit for the test of our extension 3 files changed, 61 insertions(+) create mode 100644 CoenraadS.bracket-pair-colorizer-1.0.61.vsix create 
mode 100644 devfile.yaml create mode 100644 plugin.yaml and push the files to the main branch git push origin", "echo \"USD(git config --get remote.origin.url)/raw/devfile.yaml\"", "echo \"https:// <che-server> /#USD(git config --get remote.origin.url)/raw/devfile.yaml\"", "components: - type: cheEditor id: registry.redhat.io/codeready-workspaces/idea-rhel8:2.15", "components: - type: cheEditor reference: https://raw.githubusercontent.com/che-incubator/jetbrains-editor-images/meta/che-pycharm/latest.meta.yaml", "components: - type: cheEditor reference: \" <URL_to_meta.yaml> \" 1", "git clone https://github.com/che-incubator/jetbrains-editor-images cd jetbrains-editor-images", "./projector.sh build [info] Select the IDE package to build (default is 'IntelliJ IDEA Community'): 1) IntelliJ IDEA Community 2) PyCharm Community [info] Select the IDE package version to build (default is '2020.3.3'): 1) 2020.3.3 2) 2020.3.2 3) 2020.3.1", "./projector.sh run", "apiVersion: v2 publisher: <publisher> 1 name: intellij-ide version: latest type: Che Editor displayName: IntelliJ Platform IDE title: IDE based on the IntelliJ Platform description: IDE based on the IntelliJ Platform running using Projector icon: https://www.jetbrains.com/apple-touch-icon.png category: Editor repository: https://github.com/che-incubator/jetbrains-editor-images firstPublicationDate: \"2021-04-10\" spec: endpoints: - name: intellij public: true targetPort: 8887 attributes: protocol: http type: ide path: /projector-client/index.html?backgroundColor=434343&wss containers: - name: intellij-ide image: <registry>/<image>:<tag> 2 mountSources: true volumes: - mountPath: \"/home/projector-user\" name: projector-user ports: - exposedPort: 8887 memoryLimit: \"4096M\"", "git clone https://github.com/che-incubator/jetbrains-editor-images cd jetbrains-editor-images", "./projector build --tag <tag> --url <URL>", "./projector.sh build --tag che-idea:2020.3.3 --url https://download-cdn.jetbrains.com/idea/ideaIC-2020.3.3.tar.gz", "./projector.sh build --tag che-pycharm:2020.3.5 --url https://download.jetbrains.com/python/pycharm-community-2020.3.5.tar.gz", "./projector.sh build --tag che-webstorm:2020.3.3 --url https://download.jetbrains.com/webstorm/WebStorm-2020.3.3.tar.gz", "./projector.sh build --tag che-idea-ultimate:2020.2.2 --url https://download.jetbrains.com/idea/ideaIU-2020.2.2.tar.gz", "./projector.sh build --tag che-android-studio:4.2.0.22 --url https://redirector.gvt1.com/edgedl/android/studio/ide-zips/4.2.0.22/android-studio-ide-202.7188722-linux.tar.gz", "./projector.sh run <tag>", "./projector.sh run che-idea:2020.3.3", "./projector.sh run che-pycharm:2020.3.5", "./projector.sh run che-webstorm:2020.3.3", "./projector.sh run che-idea-ultimate:2020.2.2", "./projector.sh run che-android-studio:4.2.0.22", "apiVersion: v2 publisher: <publisher> 1 name: intellij-ide version: latest type: Che Editor displayName: IntelliJ Platform IDE title: IDE based on the IntelliJ Platform description: IDE based on the IntelliJ Platform running using Projector icon: https://www.jetbrains.com/apple-touch-icon.png category: Editor repository: https://github.com/che-incubator/jetbrains-editor-images firstPublicationDate: \"2021-04-10\" spec: endpoints: - name: intellij public: true targetPort: 8887 attributes: protocol: http type: ide path: /projector-client/index.html?backgroundColor=434343&wss containers: - name: intellij-ide image: <registry>/<image>:<tag> 2 mountSources: true volumes: - mountPath: \"/home/projector-user\" name: projector-user 
ports: - exposedPort: 8887 memoryLimit: \"4096M\"", "openssl base64 -e -A -in ' <License ID> - for 2018.1 or later.txt'", "apiVersion: v1 kind: Secret metadata: name: jetbrains-offline-activation-code labels: app.kubernetes.io/component: workspace-secret app.kubernetes.io/part-of: che.eclipse.org annotations: che.eclipse.org/automount-workspace-secret: 'false' 1 che.eclipse.org/mount-path: /tmp/ che.eclipse.org/mount-as: file data: idea.key: <base64_encoded_activation_code> pycharm.key: <base64_encoded_activation_code> webstorm.key: <base64_encoded_activation_code> phpstorm.key: <base64_encoded_activation_code> goland.key: <base64_encoded_activation_code>", "oc apply -f secret.yaml", "components: - type: cheEditor automountWorkspaceSecrets: true reference: \" <URL_to_meta.yaml> \"", "version: 1.0.0 editors: - id: eclipse/che-theia/next title: Eclipse Theia development version. displayName: theia-ide description: Eclipse Theia, get the latest release each day. icon: https://raw.githubusercontent.com/theia-ide/theia/master/logo/theia-logo-no-text-black.svg?sanitize=true repository: https://github.com/eclipse-che/che-theia firstPublicationDate: \"2021-01-01\" endpoints: - name: \"theia\" public: true targetPort: 3100 attributes: protocol: http type: ide secure: true cookiesAuthEnabled: true discoverable: false containers: - name: theia-ide image: \"<your-ide-image>\" mountSources: true ports: - exposedPort: 3100 memoryLimit: \"512M\"", "apiVersion: 1.0.0 metadata: name: che-theia-based-ide components: - type: cheEditor reference: '<meta.yaml URL>'", "- id: redhat/java8/latest type: chePlugin", "components: - id: redhat/php/latest memoryLimit: 1Gi type: chePlugin - id: redhat/php-debugger/latest memoryLimit: 256Mi type: chePlugin - mountSources: true endpoints: - name: 8080/tcp port: 8080 memoryLimit: 512Mi type: dockerimage volumes: - name: composer containerPath: {prod-home}/.composer - name: symfony containerPath: {prod-home}/.symfony alias: php image: 'quay.io/eclipse/che-php-7:nightly' - id: redhat/java8/latest type: chePlugin" ]
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/end-user_guide/customizing-developer-environments_crw
Chapter 32. Jira Add Comment Sink
Chapter 32. Jira Add Comment Sink Add a new comment to an existing issue in Jira. The Kamelet expects the following headers to be set: issueKey / ce-issueKey : as the issue code. The comment is set in the body of the message. 32.1. Configuration Options The following table summarizes the configuration options available for the jira-add-comment-sink Kamelet: Property Name Description Type Default Example jiraUrl * Jira URL The URL of your instance of Jira string "http://my_jira.com:8081" password * Password The password or the API Token to access Jira string username * Username The username to access Jira string Note Fields marked with an asterisk (*) are mandatory. 32.2. Dependencies At runtime, the jira-add-comment-sink Kamelet relies upon the presence of the following dependencies: camel:core camel:jackson camel:jira camel:kamelet mvn:com.fasterxml.jackson.datatype:jackson-datatype-joda:2.12.4.redhat-00001 32.3. Usage This section describes how you can use the jira-add-comment-sink . 32.3.1. Knative Sink You can use the jira-add-comment-sink Kamelet as a Knative sink by binding it to a Knative object. jira-add-comment-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-add-comment-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueKey" value: "MYP-167" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: "jira server url" username: "username" password: "password" 32.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 32.3.1.2. Procedure for using the cluster CLI Save the jira-add-comment-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-add-comment-sink-binding.yaml 32.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-add-comment-sink-binding timer-source?message="The new comment"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-167 jira-add-comment-sink?password="password"\&username="username"\&jiraUrl="jira url" This command creates the KameletBinding in the current namespace on the cluster. 32.3.2. Kafka Sink You can use the jira-add-comment-sink Kamelet as a Kafka sink by binding it to a Kafka topic. jira-add-comment-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-add-comment-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: "issueKey" value: "MYP-167" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-add-comment-sink properties: jiraUrl: "jira server url" username: "username" password: "password" 32.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 32.3.2.2. 
Procedure for using the cluster CLI Save the jira-add-comment-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f jira-add-comment-sink-binding.yaml 32.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind --name jira-add-comment-sink-binding timer-source?message="The new comment"\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-167 jira-add-comment-sink?password="password"\&username="username"\&jiraUrl="jira url" This command creates the KameletBinding in the current namespace on the cluster. 32.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jira-add-comment-sink.kamelet.yaml
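As an optional check, assuming the binding name used in this chapter and that the oc and kamel command-line tools are available, commands along the following lines show whether the KameletBinding was created and let you follow the logs of the underlying integration; exact resource names and output depend on your cluster.

oc get kameletbinding jira-add-comment-sink-binding
kamel logs jira-add-comment-sink-binding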
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-add-comment-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueKey\" value: \"MYP-167\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"", "apply -f jira-add-comment-sink-binding.yaml", "kamel bind --name jira-add-comment-sink-binding timer-source?message=\"The new comment\"\\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-167 jira-add-comment-sink?password=\"password\"\\&username=\"username\"\\&jiraUrl=\"jira url\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jira-add-comment-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: insert-header-action properties: name: \"issueKey\" value: \"MYP-167\" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jira-add-comment-sink properties: jiraUrl: \"jira server url\" username: \"username\" password: \"password\"", "apply -f jira-add-comment-sink-binding.yaml", "kamel bind --name jira-add-comment-sink-binding timer-source?message=\"The new comment\"\\&period=60000 --step insert-header-action -p step-0.name=issueKey -p step-0.value=MYP-167 jira-add-comment-sink?password=\"password\"\\&username=\"username\"\\&jiraUrl=\"jira url\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/jira-add-comment-sink
Chapter 14. Ingress [config.openshift.io/v1]
Chapter 14. Ingress [config.openshift.io/v1] Description Ingress holds cluster-wide information about ingress, including the default ingress domain used for routes. The canonical name is cluster . Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 14.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 14.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description appsDomain string appsDomain is an optional domain to use instead of the one specified in the domain field when a Route is created without specifying an explicit host. If appsDomain is nonempty, this value is used to generate default host values for Route. Unlike domain, appsDomain may be modified after installation. This assumes a new ingresscontroller has been setup with a wildcard certificate. componentRoutes array componentRoutes is an optional list of routes that are managed by OpenShift components that a cluster-admin is able to configure the hostname and serving certificate for. The namespace and name of each route in this list should match an existing entry in the status.componentRoutes list. To determine the set of configurable Routes, look at namespace and name of entries in the .status.componentRoutes list, where participating operators write the status of configurable routes. componentRoutes[] object ComponentRouteSpec allows for configuration of a route's hostname and serving certificate. domain string domain is used to generate a default host name for a route when the route's host name is empty. The generated host name will follow this pattern: "<route-name>.<route-namespace>.<domain>". It is also used as the default wildcard domain suffix for ingress. The default ingresscontroller domain will follow this pattern: "*.<domain>". Once set, changing domain is not currently supported. loadBalancer object loadBalancer contains the load balancer details in general which are not only specific to the underlying infrastructure provider of the current cluster and are required for Ingress Controller to work on OpenShift. requiredHSTSPolicies array requiredHSTSPolicies specifies HSTS policies that are required to be set on newly created or updated routes matching the domainPattern/s and namespaceSelector/s that are specified in the policy. Each requiredHSTSPolicy must have at least a domainPattern and a maxAge to validate a route HSTS Policy route annotation, and affect route admission. 
A candidate route is checked for HSTS Policies if it has the HSTS Policy route annotation: "haproxy.router.openshift.io/hsts_header" E.g. haproxy.router.openshift.io/hsts_header: max-age=31536000;preload;includeSubDomains - For each candidate route, if it matches a requiredHSTSPolicy domainPattern and optional namespaceSelector, then the maxAge, preloadPolicy, and includeSubdomainsPolicy must be valid to be admitted. Otherwise, the route is rejected. - The first match, by domainPattern and optional namespaceSelector, in the ordering of the RequiredHSTSPolicies determines the route's admission status. - If the candidate route doesn't match any requiredHSTSPolicy domainPattern and optional namespaceSelector, then it may use any HSTS Policy annotation. The HSTS policy configuration may be changed after routes have already been created. An update to a previously admitted route may then fail if the updated route does not conform to the updated HSTS policy configuration. However, changing the HSTS policy configuration will not cause a route that is already admitted to stop working. Note that if there are no RequiredHSTSPolicies, any HSTS Policy annotation on the route is valid. requiredHSTSPolicies[] object 14.1.2. .spec.componentRoutes Description componentRoutes is an optional list of routes that are managed by OpenShift components that a cluster-admin is able to configure the hostname and serving certificate for. The namespace and name of each route in this list should match an existing entry in the status.componentRoutes list. To determine the set of configurable Routes, look at namespace and name of entries in the .status.componentRoutes list, where participating operators write the status of configurable routes. Type array 14.1.3. .spec.componentRoutes[] Description ComponentRouteSpec allows for configuration of a route's hostname and serving certificate. Type object Required hostname name namespace Property Type Description hostname string hostname is the hostname that should be used by the route. name string name is the logical name of the route to customize. The namespace and name of this componentRoute must match a corresponding entry in the list of status.componentRoutes if the route is to be customized. namespace string namespace is the namespace of the route to customize. The namespace and name of this componentRoute must match a corresponding entry in the list of status.componentRoutes if the route is to be customized. servingCertKeyPairSecret object servingCertKeyPairSecret is a reference to a secret of type kubernetes.io/tls in the openshift-config namespace. The serving cert/key pair must match and will be used by the operator to fulfill the intent of serving with this name. If the custom hostname uses the default routing suffix of the cluster, the Secret specification for a serving certificate will not be needed. 14.1.4. .spec.componentRoutes[].servingCertKeyPairSecret Description servingCertKeyPairSecret is a reference to a secret of type kubernetes.io/tls in the openshift-config namespace. The serving cert/key pair must match and will be used by the operator to fulfill the intent of serving with this name. If the custom hostname uses the default routing suffix of the cluster, the Secret specification for a serving certificate will not be needed. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 14.1.5. 
.spec.loadBalancer Description loadBalancer contains the load balancer details in general which are not only specific to the underlying infrastructure provider of the current cluster and are required for Ingress Controller to work on OpenShift. Type object Property Type Description platform object platform holds configuration specific to the underlying infrastructure provider for the ingress load balancers. When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. 14.1.6. .spec.loadBalancer.platform Description platform holds configuration specific to the underlying infrastructure provider for the ingress load balancers. When omitted, this means the user has no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. Type object Property Type Description aws object aws contains settings specific to the Amazon Web Services infrastructure provider. type string type is the underlying infrastructure provider for the cluster. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "Libvirt", "OpenStack", "VSphere", "oVirt", "KubeVirt", "EquinixMetal", "PowerVS", "AlibabaCloud", "Nutanix" and "None". Individual components may not support all platforms, and must handle unrecognized platforms as None if they do not support that platform. 14.1.7. .spec.loadBalancer.platform.aws Description aws contains settings specific to the Amazon Web Services infrastructure provider. Type object Required type Property Type Description type string type allows user to set a load balancer type. When this field is set the default ingresscontroller will get created using the specified LBType. If this field is not set then the default ingress controller of LBType Classic will be created. Valid values are: * "Classic": A Classic Load Balancer that makes routing decisions at either the transport layer (TCP/SSL) or the application layer (HTTP/HTTPS). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#clb * "NLB": A Network Load Balancer that makes routing decisions at the transport layer (TCP/SSL). See the following for additional details: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html#nlb 14.1.8. .spec.requiredHSTSPolicies Description requiredHSTSPolicies specifies HSTS policies that are required to be set on newly created or updated routes matching the domainPattern/s and namespaceSelector/s that are specified in the policy. Each requiredHSTSPolicy must have at least a domainPattern and a maxAge to validate a route HSTS Policy route annotation, and affect route admission. A candidate route is checked for HSTS Policies if it has the HSTS Policy route annotation: "haproxy.router.openshift.io/hsts_header" E.g. haproxy.router.openshift.io/hsts_header: max-age=31536000;preload;includeSubDomains - For each candidate route, if it matches a requiredHSTSPolicy domainPattern and optional namespaceSelector, then the maxAge, preloadPolicy, and includeSubdomainsPolicy must be valid to be admitted. Otherwise, the route is rejected. - The first match, by domainPattern and optional namespaceSelector, in the ordering of the RequiredHSTSPolicies determines the route's admission status. - If the candidate route doesn't match any requiredHSTSPolicy domainPattern and optional namespaceSelector, then it may use any HSTS Policy annotation. 
The HSTS policy configuration may be changed after routes have already been created. An update to a previously admitted route may then fail if the updated route does not conform to the updated HSTS policy configuration. However, changing the HSTS policy configuration will not cause a route that is already admitted to stop working. Note that if there are no RequiredHSTSPolicies, any HSTS Policy annotation on the route is valid. Type array 14.1.9. .spec.requiredHSTSPolicies[] Description Type object Required domainPatterns Property Type Description domainPatterns array (string) domainPatterns is a list of domains for which the desired HSTS annotations are required. If domainPatterns is specified and a route is created with a spec.host matching one of the domains, the route must specify the HSTS Policy components described in the matching RequiredHSTSPolicy. The use of wildcards is allowed like this: *.foo.com matches everything under foo.com. foo.com only matches foo.com, so to cover foo.com and everything under it, you must specify both. includeSubDomainsPolicy string includeSubDomainsPolicy means the HSTS Policy should apply to any subdomains of the host's domain name. Thus, for the host bar.foo.com, if includeSubDomainsPolicy was set to RequireIncludeSubDomains: - the host app.bar.foo.com would inherit the HSTS Policy of bar.foo.com - the host bar.foo.com would inherit the HSTS Policy of bar.foo.com - the host foo.com would NOT inherit the HSTS Policy of bar.foo.com - the host def.foo.com would NOT inherit the HSTS Policy of bar.foo.com maxAge object maxAge is the delta time range in seconds during which hosts are regarded as HSTS hosts. If set to 0, it negates the effect, and hosts are removed as HSTS hosts. If set to 0 and includeSubdomains is specified, all subdomains of the host are also removed as HSTS hosts. maxAge is a time-to-live value, and if this policy is not refreshed on a client, the HSTS policy will eventually expire on that client. namespaceSelector object namespaceSelector specifies a label selector such that the policy applies only to those routes that are in namespaces with labels that match the selector, and are in one of the DomainPatterns. Defaults to the empty LabelSelector, which matches everything. preloadPolicy string preloadPolicy directs the client to include hosts in its host preload list so that it never needs to do an initial load to get the HSTS header (note that this is not defined in RFC 6797 and is therefore client implementation-dependent). 14.1.10. .spec.requiredHSTSPolicies[].maxAge Description maxAge is the delta time range in seconds during which hosts are regarded as HSTS hosts. If set to 0, it negates the effect, and hosts are removed as HSTS hosts. If set to 0 and includeSubdomains is specified, all subdomains of the host are also removed as HSTS hosts. maxAge is a time-to-live value, and if this policy is not refreshed on a client, the HSTS policy will eventually expire on that client. Type object Property Type Description largestMaxAge integer The largest allowed value (in seconds) of the RequiredHSTSPolicy max-age This value can be left unspecified, in which case no upper limit is enforced. smallestMaxAge integer The smallest allowed value (in seconds) of the RequiredHSTSPolicy max-age Setting max-age=0 allows the deletion of an existing HSTS header from a host. This is a necessary tool for administrators to quickly correct mistakes. This value can be left unspecified, in which case no lower limit is enforced. 14.1.11.
.spec.requiredHSTSPolicies[].namespaceSelector Description namespaceSelector specifies a label selector such that the policy applies only to those routes that are in namespaces with labels that match the selector, and are in one of the DomainPatterns. Defaults to the empty LabelSelector, which matches everything. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 14.1.12. .spec.requiredHSTSPolicies[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 14.1.13. .spec.requiredHSTSPolicies[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 14.1.14. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description componentRoutes array componentRoutes is where participating operators place the current route status for routes whose hostnames and serving certificates can be customized by the cluster-admin. componentRoutes[] object ComponentRouteStatus contains information allowing configuration of a route's hostname and serving certificate. defaultPlacement string defaultPlacement is set at installation time to control which nodes will host the ingress router pods by default. The options are control-plane nodes or worker nodes. This field works by dictating how the Cluster Ingress Operator will consider unset replicas and nodePlacement fields in IngressController resources when creating the corresponding Deployments. See the documentation for the IngressController replicas and nodePlacement fields for more information. When omitted, the default value is Workers 14.1.15. .status.componentRoutes Description componentRoutes is where participating operators place the current route status for routes whose hostnames and serving certificates can be customized by the cluster-admin. Type array 14.1.16. .status.componentRoutes[] Description ComponentRouteStatus contains information allowing configuration of a route's hostname and serving certificate. Type object Required defaultHostname name namespace relatedObjects Property Type Description conditions array conditions are used to communicate the state of the componentRoutes entry. Supported conditions include Available, Degraded and Progressing. If available is true, the content served by the route can be accessed by users. 
This includes cases where a default may continue to serve content while the customized route specified by the cluster-admin is being configured. If Degraded is true, that means something has gone wrong trying to handle the componentRoutes entry. The currentHostnames field may or may not be in effect. If Progressing is true, that means the component is taking some action related to the componentRoutes entry. conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } consumingUsers array (string) consumingUsers is a slice of ServiceAccounts that need to have read permission on the servingCertKeyPairSecret secret. currentHostnames array (string) currentHostnames is the list of current names used by the route. Typically, this list should consist of a single hostname, but if multiple hostnames are supported by the route the operator may write multiple entries to this list. defaultHostname string defaultHostname is the hostname of this route prior to customization. name string name is the logical name of the route to customize. It does not have to be the actual name of a route resource but it cannot be renamed. The namespace and name of this componentRoute must match a corresponding entry in the list of spec.componentRoutes if the route is to be customized. namespace string namespace is the namespace of the route to customize. It must be a real namespace. Using an actual namespace ensures that no two components will conflict and the same component can be installed multiple times. The namespace and name of this componentRoute must match a corresponding entry in the list of spec.componentRoutes if the route is to be customized. relatedObjects array relatedObjects is a list of resources which are useful when debugging or inspecting how spec.componentRoutes is applied. relatedObjects[] object ObjectReference contains enough information to let you inspect or modify the referred object. 14.1.17. .status.componentRoutes[].conditions Description conditions are used to communicate the state of the componentRoutes entry. Supported conditions include Available, Degraded and Progressing. If available is true, the content served by the route can be accessed by users. This includes cases where a default may continue to serve content while the customized route specified by the cluster-admin is being configured. If Degraded is true, that means something has gone wrong trying to handle the componentRoutes entry. The currentHostnames field may or may not be in effect. If Progressing is true, that means the component is taking some action related to the componentRoutes entry. Type array 14.1.18. .status.componentRoutes[].conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 14.1.19. .status.componentRoutes[].relatedObjects Description relatedObjects is a list of resources which are useful when debugging or inspecting how spec.componentRoutes is applied. Type array 14.1.20. .status.componentRoutes[].relatedObjects[] Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Required group name resource Property Type Description group string group of the referent. name string name of the referent. namespace string namespace of the referent. resource string resource of the referent. 14.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/ingresses DELETE : delete collection of Ingress GET : list objects of kind Ingress POST : create an Ingress /apis/config.openshift.io/v1/ingresses/{name} DELETE : delete an Ingress GET : read the specified Ingress PATCH : partially update the specified Ingress PUT : replace the specified Ingress /apis/config.openshift.io/v1/ingresses/{name}/status GET : read status of the specified Ingress PATCH : partially update status of the specified Ingress PUT : replace status of the specified Ingress 14.2.1. /apis/config.openshift.io/v1/ingresses Table 14.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Ingress Table 14.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 14.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Ingress Table 14.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 14.5. HTTP responses HTTP code Reponse body 200 - OK IngressList schema 401 - Unauthorized Empty HTTP method POST Description create an Ingress Table 14.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.7. Body parameters Parameter Type Description body Ingress schema Table 14.8. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 202 - Accepted Ingress schema 401 - Unauthorized Empty 14.2.2. /apis/config.openshift.io/v1/ingresses/{name} Table 14.9. Global path parameters Parameter Type Description name string name of the Ingress Table 14.10. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Ingress Table 14.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 14.12. Body parameters Parameter Type Description body DeleteOptions schema Table 14.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Ingress Table 14.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 14.15. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Ingress Table 14.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.17. Body parameters Parameter Type Description body Patch schema Table 14.18. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Ingress Table 14.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.20. Body parameters Parameter Type Description body Ingress schema Table 14.21. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 201 - Created Ingress schema 401 - Unauthorized Empty 14.2.3. /apis/config.openshift.io/v1/ingresses/{name}/status Table 14.22. Global path parameters Parameter Type Description name string name of the Ingress Table 14.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Ingress Table 14.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 14.25. 
HTTP responses HTTP code Reponse body 200 - OK Ingress schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Ingress Table 14.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.27. Body parameters Parameter Type Description body Patch schema Table 14.28. HTTP responses HTTP code Reponse body 200 - OK Ingress schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Ingress Table 14.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 14.30. Body parameters Parameter Type Description body Ingress schema Table 14.31. HTTP responses HTTP code Response body 200 - OK Ingress schema 201 - Created Ingress schema 401 - Unauthorized Empty
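As a minimal illustration of how the endpoints documented above are typically exercised, the following commands use the standard oc client against the documented paths. The object name cluster and the annotation used in the patch are assumptions for the example and are not taken from the tables above.
# List objects of kind Ingress (GET /apis/config.openshift.io/v1/ingresses), reusing the current oc credentials
oc get --raw /apis/config.openshift.io/v1/ingresses
# Read the specified Ingress (GET /apis/config.openshift.io/v1/ingresses/{name}), assuming the name "cluster"
oc get ingresses.config.openshift.io cluster -o yaml
# Partially update the specified Ingress (PATCH /apis/config.openshift.io/v1/ingresses/{name}) with a merge patch;
# the annotation is purely illustrative
oc patch ingresses.config.openshift.io cluster --type=merge -p '{"metadata":{"annotations":{"example.com/note":"reviewed"}}}'
Behind the scenes, oc translates these calls into the GET and PATCH requests described in the tables in this chapter.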
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/config_apis/ingress-config-openshift-io-v1
Machine management
Machine management OpenShift Container Platform 4.15 Adding and maintaining cluster machines Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/machine_management/index
Chapter 5. Setting up Systems as IdM Clients
Chapter 5. Setting up Systems as IdM Clients A client is any system which is a member of the Identity Management domain. While this is frequently a Red Hat Enterprise Linux system (and IdM has special tools to make configuring Red Hat Enterprise Linux clients very simple), machines with other operating systems can also be added to the IdM domain. One important aspect of an IdM client is that only the system configuration determines whether the system is part of the domain. (The configuration includes things like belonging to the Kerberos domain, DNS domain, and having the proper authentication and certificate setup.) Note IdM does not require any sort of agent or daemon running on a client for the client to join the domain. However, for the best management options, security, and performance, clients should run the System Security Services Daemon (SSSD). For more information on SSSD, see the SSSD chapter in the Deployment Guide and the SSSD project page . This chapter explains how to configure a system to join an IdM domain. Note Clients can only be configured after at least one IdM server has been installed. 5.1. What Happens in Client Setup Whether the client configuration is performed automatically on Red Hat Enterprise Linux systems using the client setup script or manually on other systems, the general process of configuring a machine to serve as an IdM client is mostly the same, with slight variation depending on the platform: Retrieve the CA certificate for the IdM CA. Create a separate Kerberos configuration to test the provided credentials. This enables a Kerberos connection to the IdM XML-RPC server, necessary to join the IdM client to the IdM domain. This Kerberos configuration is ultimately discarded. Setting up the Kerberos configuration includes specifying the realm and domain details, and default ticket attributes. Forwardable tickets are configured by default, which facilitates connection to the administration interface from any operating system, and also provides for auditing of administration operations. For example, this is the Kerberos configuration for Red Hat Enterprise Linux systems: Run the ipa-join command to perform the actual join. Obtain a service principal for the host service and install it into /etc/krb5.keytab . For example, host/[email protected] . Enable certmonger, retrieve an SSL server certificate, and install the certificate in /etc/pki/nssdb . Disable the nscd daemon. Configure SSSD or LDAP/KRB5, including NSS and PAM configuration files. Configure an OpenSSH server and client, as well as enabling the host to create DNS SSHFP records. Configure NTP.
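On Red Hat Enterprise Linux, the steps listed above are normally driven by the client setup script rather than performed by hand. The following invocation is a minimal sketch that reuses the example domain, realm, and server names from the Kerberos configuration shown for this chapter; the exact options you need may differ for your environment.
# Enroll this system in the example.com IdM domain; the command prompts for credentials
# that are authorized to add hosts (for example, the admin user)
ipa-client-install --domain=example.com --realm=EXAMPLE.COM --server=server.example.com --mkhomedir
# The manual join step described above can also be run directly once the temporary
# Kerberos configuration and suitable credentials are in place
ipa-join -s server.example.com
After enrollment, the host keytab is stored in /etc/krb5.keytab and SSSD is configured as described in the preceding steps.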
[ "[libdefaults] default_realm = EXAMPLE.COM dns_lookup_realm = false dns_lookup_kdc = false rdns = false forwardable = yes ticket_lifetime = 24h [realms] EXAMPLE.COM = { kdc = server.example.com:88 admin_server = server.example.com:749 } [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/setting-up-clients
Chapter 376. XPath Language
Chapter 376. XPath Language Available as of Camel version 1.1 Camel supports XPath to allow an Expression or Predicate to be used in the DSL or Xml Configuration . For example you could use XPath to create an Predicate in a Message Filter or as an Expression for a Recipient List. Streams If the message body is stream based, which means the input it receives is submitted to Camel as a stream. That means you will only be able to read the content of the stream once . So often when you use XPath as Message Filter or Content Based Router then you need to access the data multiple times, and you should use Stream Caching or convert the message body to a String prior which is safe to be re-read multiple times. from("queue:foo"). filter().xpath("//foo")). to("queue:bar") from("queue:foo"). choice().xpath("//foo")).to("queue:bar"). otherwise().to("queue:others"); 376.1. XPath Language options The XPath language supports 9 options, which are listed below. Name Default Java Type Description documentType String Name of class for document type The default value is org.w3c.dom.Document resultType NODESET String Sets the class name of the result type (type from output) The default result type is NodeSet saxon false Boolean Whether to use Saxon. factoryRef String References to a custom XPathFactory to lookup in the registry objectModel String The XPath object model to use logNamespaces false Boolean Whether to log namespaces which can assist during trouble shooting headerName String Name of header to use as input, instead of the message body threadSafety false Boolean Whether to enable thread-safety for the returned result of the xpath expression. This applies to when using NODESET as the result type, and the returned set has multiple elements. In this situation there can be thread-safety issues if you process the NODESET concurrently such as from a Camel Splitter EIP in parallel processing mode. This option prevents concurrency issues by doing defensive copies of the nodes. It is recommended to turn this option on if you are using camel-saxon or Saxon in your application. Saxon has thread-safety issues which can be prevented by turning this option on. trim true Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks 376.2. Namespaces You can easily use namespaces with XPath expressions using the Namespaces helper class. 376.3. Variables Variables in XPath is defined in different namespaces. The default namespace is http://camel.apache.org/schema/spring . Namespace URI Local part Type Description http://camel.apache.org/xml/in/ in Message the exchange.in message http://camel.apache.org/xml/out/ out Message the exchange.out message http://camel.apache.org/xml/function/ functions Object Camel 2.5: Additional functions http://camel.apache.org/xml/variables/environment-variables env Object OS environment variables http://camel.apache.org/xml/variables/system-properties system Object Java System properties http://camel.apache.org/xml/variables/exchange-property Object the exchange property Camel will resolve variables according to either: namespace given no namespace given 376.3.1. Namespace given If the namespace is given then Camel is instructed exactly what to return. However when resolving either in or out Camel will try to resolve a header with the given local part first, and return it. If the local part has the value body then the body is returned instead. 376.3.2. No namespace given If there is no namespace given then Camel resolves only based on the local part. 
Camel will try to resolve a variable in the following steps: from variables that has been set using the variable(name, value) fluent builder from message.in.header if there is a header with the given key from exchange.properties if there is a property with the given key 376.4. Functions Camel adds the following XPath functions that can be used to access the exchange: Function Argument Type Description in:body none Object Will return the in message body. in:header the header name Object Will return the in message header. out:body none Object Will return the out message body. out:header the header name Object Will return the out message header. function:properties key for property String Camel 2.5: To lookup a property using the Properties component (property placeholders). function:simple simple expression Object Camel 2.5: To evaluate a Simple expression. Caution function:properties and function:simple is not supported when the return type is a NodeSet , such as when using with a Splitter EIP. Here's an example showing some of these functions in use. And the new functions introduced in Camel 2.5: 376.5. Using XML configuration If you prefer to configure your routes in your Spring XML file then you can use XPath expressions as follows <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <camelContext id="camel" xmlns="http://activemq.apache.org/camel/schema/spring" xmlns:foo="http://example.com/person"> <route> <from uri="activemq:MyQueue"/> <filter> <xpath>/foo:person[@name='James']</xpath> <to uri="mqseries:SomeOtherQueue"/> </filter> </route> </camelContext> </beans> Notice how we can reuse the namespace prefixes, foo in this case, in the XPath expression for easier namespace based XPath expressions! See also this discussion on the mailinglist about using your own namespaces with xpath 376.6. Setting result type The XPath expression will return a result type using native XML objects such as org.w3c.dom.NodeList . But many times you want a result type to be a String. To do this you have to instruct the XPath which result type to use. In Java DSL: xpath("/foo:person/@id", String.class) In Spring DSL you use the resultType attribute to provide a fully qualified classname: <xpath resultType="java.lang.String">/foo:person/@id</xpath> In @XPath: Available as of Camel 2.1 @XPath(value = "concat('foo-',//order/name/)", resultType = String.class) String name) Where we use the xpath function concat to prefix the order name with foo- . In this case we have to specify that we want a String as result type so the concat function works. 376.7. Using XPath on Headers Available as of Camel 2.11 Some users may have XML stored in a header. To apply an XPath to a header's value you can do this by defining the 'headerName' attribute. And in Java DSL you specify the headerName as the 2nd parameter as shown: xpath("/invoice/@orderType = 'premium'", "invoiceDetails") 376.8. Examples Here is a simple example using an XPath expression as a predicate in a Message Filter If you have a standard set of namespaces you wish to work with and wish to share them across many different XPath expressions you can use the NamespaceBuilder as shown in this example In this sample we have a choice construct. 
The first choice evaluates if the message has a header key type that has the value Camel . The 2nd choice evaluates if the message body has a name tag <name> whose value is Kong . If neither is true the message is routed in the otherwise block: And the Spring XML equivalent of the route: 376.9. XPath injection You can use Bean Integration to invoke a method on a bean and use various languages such as XPath to extract a value from the message and bind it to a method parameter. The default XPath annotation has SOAP and XML namespaces available. If you want to use your own namespace URIs in an XPath expression you can use your own copy of the XPath annotation to create whatever namespace prefixes you want to use. i.e. cut and paste the code above to your own project in a different package and/or annotation name then add whatever namespace prefix/uris you want in scope when you use your annotation on a method parameter. Then when you use your annotation on a method parameter all the namespaces you want will be available for use in your XPath expression. For example public class Foo { @MessageDriven(uri = "activemq:my.queue") public void doSomething(@MyXPath("/ns1:foo/ns2:bar/text()") String correlationID, @Body String body) { // process the inbound message here } } 376.10. Using XPathBuilder without an Exchange Available as of Camel 2.3 You can now use the org.apache.camel.builder.XPathBuilder without the need for an Exchange. This comes in handy if you want to use it as a helper to do custom XPath evaluations. It requires that you pass in a CamelContext since a lot of the moving parts inside the XPathBuilder require access to the Camel Type Converter, which is why CamelContext is needed. For example you can do something like this: boolean matches = XPathBuilder.xpath("/foo/bar/@xyz").matches(context, "<foo><bar xyz='cheese'/></foo>"); This will match the given predicate. You can also evaluate expressions, as shown in the following three examples: String name = XPathBuilder.xpath("foo/bar").evaluate(context, "<foo><bar>cheese</bar></foo>", String.class); Integer number = XPathBuilder.xpath("foo/bar").evaluate(context, "<foo><bar>123</bar></foo>", Integer.class); Boolean bool = XPathBuilder.xpath("foo/bar").evaluate(context, "<foo><bar>true</bar></foo>", Boolean.class); Evaluating with a String result is a common requirement and thus you can do it a bit simpler: String name = XPathBuilder.xpath("foo/bar").evaluate(context, "<foo><bar>cheese</bar></foo>"); 376.11. Using Saxon with XPathBuilder Available as of Camel 2.3 You need to add camel-saxon as a dependency to your project. It is now easier to use Saxon with the XPathBuilder, which can be done in several ways as shown below; the latter ones are the easiest. Using a factory Using ObjectModel The easy one 376.12. Setting a custom XPathFactory using System Property Available as of Camel 2.3 Camel now supports reading the JVM system property javax.xml.xpath.XPathFactory that can be used to set a custom XPathFactory to use. This unit test shows how this can be done to use Saxon instead: Camel will log at INFO level if it uses a non-default XPathFactory such as: To use Apache Xerces you can configure the system property 376.13.
Enabling Saxon from Spring DSL Available as of Camel 2.10 Similarly to Java DSL, to enable Saxon from Spring DSL you have three options: Specifying the factory <xpath factoryRef="saxonFactory" resultType="java.lang.String">current-dateTime()</xpath> Specifying the object model <xpath objectModel="http://saxon.sf.net/jaxp/xpath/om" resultType="java.lang.String">current-dateTime()</xpath> Shortcut <xpath saxon="true" resultType="java.lang.String">current-dateTime()</xpath> 376.14. Namespace auditing to aid debugging Available as of Camel 2.10 A large number of XPath-related issues that users frequently face are linked to the usage of namespaces. You may have some misalignment between the namespaces present in your message and those that your XPath expression is aware of or referencing. XPath predicates or expressions that are unable to locate the XML elements and attributes due to namespace issues may simply look like "they are not working", when in reality all there is to it is a lack of namespace definition. Namespaces in XML are completely necessary, and while we would love to simplify their usage by implementing some magic or voodoo to wire namespaces automatically, the truth is that any action down this path would disagree with the standards and would greatly hinder interoperability. Therefore, the utmost we can do is assist you in debugging such issues by adding two new features to the XPath Expression Language, which are thus accessible from both predicates and expressions. Logging the Namespace Context of your XPath expression/predicate Every time a new XPath expression is created in the internal pool, Camel will log the namespace context of the expression under the org.apache.camel.builder.xml.XPathBuilder logger. Since Camel represents Namespace Contexts in a hierarchical fashion (parent-child relationships), the entire tree is output in a recursive manner with the following format: Any of these options can be used to activate this logging: Enable TRACE logging on the org.apache.camel.builder.xml.XPathBuilder logger, or some parent logger such as org.apache.camel or the root logger Enable the logNamespaces option as indicated in Auditing Namespaces , in which case the logging will occur on the INFO level 376.15. Auditing namespaces Camel is able to discover and dump all namespaces present on every incoming message before evaluating an XPath expression, providing all the richness of information you need to help you analyse and pinpoint possible namespace issues. To achieve this, it in turn internally uses another specially tailored XPath expression to extract all namespace mappings that appear in the message, displaying the prefix and the full namespace URI(s) for each individual mapping. Some points to take into account: The implicit XML namespace (xmlns:xml="http://www.w3.org/XML/1998/namespace") is suppressed from the output because it adds no value Default namespaces are listed under the DEFAULT keyword in the output Keep in mind that namespaces can be remapped under different scopes. Think of a top-level 'a' prefix which in inner elements can be assigned a different namespace, or the default namespace changing in inner scopes. For each discovered prefix, all associated URIs are listed. You can enable this option in Java DSL and Spring DSL.
Java DSL: XPathBuilder.xpath("/foo:person/@id", String.class).logNamespaces() Spring DSL: <xpath logNamespaces="true" resultType="String">/foo:person/@id</xpath> The result of the auditing will be appear at the INFO level under the org.apache.camel.builder.xml.XPathBuilder logger and will look like the following: 376.16. Loading script from external resource Available as of Camel 2.11 You can externalize the script and have Camel load it from a resource such as "classpath:" , "file:" , or "http:" . This is done using the following syntax: "resource:scheme:location" , eg to refer to a file on the classpath you can do: .setHeader("myHeader").xpath("resource:classpath:myxpath.txt", String.class) 376.17. Dependencies The XPath language is part of camel-core.
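The namespace sections above refer to examples that are not reproduced in this text. The following route is a minimal sketch of the pattern they describe, using the Namespaces helper from the Java DSL; the foo prefix and URI mirror the Spring XML configuration example shown earlier, while the endpoint URIs and class name are placeholders.
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.builder.xml.Namespaces;

public class PersonFilterRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Bind the foo prefix to the namespace used in the Spring XML example above
        Namespaces ns = new Namespaces("foo", "http://example.com/person");

        // Namespace-aware XPath predicate used in a Message Filter
        from("direct:start")
            .filter().xpath("/foo:person[@name='James']", ns)
            .to("mock:result");
    }
}
The same Namespaces instance can be passed to any other xpath(...) call in the route builder, so the prefix mappings are declared once and reused across expressions.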
[ "from(\"queue:foo\"). filter().xpath(\"//foo\")). to(\"queue:bar\")", "from(\"queue:foo\"). choice().xpath(\"//foo\")).to(\"queue:bar\"). otherwise().to(\"queue:others\");", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <camelContext id=\"camel\" xmlns=\"http://activemq.apache.org/camel/schema/spring\" xmlns:foo=\"http://example.com/person\"> <route> <from uri=\"activemq:MyQueue\"/> <filter> <xpath>/foo:person[@name='James']</xpath> <to uri=\"mqseries:SomeOtherQueue\"/> </filter> </route> </camelContext> </beans>", "xpath(\"/foo:person/@id\", String.class)", "<xpath resultType=\"java.lang.String\">/foo:person/@id</xpath>", "@XPath(value = \"concat('foo-',//order/name/)\", resultType = String.class) String name)", "xpath(\"/invoice/@orderType = 'premium'\", \"invoiceDetails\")", "public class Foo { @MessageDriven(uri = \"activemq:my.queue\") public void doSomething(@MyXPath(\"/ns1:foo/ns2:bar/text()\") String correlationID, @Body String body) { // process the inbound message here } }", "boolean matches = XPathBuilder.xpath(\"/foo/bar/@xyz\").matches(context, \"<foo><bar xyz='cheese'/></foo>\"));", "String name = XPathBuilder.xpath(\"foo/bar\").evaluate(context, \"<foo><bar>cheese</bar></foo>\", String.class); Integer number = XPathBuilder.xpath(\"foo/bar\").evaluate(context, \"<foo><bar>123</bar></foo>\", Integer.class); Boolean bool = XPathBuilder.xpath(\"foo/bar\").evaluate(context, \"<foo><bar>true</bar></foo>\", Boolean.class);", "String name = XPathBuilder.xpath(\"foo/bar\").evaluate(context, \"<foo><bar>cheese</bar></foo>\");", "XPathBuilder INFO Using system property javax.xml.xpath.XPathFactory:http://saxon.sf.net/jaxp/xpath/om with value: net.sf.saxon.xpath.XPathFactoryImpl when creating XPathFactory", "-Djavax.xml.xpath.XPathFactory=org.apache.xpath.jaxp.XPathFactoryImpl", "<xpath factoryRef=\"saxonFactory\" resultType=\"java.lang.String\">current-dateTime()</xpath>", "<xpath objectModel=\"http://saxon.sf.net/jaxp/xpath/om\" resultType=\"java.lang.String\">current-dateTime()</xpath>", "<xpath saxon=\"true\" resultType=\"java.lang.String\">current-dateTime()</xpath>", "[me: {prefix -> namespace}, {prefix -> namespace}], [parent: [me: {prefix -> namespace}, {prefix -> namespace}], [parent: [me: {prefix -> namespace}]]]", "XPathBuilder.xpath(\"/foo:person/@id\", String.class).logNamespaces()", "<xpath logNamespaces=\"true\" resultType=\"String\">/foo:person/@id</xpath>", "2012-01-16 13:23:45,878 [stSaxonWithFlag] INFO XPathBuilder - Namespaces discovered in message: {xmlns:a=[http://apache.org/camel], DEFAULT=[http://apache.org/default], xmlns:b=[http://apache.org/camelA, http://apache.org/camelB]}", ".setHeader(\"myHeader\").xpath(\"resource:classpath:myxpath.txt\", String.class)" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/xpath-language
Chapter 8. The bootable JAR
Chapter 8. The bootable JAR You can build and package a microservices application as a bootable JAR with the JBoss EAP JAR Maven plug-in. You can then run the application on a JBoss EAP bare-metal platform or a JBoss EAP OpenShift platform. 8.1. About the bootable JAR You can build and package a microservices application as a bootable JAR with the JBoss EAP JAR Maven plug-in. A bootable JAR contains a server, a packaged application, and the runtime required to launch the server. The JBoss EAP JAR Maven plug-in uses Galleon trimming capability to reduce the size and memory footprint of the server. Thus, you can configure the server according to your requirements, including only the Galleon layers that provide the capabilities that you need. The JBoss EAP JAR Maven plug-in supports the execution of JBoss EAP CLI script files to customize your server configuration. A CLI script includes a list of CLI commands for configuring the server. A bootable JAR is like a standard JBoss EAP server in the following ways: It supports JBoss EAP common management CLI commands. It can be managed using the JBoss EAP management console. The following limitations exist when packaging a server in a bootable JAR: CLI management operations that require a server restart are not supported. The server cannot be restarted in admin-only mode, which is a mode that starts services related to server administration. If you shut down the server, updates that you applied to the server are lost. Additionally, you can provision a hollow bootable JAR. This JAR contains only the server, so you can reuse the server to run a different application. Additional resources For information about capability trimming, see Capability Trimming . 8.2. JBoss EAP Maven plug-in You can use the JBoss EAP JAR Maven plug-in to build an application as a bootable JAR. You can retrieve the latest Maven plug-in version from the Maven repository, which is available at Index of /ga/org/wildfly/plugins/wildfly-jar-maven-plugin . In a Maven project, the src directory contains all the source files required to build your application. After the JBoss EAP JAR Maven plug-in builds the bootable JAR, the generated JAR is located in target/<application>-bootable.jar . The JBoss EAP JAR Maven plug-in also provides the following functionality: Applies CLI script commands to the server. Uses the org.jboss.eap:wildfly-galleon-pack Galleon feature pack and some of its layers for customizing the server configuration file. Supports the addition of extra files into the packaged bootable JAR, such as a keystore file. Includes the capability to create a hollow bootable JAR; that is, a bootable JAR that does not contain an application. After you use the JBoss EAP JAR Maven plug-in to create the bootable JAR, you can start the application by issuing the following command. Replace target/myapp-bootable.jar with the path to your bootable JAR. For example: Note To get a list of supported bootable JAR startup commands, append --help to the end of the startup command. For example, java -jar target/myapp-bootable.jar --help . Additional resources For information about supported JBoss EAP Galleon layers, see Available JBoss EAP layers . For information about supported Galleon plug-ins to build feature packs for your project, see the WildFly Galleon Maven Plugin Documentation . For information about selecting methods to configure the JBoss EAP Maven repository, see Use the Maven Repository . 
For information about Maven project directories, see Introduction to the Standard Directory Layout in the Apache Maven documentation. 8.3. Bootable JAR arguments View the arguments in the following table to learn about supported arguments for use with the bootable JAR. Table 8.1. Supported bootable JAR executable arguments Argument Description --help Display the help message for the specified command and exit. --deployment=<path> Argument specific to the hollow bootable JAR. Specifies the path to the WAR, JAR, EAR file or exploded directory that contains the application you want to deploy on a server. --display-galleon-config Print the content of the generated Galleon configuration file. --install-dir=<path> By default, the JVM settings are used to create a TEMP directory after the bootable JAR is started. You can use the --install-dir argument to specify a directory to install the server. -secmgr Runs the server with a security manager installed. -b<interface>=<value> Set system property jboss.bind.address.<interface> to the given value. For example, bmanagement=IP_ADDRESS . -b=<value> Set system property jboss.bind.address , which is used in configuring the bind address for the public interface. This defaults to 127.0.0.1 if no value is specified. -D<name>[=<value>] Specifies system properties that are set by the server at server runtime. The bootable JAR JVM does not set these system properties. --properties=<url> Loads system properties from a specified URL. -S<name>[=value] Set a security property. -u=<value> Set system property jboss.default.multicast.address , which is used in configuring the multicast address in the socket-binding elements in the configuration files. This defaults to 230.0.0.4 if no value is specified. --version Display the application server version and exit. 8.4. Specifying Galleon layers for your bootable JAR server You can specify Galleon layers to build a custom configuration for your server. Additionally, you can specify Galleon layers that you want excluded from the server. To reference a single feature pack, use the <feature-pack-location> element to specify its location. The following example specifies org.jboss.eap:wildfly-galleon-pack:3.0.0.GA-redhat-00001 in the <feature-pack-location> element of the Maven plug-in configuration file. <configuration> <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:3.0.0.GA-redhat-00001</feature-pack-location> </configuration> If you need to reference more than one feature pack, list them in the <feature-packs> element. The following example shows the addition of the Red Hat Single Sign-On feature pack to the <feature-packs> element: <configuration> <feature-packs> <feature-pack> <location>org.jboss.eap:wildfly-galleon-pack:3.0.0.GA-redhat-00001</location> </feature-pack> <feature-pack> <location>org.jboss.sso:keycloak-adapter-galleon-pack:9.0.10.redhat-00001</location> </feature-pack> </feature-packs> </configuration> You can combine Galleon layers from multiple feature packs to configure the bootable JAR server to include only the supported Galleon layers that provide the capabilities that you need. Note On a bare-metal platform, if you do not specify Galleon layers in your configuration file then the provisioned server contains a configuration identical to that of a default standalone-microprofile.xml configuration. 
On an OpenShift platform, after you have added the <cloud/> configuration element in the plug-in configuration and you choose not to specify Galleon layers in your configuration file, the provisioned server contains a configuration that is adjusted for the cloud environment and is similar to a default standalone-microprofile-ha.xml . Prerequisites Maven is installed. You have checked the latest Maven plug-in version, such as MAVEN_PLUGIN_VERSION .X.GA.Final-redhat-00001 , where MAVEN_PLUGIN_VERSION is the major version and X is the microversion. See Index of /ga/org/wildfly/plugins/wildfly-jar-maven-plugin . You have checked the latest Galleon feature pack version, such as 3.0.X.GA-redhat- BUILD_NUMBER , where X is the microversion of JBoss EAP XP and BUILD_NUMBER is the build number of the Galleon feature pack. Both X and BUILD_NUMBER can evolve during the JBoss EAP XP 3.0.0 product life cycle. See Index of /ga/org/jboss/eap/wildfly-galleon-pack . Note The examples shown in the procedure specify the following properties: USD{bootable.jar.maven.plugin.version} for the Maven plug-in version. USD{jboss.xp.galleon.feature.pack.version} for the Galleon feature pack version. You must set these properties in your project. For example: <properties> <bootable.jar.maven.plugin.version>4.0.3.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> </properties> Procedure Identify the supported JBoss EAP Galleon layers that provide the capabilities that you need to run your application. Reference a JBoss EAP feature pack location in the <plugin> element of the Maven project pom.xml file. You must specify the latest version of any Maven plug-in and the latest version of the org.jboss.eap:wildfly-galleon-pack Galleon feature pack, as demonstrated in the following example. The following example also displays the inclusion of a single feature-pack, which includes the jaxrs-server base layer and the jpa-distributed layer . The jaxrs-server base layer provides additional support for the server. <plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</feature-pack-location> <layers> <layer>jaxrs-server</layer> <layer>jpa-distributed</layer> </layers> <excluded-layers> <layer>jpa</layer> </excluded-layers> ... </plugins> This example also shows the exclusion of the jpa layer from the project. Note If you include the jpa-distributed layer in your project, you must exclude the jpa layer from the jaxrs-server layer. The jpa layer configures a local infinispan hibernate cache, while the jpa-distributed layer configures a remote infinispan hibernate cache. Additional resources For information about available base layers, see Base layers . For information about supported Galleon plug-ins to build feature packs for your project, see the WildFly Galleon Maven Plugin Documentation . For information about selecting methods to configure the JBoss EAP Maven repository, see Maven and the JBoss EAP MicroProfile Maven repository . For information about managing your Maven dependencies, see Dependency Management in the Apache Maven Project documentation. 8.5. Using a bootable JAR on a JBoss EAP bare-metal platform You can package an application as a bootable JAR on a JBoss EAP bare-metal platform. 
A bootable JAR contains a server, a packaged application, and the runtime required to launch the server. This procedure demonstrates packaging the MicroProfile Config microservices application as a bootable JAR with the JBoss EAP JAR Maven plug-in. See MicroProfile Config development . You can use CLI scripts to configure the server during the packaging of the bootable JAR. Important On building a web application that must be packaged inside a bootable JAR, you must specify war in the <packaging> element of your pom.xml file. For example: <packaging>war</packaging> This value is required to package the build application as a WAR file and not as the default JAR file. In a Maven project that is used solely to build a hollow bootable JAR, set the packaging value to pom . For example: <packaging>pom</packaging> You are not limited to using pom packaging when you build a hollow bootable JAR for a Maven project. You can create one by specifying true in the <hollow-jar> element for any type of packaging, such as war . See Creating a hollow bootable JAR on a JBoss EAP bare-metal platform . Prerequisites You have checked the latest Maven plug-in version, such as MAVEN_PLUGIN_VERSION .X.GA.Final-redhat-00001 , where MAVEN_PLUGIN_VERSION is the major version and X is the microversion. See Index of /ga/org/wildfly/plugins/wildfly-jar-maven-plugin . You have checked the latest Galleon feature pack version, such as 3.0.X.GA-redhat- BUILD_NUMBER , where X is the microversion of JBoss EAP XP and BUILD_NUMBER is the build number of the Galleon feature pack. Both X and BUILD_NUMBER can evolve during the JBoss EAP XP 3.0.0 product life cycle. See Index of /ga/org/jboss/eap/wildfly-galleon-pack . You have created a Maven project, set up a parent dependency, and added dependencies for creating an MicroProfile application. See MicroProfile Config development . Note The examples shown in the procedure specify the following properties: USD{bootable.jar.maven.plugin.version} for the Maven plug-in version. USD{jboss.xp.galleon.feature.pack.version} for the Galleon feature pack version. You must set these properties in your project. For example: <properties> <bootable.jar.maven.plugin.version>4.0.3.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> </properties> Procedure Add the following content to the <build> element of the pom.xml file. You must specify the latest version of any Maven plug-in and the latest version of the org.jboss.eap:wildfly-galleon-pack Galleon feature pack. For example: <plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</feature-pack-location> <layers> <layer>jaxrs-server</layer> <layer>microprofile-platform</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> Note If you do not specify Galleon layers in your pom.xml file then the bootable JAR server contains a configuration that is identical to a standalone-microprofile.xml configuration. 
Package the application as a bootable JAR: Start the application: USD NAME="foo" java -jar target/microprofile-config-bootable.jar Note The example uses NAME as the environment variable, but you can choose to use jim , which is the default value. Note To view a list of supported bootable JAR arguments, append --help to the end of the java -jar target/microprofile-config-bootable.jar command. Specify the following URL in your web browser to access the MicroProfile Config application: http://localhost:8080/config/json Verification: Test the application behaves properly by issuing the following command in your terminal: curl http://localhost:8080/config/json The following is the expected output: {"result":"Hello foo"} Additional resources For information about available MicroProfile Config functionality, see MicroProfile Config . For information about ConfigSources , see MicroProfile Config reference . 8.6. Creating a hollow bootable JAR on a JBoss EAP bare-metal platform You can package an application as a hollow bootable JAR on a JBoss EAP bare-metal platform. A hollow bootable JAR contains only the JBoss EAP server. The hollow bootable JAR is packaged by the JBoss EAP JAR Maven plug-in. The application is provided at server runtime. The hollow bootable JAR is useful if you need to re-use the server configuration for a different application. Prerequisites You have created a Maven project, set up a parent dependency, and added dependencies for creating an application. See MicroProfile Config development . You have completed the pom.xml file configuration steps outlined in Using a bootable JAR on a JBoss EAP bare-metal platform . You have checked the latest Maven plug-in version, such as MAVEN_PLUGIN_VERSION .X.GA.Final-redhat-00001 , where MAVEN_PLUGIN_VERSION is the major version and X is the microversion. See Index of /ga/org/wildfly/plugins/wildfly-jar-maven-plugin . You have checked the latest Galleon feature pack version, such as 3.0.X.GA-redhat- BUILD_NUMBER , where X is the microversion of JBoss EAP XP and BUILD_NUMBER is the build number of the Galleon feature pack. Both X and BUILD_NUMBER can evolve during the JBoss EAP XP 3.0.0 product life cycle. See Index of /ga/org/jboss/eap/wildfly-galleon-pack . Note The example shown in the procedure specifies USD{jboss.xp.galleon.feature.pack.version} for the Galleon feature pack version, but you must set the property in your project. For example: <properties> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> </properties> Procedure To build a hollow bootable JAR, you must set the <hollow-jar> plug-in configuration element to true in the project pom.xml file. For example: <plugins> <plugin> ... <configuration> <!-- This example configuration does not show a complete plug-in configuration --> ... <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</feature-pack-location> <hollow-jar>true</hollow-jar> </configuration> </plugin> </plugins> Note By specifying true in the <hollow-jar> element, the JBoss EAP JAR Maven plug-in does not include an application in the JAR. Build the hollow bootable JAR: Run the hollow bootable JAR: Important To specify the path to the WAR file that you want to deploy on the server, use the following argument, where <PATH_NAME> is the path to your deployment. --deployment=<PATH_NAME> Access the application: Note To register your web application in the root directory, name the application ROOT.war . 
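Putting the steps of this procedure together, the following commands are a minimal sketch of building and running a hollow bootable JAR on a bare-metal host; the artifact and WAR file names depend on your project's artifactId and are assumptions here, while the --deployment argument is the one documented in the bootable JAR arguments table.
# Build the hollow bootable JAR; the JBoss EAP JAR Maven plug-in writes it under target/
mvn clean package
# Start the hollow server and point it at the WAR to deploy
java -jar target/microprofile-config-bootable.jar --deployment=target/microprofile-config.war
# The application is then served under a context root derived from the WAR name
# (or under the root context if the WAR is named ROOT.war)
curl http://localhost:8080/microprofile-config/config/json
Because the server configuration lives in the hollow JAR and the application is supplied at startup, the same JAR can be reused to run a different WAR by changing only the --deployment value.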
Additional resources For information about available MicroProfile functionality, see MicroProfile Config . For more information about the JBoss EAP JAR Maven plug-in supported in JBoss EAP XP 3.0.0, see JBoss EAP Maven plug-in. 8.7. CLI scripts You can create CLI scripts to configure the server during the packaging of the bootable JAR. A CLI script is a text file that contains a sequence of CLI commands that you can use to apply additional server configurations. For example, you can create a script to add a new logger to the logging subsystem. You can also specify more complex operations in a CLI script. For example, you can group security management operations into a single command to enable HTTP authentication for the management HTTP endpoint. Note You must define CLI scripts in the <cli-session> element of the plug-in configuration before you package an application as a bootable JAR. This ensures the server configuration settings persist after packaging the bootable JAR. Although you can combine predefined Galleon layers to configure a server that deploys your application, limitations do exist. For example, you cannot enable the HTTPS undertow listener using Galleon layers when packaging the bootable JAR. Instead, you must use a CLI script. You must define the CLI scripts in the <cli-session> element of the pom.xml file. The following table shows types of CLI session attributes: Table 8.2. CLI script attributes Argument Description script-files List of paths to script files. properties-file Optional attribute that specifies a path to a properties file. This file lists Java properties that scripts can reference by using the USD{my.prop} syntax. The following example sets public inet-address to the value of all.addresses : /interface=public:write-attribute(name=inet-address,value=USD{all.addresses}) resolve-expressions Optional attribute that contains a boolean value. Indicates if system properties or expressions are resolved before sending the operation requests to the server. Value is true by default. Note CLI scripts are started in the order that they are defined in the <cli-session> element of the pom.xml file. The JBoss EAP JAR Maven plug-in starts the embedded server for each CLI session. Thus, your CLI script does not have to start or stop the embedded server. 8.8. Using a bootable JAR on a JBoss EAP OpenShift platform After you packaged an application as a bootable JAR, you can run the application on a JBoss EAP OpenShift platform. Important On OpenShift, you cannot use the EAP Operator automated transaction recovery feature with your bootable JAR. A fix for this technical limitation is planned for a future JBoss EAP XP 3.0.0 patch release. Prerequisites You have created a Maven project for MicroProfile Config development . You have checked the latest Maven plug-in version, such as MAVEN_PLUGIN_VERSION .X.GA.Final-redhat-00001 , where MAVEN_PLUGIN_VERSION is the major version and X is the microversion. See Index of /ga/org/wildfly/plugins/wildfly-jar-maven-plugin . You have checked the latest Galleon feature pack version, such as 3.0.X.GA-redhat- BUILD_NUMBER , where X is the microversion of JBoss EAP XP 3 and BUILD_NUMBER is the build number of the Galleon feature pack. Both X and BUILD_NUMBER can evolve during the JBoss EAP XP 3.0.0 product life cycle. See Index of /ga/org/jboss/eap/wildfly-galleon-pack . Note The examples shown in the procedure specify the following properties: USD{bootable.jar.maven.plugin.version} for the Maven plug-in version. 
USD{jboss.xp.galleon.feature.pack.version} for the Galleon feature pack version. You must set these properties in your project. For example: <properties> <bootable.jar.maven.plugin.version>4.0.3.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> </properties> Procedure Add the following content to the <build> element of the pom.xml file. You must specify the latest version of any Maven plug-in and the latest version of the org.jboss.eap:wildfly-galleon-pack Galleon feature pack. For example: <plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</feature-pack-location> <layers> <layer>jaxrs-server</layer> <layer>microprofile-platform</layer> </layers> <cloud/> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> Note You must include the <cloud/> element in the <configuration> element of the plug-in configuration, so the JBoss EAP Maven JAR plug-in can identify that you choose the OpenShift platform. Package the application: Log in to your OpenShift instance using the oc login command. Create a new project in OpenShift. For example: Enter the following oc commands to create an application image: 1 Creates an openshift sub-directory in the target directory. The packaged application is copied into the created sub-directory. 2 Imports the latest OpenJDK 11 imagestream tag and image information into the OpenShift project. 3 Creates a build configuration based on the microprofile-config-app directory and the OpenJDK 11 imagestream. 4 Uses the target/openshift sub-directory as the binary input to build the application. Note OpenShift applies a set of CLI script commands to the bootable JAR configuration file to adjust it to the cloud environment. You can access this script by opening the bootable-jar-build-artifacts/generated-cli-script.txt file in the Maven project /target directory . Verification: View a list of OpenShift pods available and check the pods build statuses by issuing the following command: Verify the built application image: The output shows the built application image details, such as name and image repository, tag, and so on. For the example in this procedure, the imagestream name and tag output displays microprofile-config-app:latest . Deploy the application: Important To provide system properties to the bootable JAR, you must use the JAVA_OPTS_APPEND environment variable. The following example demonstrates usage of the JAVA_OPTS_APPEND environment variable: A new application is created and started. The application configuration is exposed as a new service. Verification : Test the application behaves properly by issuing the following command in your terminal: Expected output: Additional resources For information about MicroProfile, see MicroProfile Config . For information about ConfigSources , see Default MicroProfile Config attributes . 8.9. Configure the bootable JAR for OpenShift Before using your bootable JAR, you can configure JVM settings to ensure that your standalone server operates correctly on JBoss EAP for OpenShift. Use the JAVA_OPTS_APPEND environment variable to configure JVM settings. Use the JAVA_ARGS command to provide arguments to the bootable JAR. 
You can use environment variables to set values for properties. For example, you can use the JAVA_OPTS_APPEND environment variable to set the -Dwildfly.statistics-enabled property to true : Statistics are now enabled for your server. Note Use the JAVA_ARGS environment variable, if you need to provide arguments to the bootable JAR. JBoss EAP for OpenShift provides a JDK 11 image. To run the application associated with your bootable JAR, you must first import the latest OpenJDK 11 imagestream tag and image information into your OpenShift project. You can then use environment variables to configure the JVM in the imported image. You can apply the same configuration options for configuring the JVM used for JBoss EAP for OpenShift S2I image, but with the following differences: Optional: The -Xlog capability is not available, but you can set garbage collection logging by enabling -Xlog:gc . For example: JAVA_OPTS_APPEND="-Xlog:gc*:file=/tmp/gc.log:time" . To increase initial metaspace size, you can set the GC_METASPACE_SIZE environment variable. For best metadata capacity performance, set the value to 96 . The default value for GC_MAX_METASPACE_SIZE is set as 100 , but for best metadata capacity after a garbage collection, you must set it to at least 256 . For better random file generation, use the JAVA_OPTS_APPEND environment variable to set java.security.egd property as -Djava.security.egd=file:/dev/urandom . These configurations improve the memory settings and garbage collection capability of JVM when running on your imported OpenJDK 11 image. 8.10. Using a ConfigMap in your application on OpenShift For OpenShift, you can use a deployment controller (dc) to mount the configmap into the pods used to run the application. A ConfigMap is an OpenShift resource that is used to store non-confidential data in key-value pairs. After you specify the microprofile-platform Galleon layer to add microprofile-config-smallrye subsystem and any extensions to the server configuration file, you can use a CLI script to add a new ConfigSource to the server configuration. You can save CLI scripts in an accessible directory, such as the /scripts directory, in the root directory of your Maven project. MicroProfile Config functionality is implemented in JBoss EAP using the SmallRye Config component and is provided by the microprofile-config-smallrye subsystem. This subsystem is included in the microprofile-platform Galleon layer. Prerequisites You have installed Maven. You have configured the JBoss EAP Maven repository. You have packaged an application as a bootable JAR and you can run the application on a JBoss EAP OpenShift platform. For information about building an application as a bootable JAR on an OpenShift platform, see Using a bootable JAR on a JBoss EAP OpenShift platform . Procedure Create a directory named scripts at the root directory of your project. For example: USD mkdir scripts Create a cli.properties file and save the file in the /scripts directory. Define the config.path and the config.ordinal system properties in this file. For example: Create a CLI script, such as mp-config.cli , and save it in an accessible directory in the bootable JAR, such as the /scripts directory. The following example shows the contents of the mp-config.cli script: The mp-config.cli CLI script creates a new ConfigSource , to which ordinal and path values are retrieved from a properties file. Save the script in the /scripts directory, which is located at the root directory of the project. 
Add the following configuration extract to the existing plug-in <configuration> element: <cli-sessions> <cli-session> <properties-file> scripts/cli.properties </properties-file> <script-files> <script>scripts/mp-config.cli</script> </script-files> </cli-session> </cli-sessions> Package the application: Log in to your OpenShift instance using the oc login command. Optional: If you have not previously created a target/openshift subdirectory, you must create the suddirectory by issuing the following command: Copy the packaged application into the created subdirectory. Use the target/openshift subdirectory as the binary input to build the application: Note OpenShift applies a set of CLI script commands to the bootable JAR configuration file to enable it for the cloud environment. You can access this script by opening the bootable-jar-build-artifacts/generated-cli-script.txt file in the Maven project /target directory. Create a ConfigMap . For example: Mount the ConfigMap into the application with the dc. For example: After executing the oc set volume command, the application is re-deployed with the new configuration settings. Test the output: USD curl http://USD(oc get route microprofile-config-app --template='{{ .spec.host }}')/config/json The following is the expected output: {"result":"Hello Name comes from Openshift ConfigMap"} Additional resources For information about MicroProfile Config ConfigSources attributes, see Default MicroProfile Config attributes . For information about bootable JAR arguments, see Supported bootable JAR arguments . 8.11. Creating a bootable JAR Maven project Follow the steps in the procedure to create an example Maven project. You must create a Maven project before you can perform the following procedures: Enabling JSON logging for your bootable JAR Enabling web session data storage for multiple bootable JAR instances Enabling HTTP authentication for bootable JAR with a CLI script Securing your JBoss EAP bootable JAR application with Red Hat Single Sign-On In the project pom.xml file, you can configure Maven to retrieve the project artifacts required to build your bootable JAR. Procedure Set up the Maven project: USD mvn archetype:generate \ -DgroupId=GROUP_ID \ -DartifactId=ARTIFACT_ID \ -DarchetypeGroupId=org.apache.maven.archetypes \ -DarchetypeArtifactId=maven-archetype-webapp \ -DinteractiveMode=false Where GROUP_ID is the groupId of your project and ARTIFACT_ID is the artifactId of your project. In the pom.xml file, configure Maven to retrieve the JBoss EAP BOM file from a remote repository. <repositories> <repository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga</url> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> To configure Maven to automatically manage versions for the Jakarta EE artifacts in the jboss-eap-jakartaee8 BOM, add the BOM to the <dependencyManagement> section of the project pom.xml file. 
For example: <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-jakartaee8</artifactId> <version>7.3.4.GA</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> Add the servlet API artifact, which is managed by the BOM, to the <dependency> section of the project pom.xml file, as shown in the following example: <dependency> <groupId>org.jboss.spec.javax.servlet</groupId> <artifactId>jboss-servlet-api_4.0_spec</artifactId> <scope>provided</scope> </dependency> Additional resources For information about the JBoss EAP Maven plug-in, see JBoss EAP Maven plug-in . For information about the Galleon layers, see Specifying Galleon layers for your bootable JAR server . For information about including the Red Hat Single Sign-On Galleon feature pack in your project, see Securing your JBoss EAP bootable JAR application with Red Hat Single Sign-On . 8.12. Enabling JSON logging for your bootable JAR You can enable JSON logging for your bootable JAR by configuring the server logging configuration with a CLI script. When you enable JSON logging, you can use the JSON formatter to view log messages in JSON format. The example in this procedure shows you how to enable JSON logging for your bootable JAR on a bare-metal platform and an OpenShift platform. Prerequisites You have checked the latest Maven plug-in version, such as MAVEN_PLUGIN_VERSION .X.GA.Final-redhat-00001 , where MAVEN_PLUGIN_VERSION is the major version and X is the microversion. See Index of /ga/org/wildfly/plugins/wildfly-jar-maven-plugin . You have checked the latest Galleon feature pack version, such as 3.0.X.GA-redhat- BUILD_NUMBER , where X is the minor version of JBoss EAP XP and BUILD_NUMBER is the build number of the Galleon feature pack. Both X and BUILD_NUMBER can evolve during the JBoss EAP XP 3.0.0 product life cycle. See Index of /ga/org/jboss/eap/wildfly-galleon-pack . You have created a Maven project, set up a parent dependency, and added dependencies for creating an application. See Creating a bootable JAR Maven project . Important In the Maven archetype of your Maven project, you must specify the groupID and artifactID that are specific to your project. For example: Note The examples shown in the procedure specify the following properties: USD{bootable.jar.maven.plugin.version} for the Maven plug-in version. USD{jboss.xp.galleon.feature.pack.version} for the Galleon feature pack version. You must set these properties in your project. For example: <properties> <bootable.jar.maven.plugin.version>4.0.3.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> </properties> Procedure Add the JBoss Logging and Jakarta RESTful Web Services dependencies, which are managed by the BOM, to the <dependencies> section of the project pom.xml file. For example: <dependencies> <dependency> <groupId>org.jboss.logging</groupId> <artifactId>jboss-logging</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>org.jboss.spec.javax.ws.rs</groupId> <artifactId>jboss-jaxrs-api_2.1_spec</artifactId> <scope>provided</scope> </dependency> </dependencies> Add the following content to the <build> element of the pom.xml file. You must specify the latest version of any Maven plug-in and the latest version of the org.jboss.eap:wildfly-galleon-pack Galleon feature pack. 
For example: <plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-packs> <feature-pack> <location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</location> </feature-pack> </feature-packs> <layers> <layer>jaxrs-server</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> Create the directory to store Java files: Where APPLICATION_ROOT is the directory containing the pom.xml configuration file for the application. Create a Java file RestApplication.java with the following content and save the file in the APPLICATION_ROOT/src/main/java/com/example/logging/ directory: package com.example.logging; import javax.ws.rs.ApplicationPath; import javax.ws.rs.core.Application; @ApplicationPath("/") public class RestApplication extends Application { } Create a Java file HelloWorldEndpoint.java with the following content and save the file in the APPLICATION_ROOT/src/main/java/com/example/logging/ directory: package com.example.logging; import javax.ws.rs.Path; import javax.ws.rs.core.Response; import javax.ws.rs.GET; import javax.ws.rs.Produces; import org.jboss.logging.Logger; @Path("/hello") public class HelloWorldEndpoint { private static Logger log = Logger.getLogger(HelloWorldEndpoint.class.getName()); @GET @Produces("text/plain") public Response doGet() { log.debug("HelloWorldEndpoint.doGet called"); return Response.ok("Hello from XP bootable jar!").build(); } } Create a CLI script, such as logging.cli , and save it in an accessible directory in the bootable JAR, such as the APPLICATION_ROOT /scripts directory, where APPLICATION_ROOT is the root directory of your Maven project. The script must contain the following commands: Add the following configuration extract to the plug-in <configuration> element: <cli-sessions> <cli-session> <script-files> <script>scripts/logging.cli</script> </script-files> </cli-session> </cli-sessions> This example shows the logging.cli CLI script, which modifies the server logging configuration file to enable JSON logging for your application. Package the application as a bootable JAR. Optional : To run the application on a JBoss EAP bare-metal platform, follow the steps outlined in Using a bootable JAR on a JBoss EAP bare-metal platform , but with the following difference: Start the application: Verification: You can access the application by specifying the following URL in your browser: http://127.0.0.1:8080/hello . Expected output: You can view the JSON-formatted logs, including the com.example.logging.HelloWorldEndpoint debug trace, in the application console. Optional : To run the application on a JBoss EAP OpenShift platform, complete the following steps: Add the <cloud/> element to the plug-in configuration. For example: <plugins> <plugin> ... <!-- You must evolve the existing configuration with the <cloud/> element --> <configuration > ... <cloud/> </configuration> </plugin> </plugins> Rebuild the application: Log in to your OpenShift instance using the oc login command. Create a new project in OpenShift. For example: Enter the following oc commands to create an application image: 1 Creates the target/openshift subdirectory. The packaged application is copied into the openshift subdirectory. 2 Imports the latest OpenJDK 11 imagestream tag and image information into the OpenShift project. 
3 Creates a build configuration based on the logging directory and the OpenJDK 11 imagestream. 4 Uses the target/openshift subdirectory as the binary input to build the application. Deploy the application: Get the URL of the route. Access the application in your web browser using the URL returned from the command. For example: Verification: Issue the following command to view a list of OpenShift pods available, and to check the pods build statuses: Access a running pod log of your application. Where APP_POD_NAME is the name of the running pod logging application. Expected outcome: The pod log is in JSON format and includes the com.example.logging.HelloWorldEndpoint debug trace. Additional resources For information about logging functionality for JBoss EAP, see Logging with JBoss EAP in the Configuration Guide . For information about using a bootable JAR on OpenShift, see Using a bootable JAR on a JBoss EAP OpenShift platform . For information about specifying the JBoss EAP JAR Maven for your project, see Specifying Galleon layers for your bootable JAR server . For information about creating CLI scripts, see CLI scripts . 8.13. Enabling web session data storage for multiple bootable JAR instances You can build and package a web-clustering application as a bootable JAR. Prerequisites You have checked the latest Maven plug-in version, such as MAVEN_PLUGIN_VERSION .X.GA.Final-redhat-00001 , where MAVEN_PLUGIN_VERSION is the major version and X is the microversion. See Index of /ga/org/wildfly/plugins/wildfly-jar-maven-plugin . You have checked the latest Galleon feature pack version, such as 3.0.X.GA-redhat- BUILD_NUMBER , where X is the microversion of JBoss EAP XP and BUILD_NUMBER is the build number of the Galleon feature pack. Both X and BUILD_NUMBER can evolve during the JBoss EAP XP 3.0.0 product life cycle. See Index of /ga/org/jboss/eap/wildfly-galleon-pack . You have created a Maven project, set up a parent dependency, and added dependencies for creating a web-clustering application. See Creating a bootable JAR Maven project . Important When setting up the Maven project, you must specify values in the Maven archetype configuration. For example: USD mvn archetype:generate \ -DgroupId=com.example.webclustering \ -DartifactId=web-clustering \ -DarchetypeGroupId=org.apache.maven.archetypes \ -DarchetypeArtifactId=maven-archetype-webapp \ -DinteractiveMode=false cd web-clustering Note The examples shown in the procedure specify the following properties: USD{bootable.jar.maven.plugin.version} for the Maven plug-in version. USD{jboss.xp.galleon.feature.pack.version} for the Galleon feature pack version. You must set these properties in your project. For example: <properties> <bootable.jar.maven.plugin.version>4.0.3.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> </properties> Procedure Add the following content to the <build> element of the pom.xml file. You must specify the latest version of any Maven plug-in and the latest version of the org.jboss.eap:wildfly-galleon-pack Galleon feature pack. 
For example: <plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</feature-pack-location> <layers> <layer>datasources-web-server</layer> <layer>web-clustering</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> Note This example makes use of the web-clustering Galleon layer to enable web session sharing. Update the web.xml file in the src/main/webapp/WEB-INF directory with the following configuration: <?xml version="1.0" encoding="UTF-8"?> <web-app version="4.0" xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_4_0.xsd"> <distributable/> </web-app> The <distributable/> tag indicates that this servlet can be distributed across multiple servers. Create the directory to store Java files: Where APPLICATION_ROOT is the directory containing the pom.xml configuration file for the application. Create a Java file MyServlet.java with the following content and save the file in the APPLICATION_ROOT /src/main/java/com/example/webclustering/ directory. package com.example.webclustering; import java.io.IOException; import java.io.PrintWriter; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; @WebServlet(urlPatterns = {"/clustering"}) public class MyServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException { response.setContentType("text/html;charset=UTF-8"); long t; User user = (User) request.getSession().getAttribute("user"); if (user == null) { t = System.currentTimeMillis(); user = new User(t); request.getSession().setAttribute("user", user); } try (PrintWriter out = response.getWriter()) { out.println("<!DOCTYPE html>"); out.println("<html>"); out.println("<head>"); out.println("<title>Web clustering demo</title>"); out.println("</head>"); out.println("<body>"); out.println("<h1>Session id " + request.getSession().getId() + "</h1>"); out.println("<h1>User Created " + user.getCreated() + "</h1>"); out.println("<h1>Host Name " + System.getenv("HOSTNAME") + "</h1>"); out.println("</body>"); out.println("</html>"); } } } The content in MyServlet.java defines the endpoint to which a client sends an HTTP request. Create a Java file User.java with the following content and save the file in the APPLICATION_ROOT /src/main/java/com/example/webclustering/ directory. 
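// The session attribute stored by MyServlet must be serializable so that the web-clustering
// layer can replicate it across bootable JAR instances, which is why User implements Serializable.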
package com.example.webclustering; import java.io.Serializable; public class User implements Serializable { private final long created; User(long created) { this.created = created; } public long getCreated() { return created; } } Package the application: Optional : To run the application on a JBoss EAP bare-metal platform, follow the steps outlined in Using a bootable JAR on a JBoss EAP bare-metal platform , but with the following difference: On a JBoss EAP bare-metal platform, you can use the java -jar command to run multiple bootable JAR instances, as demonstrated in the following examples: Verification : You can access the application on the node 1 instance: http://127.0.0.1:8080/clustering . Note the user session ID and the user-creation time. After you kill this instance, you can access the node 2 instance: http://127.0.0.1:8090/clustering . The user must match the session ID and the user-creation time of the node 1 instance. Optional : To run the application on a JBoss EAP OpenShift platform, follow the steps outlined in Using a bootable JAR on a JBoss EAP OpenShift platform , but complete the following steps: Add the <cloud/> element to the plug-in configuration. For example: <plugins> <plugin> ... <!-- You must evolve the existing configuration with the <cloud/> element --> <configuration > ... <cloud/> </configuration> </plugin> </plugins> Rebuild the application: Log in to your OpenShift instance using the oc login command. Create a new project in OpenShift. For example: To run a web-clustering application on a JBoss EAP OpenShift platform, authorization access must be granted for the service account that the pod is running in. The service account can then access the Kubernetes REST API. The following example shows authorization access being granted to a service account: Enter the following oc commands to create an application image: 1 Creates the target/openshift sub-directory. The packaged application is copied into the openshift sub-directory. 2 Imports the latest OpenJDK 11 imagestream tag and image information into the OpenShift project. 3 Creates a build configuration based on the web-clustering directory and the OpenJDK 11 imagestream. 4 Uses the target/openshift sub-directory as the binary input to build the application. Deploy the application: Important You must use the KUBERNETES_NAMESPACE environment variable to view other pods in the current OpenShift namespace; otherwise, the server attempts to retrieve the pods from the default namespace. Get the URL of the route. Access the application in your web browser using the URL returned from the command. For example: Note the user session ID and user creation time. Scale the application to two pods: Issue the following command to view a list of OpenShift pods available, and to check the pods build statuses: Kill the oldest pod using the oc delete pod web-clustering- POD_NAME command, where POD_NAME is the name of your oldest pod. Access the application again: Expected outcome: The session ID and the creation time generated by the new pod match those of the of the terminated pod. This indicates that web session data storage is enabled. Additional resources For information about distributable web session management profiles, see The distributable-web subsystem for Distributable Web Session Configurations in the Development Guide . For information about configuring the JGroups protocol stack, see Configuring a JGroups Discovery Mechanism in the Getting Started with JBoss EAP for OpenShift Container Platform guide. 8.14. 
Enabling HTTP authentication for bootable JAR with a CLI script You can enable HTTP authentication for the bootable JAR with a CLI script. This script adds a security realm and a security domain to your server. Prerequisites You have checked the latest Maven plug-in version, such as MAVEN_PLUGIN_VERSION .X.GA.Final-redhat-00001 , where MAVEN_PLUGIN_VERSION is the major version and X is the microversion. See Index of /ga/org/wildfly/plugins/wildfly-jar-maven-plugin . You have checked the latest Galleon feature pack version, such as 3.0.X.GA-redhat- BUILD_NUMBER , where X is the microversion of JBoss EAP XP and BUILD_NUMBER is the build number of the Galleon feature pack. Both X and BUILD_NUMBER can evolve during the JBoss EAP XP 3.0.0 product life cycle. See Index of /ga/org/jboss/eap/wildfly-galleon-pack . You have created a Maven project, set up a parent dependency, and added dependencies for creating an application that requires HTTP authentication. See Creating a bootable JAR Maven project . Important When setting up the Maven project, you must specify HTTP authentication values in the Maven archetype configuration. For example: USD mvn archetype:generate \ -DgroupId=com.example.auth \ -DartifactId=authentication \ -DarchetypeGroupId=org.apache.maven.archetypes \ -DarchetypeArtifactId=maven-archetype-webapp \ -DinteractiveMode=false cd authentication Note The examples shown in the procedure specify the following properties: USD{bootable.jar.maven.plugin.version} for the Maven plug-in version. USD{jboss.xp.galleon.feature.pack.version} for the Galleon feature pack version. You must set these properties in your project. For example: <properties> <bootable.jar.maven.plugin.version>4.0.3.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> </properties> Procedure Add the following content to the <build> element of the pom.xml file. You must specify the latest version of any Maven plug-in and the latest version of the org.jboss.eap:wildfly-galleon-pack Galleon feature pack. For example: <plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</feature-pack-location> <layers> <layer>datasources-web-server</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> The example shows the inclusion of the datasources-web-server Galleon layer that contains the elytron subsystem. Update the web.xml file in the src/main/webapp/WEB-INF directory. For example: <?xml version="1.0" encoding="UTF-8"?> <web-app version="4.0" xmlns="http://xmlns.jcp.org/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_4_0.xsd"> <login-config> <auth-method>BASIC</auth-method> <realm-name>Example Realm</realm-name> </login-config> </web-app> Create the directory to store Java files: Where APPLICATION_ROOT is the root directory of your Maven project. Create a Java file TestServlet.java with the following content and save the file in the APPLICATION_ROOT/src/main/java/com/example/authentication/ directory. 
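// TestServlet allows GET requests only for authenticated callers in the "Users" role; the users,
// passwords, and role mappings come from the properties files and the Elytron security domain
// configured later in this procedure.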
package com.example.authentication; import javax.servlet.annotation.HttpMethodConstraint; import javax.servlet.annotation.ServletSecurity; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import java.io.IOException; import java.io.PrintWriter; @WebServlet(urlPatterns = "/hello") @ServletSecurity(httpMethodConstraints = { @HttpMethodConstraint(value = "GET", rolesAllowed = { "Users" }) }) public class TestServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException { PrintWriter writer = resp.getWriter(); writer.println("Hello " + req.getUserPrincipal().getName()); writer.close(); } } Create a CLI script, such as authentication.cli , and save it in an accessible directory in the bootable JAR, such as the APPLICATION_ROOT /scripts directory. The script must contain the following commands: Add the following configuration extract to the plug-in <configuration> element: <cli-sessions> <cli-session> <script-files> <script>scripts/authentication.cli</script> </script-files> </cli-session> </cli-sessions> This example shows the authentication.cli CLI script, which configures the default undertow security domain to the security domain defined for your server. In the root directory of your Maven project create a directory to store the properties files that the JBoss EAP JAR Maven plug-in adds to the bootable JAR: USD mkdir -p APPLICATION_ROOT/extra-content/standalone/configuration/ Where APPLICATION_ROOT is the directory containing the pom.xml configuration file for the application. This directory stores files such as bootable-users.properties and bootable-groups.properties files. The bootable-users.properties file contains the following content: The bootable-groups.properties file contains the following content: Add the following extra-content-content-dirs element to the existing <configuration> element: <extra-server-content-dirs> <extra-content>extra-content</extra-content> </extra-server-content-dirs> The extra-content directory contains the properties files. Package the application as a bootable JAR. Start the application: Call the servlet, but do not specify credentials: Expected output: Call the server and specify your credentials. For example: A HTTP 200 status is returned that indicates HTTP authentication is enabled for your bootable JAR. For example: Additional resources For information about enabling HTTP authentication for the undertow security domain, see Enable HTTP Authentication for Applications Using the CLI Security Command in the How to Configure Server Security . 8.15. Securing your JBoss EAP bootable JAR application with Red Hat Single Sign-On You can use the Galleon keycloak-client-oidc layer to install a version of a server that is provisioned with Red Hat Single Sign-On 7.4 OpenID Connect client adapters. The keycloak-client-oidc layer provides Red Hat Single Sign-On OpenID Connect client adapters to your Maven project. This layer is included with the keycloak-adapter-galleon-pack Red Hat Single Sign-On feature pack. You can add the keycloak-adapter-galleon-pack feature pack to your JBoss EAP Maven plug-in configuration and then add the keycloak-client-oidc . You can view Red Hat Single Sign-On client adapters that are compatible with JBoss EAP by visiting the Supported Configurations: Red Hat Single Sign-On 7.4 web page. 
The example in this procedure shows you how to secure a JBoss EAP bootable JAR by using JBoss EAP features provided by the keycloak-client-oidc layer. Prerequisites You have checked the latest Maven plug-in version, such as MAVEN_PLUGIN_VERSION .X.GA.Final-redhat-00001 , where MAVEN_PLUGIN_VERSION is the major version and X is the microversion. See Index of /ga/org/wildfly/plugins/wildfly-jar-maven-plugin . You have checked the latest Galleon feature pack version, such as 3.0.X.GA-redhat- BUILD_NUMBER , where X is the microversion of JBoss EAP XP and BUILD_NUMBER is the build number of the Galleon feature pack. Both X and BUILD_NUMBER can evolve during the JBoss EAP XP 3.0.0 product life cycle. See Index of /ga/org/jboss/eap/wildfly-galleon-pack . You have checked the latest Red Hat Single Sign-On Galleon feature pack version, such as org.jboss.sso:keycloak-adapter-galleon-pack:9.0.X:redhat-BUILD_NUMBER , where X is the microversion of Red Hat Single Sign-On that depends on the Red Hat Single Sign-On server release used to secure the application, and BUILD_NUMBER is the build number of the Red Hat Single Sign-On Galleon feature pack. Both X and BUILD_NUMBER can evolve during the JBoss EAP XP 3.0.0 product life cycle. See Index of /ga/org/jboss/sso/keycloak-adapter-galleon-pack . You have created a Maven project, set up a parent dependency, and added dependencies for creating an application that you want secured with Red Hat Single Sign-On. See Creating a bootable JAR Maven project . You have a Red Hat Single Sign-On server that is running on port 8090. See Starting the Red Hat Single Sign-On server. You have logged in to the Red Hat Single Sign-On Admin Console and created the following metadata: A realm named demo . A role named Users . A user and password. You must assign a Users role to the user. A public-client web application with a Root URL. The example in the procedure, defines simple-webapp as the web application and http://localhost:8080/simple-webapp/secured as the Root URL. Important When setting up the Maven project, you must specify values for the application that you want to secure with Red Hat Single Sign-On in the Maven archetype. For example: USD mvn archetype:generate \ -DgroupId=com.example.keycloak \ -DartifactId=simple-webapp \ -DarchetypeGroupId=org.apache.maven.archetypes \ -DarchetypeArtifactId=maven-archetype-webapp \ -DinteractiveMode=false cd simple-webapp Note The examples shown in the procedure specify the following properties: USD{bootable.jar.maven.plugin.version} for the Maven plug-in version. USD{jboss.xp.galleon.feature.pack.version} for the Galleon feature pack version. USD{keycloak.feature.pack.version} for the Red Hat Single Sign-On feature pack version. You must set these properties in your project. For example: <properties> <bootable.jar.maven.plugin.version>4.0.3.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> <keycloak.feature.pack.version>9.0.10.redhat-00001</keycloak.feature.pack.version> </properties> Procedure Add the following content to the <build> element of the pom.xml file. You must specify the latest version of any Maven plug-in and the latest version of the org.jboss.eap:wildfly-galleon-pack Galleon feature pack. 
For example: <plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-packs> <feature-pack> <location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</location> </feature-pack> <feature-pack> <location>org.jboss.sso:keycloak-adapter-galleon-pack:USD{keycloak.feature.pack.version}</location> </feature-pack> </feature-packs> <layers> <layer>datasources-web-server</layer> <layer>keycloak-client-oidc</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> The Maven plug-in provisions subsystems and modules that are required for deploying the web application. The keycloak-client-oidc layer provides Red Hat Single Sign-On OpenID Connect client adapters to your project by using the keycloak subsystem and its dependencies to activate support for Red Hat Single Sign-On authentication. Red Hat Single Sign-On client adapters are libraries that secure applications and services with Red Hat Single Sign-On. In the project pom.xml file, set the <context-root> to false in your plug-in configuration. This registers the application in the simple-webapp resource path. By default, the WAR file is registered under the root-context path. <configuration> ... <context-root>false</context-root> ... </configuration> Create a CLI script, such as configure-oidc.cli and save it in an accessible directory in the bootable JAR, such as the APPLICATION_ROOT /scripts directory, where APPLICATION_ROOT is the root directory of your Maven project. The script must contain commands similar to the following example: This script example defines the secure-deployment=simple-webapp.war resource in the keycloak subsystem. The simple-webapp.war resource is the name of the WAR file that is deployed in the bootable JAR. In the project pom.xml file, add the following configuration extract to the existing plug-in <configuration> element: <cli-sessions> <cli-session> <script-files> <script>scripts/configure-oidc.cli</script> </script-files> </cli-session> </cli-sessions> Update the web.xml file in the src/main/webapp/WEB-INF directory. For example: <?xml version="1.0" encoding="UTF-8"?> <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" metadata-complete="false"> <login-config> <auth-method>BASIC</auth-method> <realm-name>Simple Realm</realm-name> </login-config> </web-app> Optional: Alternatively to steps 7 through 9, you can embed the server configuration in the web application by adding the keycloak.json descriptor to the WEB-INF directory of the web application. For example: { "realm" : "demo", "resource" : "simple-webapp", "public-client" : "true", "auth-server-url" : "http://localhost:8090/auth/", "ssl-required" : "EXTERNAL" } You must then set the <auth-method> of the web application to KEYCLOAK . The following example code illustrates how to set the <auth-method> : <login-config> <auth-method>KEYCLOAK</auth-method> <realm-name>Simple Realm</realm-name> </login-config> Create a Java file named SecuredServlet.java with the following content and save the file in the APPLICATION_ROOT /src/main/java/com/example/securedservlet/ directory. 
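// SecuredServlet restricts GET requests to the "Users" role and prints the name of the principal
// established by the Red Hat Single Sign-On OpenID Connect login.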
package com.example.securedservlet; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import javax.servlet.ServletException; import javax.servlet.annotation.HttpMethodConstraint; import javax.servlet.annotation.ServletSecurity; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; @WebServlet("/secured") @ServletSecurity(httpMethodConstraints = { @HttpMethodConstraint(value = "GET", rolesAllowed = { "Users" }) }) public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { writer.println("<html>"); writer.println("<head><title>Secured Servlet</title></head>"); writer.println("<body>"); writer.println("<h1>Secured Servlet</h1>"); writer.println("<p>"); writer.print(" Current Principal '"); Principal user = req.getUserPrincipal(); writer.print(user != null ? user.getName() : "NO AUTHENTICATED USER"); writer.print("'"); writer.println(" </p>"); writer.println(" </body>"); writer.println("</html>"); } } } Package the application as a bootable JAR. Start the application. The following example starts the simple-webapp web application from its specified bootable JAR path: Specify the following URL in your web browser to access the webpage secured with Red Hat Single Sign-On. The following example shows the URL for the secured simple-webapp web application: Log in as a user from your Red Hat Single Sign-On realm. Verification: Check that the webpage displays the following output: Additional resources For information about configuring the Red Hat Single Sign-On adapter subsystem, see JBoss EAP Adapter in the Securing Applications and Services Guide . For information about specifying the JBoss EAP JAR Maven for your project, see Specifying Galleon layers for your bootable JAR server . 8.16. Packaging a bootable JAR in dev mode The JBoss EAP JAR Maven plug-in dev goal provides dev mode, Development Mode, which you can use to enhance your application development process. In dev mode, you do not need to rebuild the bootable JAR after you make changes to your application. The workflow in this procedure demonstrates using dev mode to configure a bootable JAR. Prerequisites Maven is installed. You have created a Maven project, set up a parent dependency, and added dependencies for creating an MicroProfile application. See MicroProfile Config development . You have specified the JBoss EAP JAR Maven plug-in in your Maven project pom.xml file. Procedure Build and start the bootable JAR in Development Mode: In dev mode, the server deployment scanner is configured to monitor the target/deployments directory. Prompt the JBoss EAP Maven Plug-in to build and copy your application to the target/deployments directory with the following command: The server packaged inside the bootable JAR deploys the application stored in the target/deployments directory. Modify the code in your application code. Use the mvn package -Ddev to prompt the JBoss EAP Maven Plug-in to re-build your application and re-deploy it. Stop the server. For example: After you complete your application changes, package your application as a bootable JAR: 8.17. Applying the JBoss EAP patch to your bootable JAR On a JBoss EAP bare-metal platform, you can install the patch to your bootable JAR by using a CLI script. 
The CLI script issues the patch apply command to apply the patch during the bootable JAR build. Important After you apply a patch to your bootable JAR, you cannot roll back from the applied patch. You must rebuild a bootable JAR without the patch. Additionally, you can apply a legacy patch to your bootable JAR with the JBoss EAP JAR Maven plug-in. This plug-in provides a <legacy-patch-cli-script> configuration option to reference the CLI script that is used to patch the server. Note The prefix legacy-* in <legacy-patch-cli-script> is related to applying archive patches to a bootable JAR. This method is similar to applying patches to regular JBoss EAP distributions. You can use the legacy-patch-cleanup option in the JBoss EAP JAR Maven plug-in configuration to reduce the memory footprint of the bootable JAR by removing unused patch content. The option removes unused module dependencies. This option is set to false by default in the patch configuration file. The legacy-patch-cleanup option removes the following patch content: The <JBOSS_HOME>/.installation/patches directory. Original locations of patch modules in the base layer. Unused modules that were added by the patch and are not referenced in the existing module graph or the patched modules graph. Overlays directories that are not listed in the .overlays file. Important The legacy-patch-cleanup option is provided as a Technology Preview. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. Note The information outlined in this procedure also pertains to the hollow bootable JAR. Prerequisites You have set up an account on the Red Hat Customer Portal. You have downloaded the following files from the Product Downloads page: The JBoss EAP 7.4.4 GA patch The JBoss EAP XP 3.0.0 patch Procedure Create a CLI script that defines the legacy patches you want to apply to your bootable JAR. The script must contain one or more patch apply commands. The --override-all option is required when patching a server that was trimmed with Galleon layers, for example: Reference your CLI script in the <legacy-patch-cli-script> element of your pom.xml file. Rebuild the bootable JAR. Additional resources For information about downloading the JBoss EAP MicroProfile Maven repository, see Downloading the JBoss EAP MicroProfile Maven repository patch as an archive file. For information about creating CLI scripts, see CLI Scripts. For information about Technology Preview features, see Technology Preview Features Support Scope on the Red Hat Customer Portal.
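As noted in the procedure above, the patch script is an ordinary text file of management CLI commands. A minimal sketch follows; the file name patches.cli and the archive names patch-oneoff1.zip and patch-oneoff2.zip are placeholders, and the script must be referenced from the <legacy-patch-cli-script> element of the plug-in configuration:

# patches.cli applies each archive during the bootable JAR build; --override-all is needed
# because the server was trimmed with Galleon layers
cat > patches.cli << 'EOF'
patch apply patch-oneoff1.zip --override-all
patch apply patch-oneoff2.zip --override-all
patch info --json-output
EOF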
[ "java -jar target/myapp-bootable.jar", "<configuration> <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:3.0.0.GA-redhat-00001</feature-pack-location> </configuration>", "<configuration> <feature-packs> <feature-pack> <location>org.jboss.eap:wildfly-galleon-pack:3.0.0.GA-redhat-00001</location> </feature-pack> <feature-pack> <location>org.jboss.sso:keycloak-adapter-galleon-pack:9.0.10.redhat-00001</location> </feature-pack> </feature-packs> </configuration>", "<properties> <bootable.jar.maven.plugin.version>4.0.3.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> </properties>", "<plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</feature-pack-location> <layers> <layer>jaxrs-server</layer> <layer>jpa-distributed</layer> </layers> <excluded-layers> <layer>jpa</layer> </excluded-layers> </plugins>", "<packaging>war</packaging>", "<packaging>pom</packaging>", "<properties> <bootable.jar.maven.plugin.version>4.0.3.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> </properties>", "<plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</feature-pack-location> <layers> <layer>jaxrs-server</layer> <layer>microprofile-platform</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins>", "mvn package", "NAME=\"foo\" java -jar target/microprofile-config-bootable.jar", "http://localhost:8080/config/json", "curl http://localhost:8080/config/json", "{\"result\":\"Hello foo\"}", "<properties> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> </properties>", "<plugins> <plugin> <configuration> <!-- This example configuration does not show a complete plug-in configuration --> <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</feature-pack-location> <hollow-jar>true</hollow-jar> </configuration> </plugin> </plugins>", "mvn clean package", "java -jar target/microprofile-config-bootable.jar --deployment=target/microprofile-config.war", "--deployment=<PATH_NAME>", "curl http://localhost:8080/microprofile-config/config/json", "<properties> <bootable.jar.maven.plugin.version>4.0.3.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> </properties>", "<plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</feature-pack-location> <layers> <layer>jaxrs-server</layer> <layer>microprofile-platform</layer> </layers> <cloud/> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> 
</plugins>", "mvn package", "oc new-project bootable-jar-project", "mkdir target/openshift && cp target/microprofile-config-bootable.jar target/openshift 1 oc import-image ubi8/openjdk-11 --from=registry.redhat.io/ubi8/openjdk-11 --confirm 2 oc new-build --strategy source --binary --image-stream openjdk-11 --name microprofile-config-app 3 oc start-build microprofile-config-app --from-dir target/openshift 4", "oc get pods", "oc get is microprofile-config-app", "oc new-app microprofile-config-app oc expose svc/microprofile-config-app", "oc new-app <_IMAGESTREAM_> -e JAVA_OPTS_APPEND=\"-Xlog:gc*:file=/tmp/gc.log:time -Dwildfly.statistics-enabled=true\"", "curl http://USD(oc get route microprofile-config-app --template='{{ .spec.host }}')/config/json", "{\"result\":\"Hello jim\"}", "JAVA_OPTS_APPEND=\"-Xlog:gc*:file=/tmp/gc.log:time -Dwildfly.statistics-enabled=true\"", "mkdir scripts", "config.path=/etc/config config.ordinal=200", "config map /subsystem=microprofile-config-smallrye/config-source=os-map:add(dir={path=USD{config.path}}, ordinal=USD{config.ordinal})", "<cli-sessions> <cli-session> <properties-file> scripts/cli.properties </properties-file> <script-files> <script>scripts/mp-config.cli</script> </script-files> </cli-session> </cli-sessions>", "mvn package", "mkdir target/openshift", "cp target/microprofile-config-bootable.jar target/openshift", "oc start-build microprofile-config-app --from-dir target/openshift", "oc create configmap microprofile-config-map --from-literal=name=\"Name comes from Openshift ConfigMap\"", "oc set volume deployments/microprofile-config-app --add --name=config-volume --mount-path=/etc/config --type=configmap --configmap-name=microprofile-config-map", "curl http://USD(oc get route microprofile-config-app --template='{{ .spec.host }}')/config/json", "{\"result\":\"Hello Name comes from Openshift ConfigMap\"}", "mvn archetype:generate -DgroupId=GROUP_ID -DartifactId=ARTIFACT_ID -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false", "<repositories> <repository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss</id> <url>https://maven.repository.redhat.com/ga</url> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories>", "<dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> <artifactId>jboss-eap-jakartaee8</artifactId> <version>7.3.4.GA</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement>", "<dependency> <groupId>org.jboss.spec.javax.servlet</groupId> <artifactId>jboss-servlet-api_4.0_spec</artifactId> <scope>provided</scope> </dependency>", "mvn archetype:generate -DgroupId=com.example.logging -DartifactId=logging -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false cd logging", "<properties> <bootable.jar.maven.plugin.version>4.0.3.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> </properties>", "<dependencies> <dependency> <groupId>org.jboss.logging</groupId> <artifactId>jboss-logging</artifactId> <scope>provided</scope> </dependency> <dependency> <groupId>org.jboss.spec.javax.ws.rs</groupId> <artifactId>jboss-jaxrs-api_2.1_spec</artifactId> <scope>provided</scope> 
</dependency> </dependencies>", "<plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-packs> <feature-pack> <location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</location> </feature-pack> </feature-packs> <layers> <layer>jaxrs-server</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins>", "mkdir -p APPLICATION_ROOT/src/main/java/com/example/logging/", "package com.example.logging; import javax.ws.rs.ApplicationPath; import javax.ws.rs.core.Application; @ApplicationPath(\"/\") public class RestApplication extends Application { }", "package com.example.logging; import javax.ws.rs.Path; import javax.ws.rs.core.Response; import javax.ws.rs.GET; import javax.ws.rs.Produces; import org.jboss.logging.Logger; @Path(\"/hello\") public class HelloWorldEndpoint { private static Logger log = Logger.getLogger(HelloWorldEndpoint.class.getName()); @GET @Produces(\"text/plain\") public Response doGet() { log.debug(\"HelloWorldEndpoint.doGet called\"); return Response.ok(\"Hello from XP bootable jar!\").build(); } }", "/subsystem=logging/logger=com.example.logging:add(level=ALL) /subsystem=logging/json-formatter=json-formatter:add(exception-output-type=formatted, pretty-print=false, meta-data={version=\"1\"}, key-overrides={timestamp=\"@timestamp\"}) /subsystem=logging/console-handler=CONSOLE:write-attribute(name=level,value=ALL) /subsystem=logging/console-handler=CONSOLE:write-attribute(name=named-formatter, value=json-formatter)", "<cli-sessions> <cli-session> <script-files> <script>scripts/logging.cli</script> </script-files> </cli-session> </cli-sessions>", "mvn package", "mvn wildfly-jar:run", "<plugins> <plugin> ... 
<!-- You must evolve the existing configuration with the <cloud/> element --> <configuration > <cloud/> </configuration> </plugin> </plugins>", "mvn clean package", "oc new-project bootable-jar-project", "mkdir target/openshift && cp target/logging-bootable.jar target/openshift 1 oc import-image ubi8/openjdk-11 --from=registry.redhat.io/ubi8/openjdk-11 --confirm 2 oc new-build --strategy source --binary --image-stream openjdk-11 --name logging 3 oc start-build logging --from-dir target/openshift 4", "oc new-app logging oc expose svc/logging", "oc get route logging --template='{{ .spec.host }}'", "http://ROUTE_NAME/hello", "oc get pods", "oc logs APP_POD_NAME", "mvn archetype:generate -DgroupId=com.example.webclustering -DartifactId=web-clustering -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false cd web-clustering", "<properties> <bootable.jar.maven.plugin.version>4.0.3.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> </properties>", "<plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</feature-pack-location> <layers> <layer>datasources-web-server</layer> <layer>web-clustering</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <web-app version=\"4.0\" xmlns=\"http://xmlns.jcp.org/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_4_0.xsd\"> <distributable/> </web-app>", "mkdir -p APPLICATION_ROOT /src/main/java/com/example/webclustering/", "package com.example.webclustering; import java.io.IOException; import java.io.PrintWriter; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; @WebServlet(urlPatterns = {\"/clustering\"}) public class MyServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException { response.setContentType(\"text/html;charset=UTF-8\"); long t; User user = (User) request.getSession().getAttribute(\"user\"); if (user == null) { t = System.currentTimeMillis(); user = new User(t); request.getSession().setAttribute(\"user\", user); } try (PrintWriter out = response.getWriter()) { out.println(\"<!DOCTYPE html>\"); out.println(\"<html>\"); out.println(\"<head>\"); out.println(\"<title>Web clustering demo</title>\"); out.println(\"</head>\"); out.println(\"<body>\"); out.println(\"<h1>Session id \" + request.getSession().getId() + \"</h1>\"); out.println(\"<h1>User Created \" + user.getCreated() + \"</h1>\"); out.println(\"<h1>Host Name \" + System.getenv(\"HOSTNAME\") + \"</h1>\"); out.println(\"</body>\"); out.println(\"</html>\"); } } }", "package com.example.webclustering; import java.io.Serializable; public class User implements Serializable { private final long created; User(long created) { this.created = created; } public long getCreated() { return created; } }", "mvn package", "java 
-jar target/web-clustering-bootable.jar -Djboss.node.name=node1 java -jar target/web-clustering-bootable.jar -Djboss.node.name=node2 -Djboss.socket.binding.port-offset=10", "<plugins> <plugin> ... <!-- You must evolve the existing configuration with the <cloud/> element --> <configuration > <cloud/> </configuration> </plugin> </plugins>", "mvn clean package", "oc new-project bootable-jar-project", "oc policy add-role-to-user view system:serviceaccount:USD(oc project -q):default", "mkdir target/openshift && cp target/web-clustering-bootable.jar target/openshift 1 oc import-image ubi8/openjdk-11 --from=registry.redhat.io/ubi8/openjdk-11 --confirm 2 oc new-build --strategy source --binary --image-stream openjdk-11 --name web-clustering 3 oc start-build web-clustering --from-dir target/openshift 4", "oc new-app web-clustering -e KUBERNETES_NAMESPACE=USD(oc project -q) oc expose svc/web-clustering", "oc get route web-clustering --template='{{ .spec.host }}'", "http://ROUTE_NAME/clustering", "oc scale --replicas=2 deployments web-clustering", "oc get pods", "http://ROUTE_NAME/clustering", "mvn archetype:generate -DgroupId=com.example.auth -DartifactId=authentication -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false cd authentication", "<properties> <bootable.jar.maven.plugin.version>4.0.3.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> </properties>", "<plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-pack-location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</feature-pack-location> <layers> <layer>datasources-web-server</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <web-app version=\"4.0\" xmlns=\"http://xmlns.jcp.org/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_4_0.xsd\"> <login-config> <auth-method>BASIC</auth-method> <realm-name>Example Realm</realm-name> </login-config> </web-app>", "mkdir -p APPLICATION_ROOT/src/main/java/com/example/authentication/", "package com.example.authentication; import javax.servlet.annotation.HttpMethodConstraint; import javax.servlet.annotation.ServletSecurity; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import java.io.IOException; import java.io.PrintWriter; @WebServlet(urlPatterns = \"/hello\") @ServletSecurity(httpMethodConstraints = { @HttpMethodConstraint(value = \"GET\", rolesAllowed = { \"Users\" }) }) public class TestServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException { PrintWriter writer = resp.getWriter(); writer.println(\"Hello \" + req.getUserPrincipal().getName()); writer.close(); } }", "/subsystem=elytron/properties-realm=bootable-realm:add(users-properties={relative-to=jboss.server.config.dir, path=bootable-users.properties, plain-text=true}, groups-properties={relative-to=jboss.server.config.dir, path=bootable-groups.properties}) 
/subsystem=elytron/security-domain=BootableDomain:add(default-realm=bootable-realm, permission-mapper=default-permission-mapper, realms=[{realm=bootable-realm, role-decoder=groups-to-roles}]) /subsystem=undertow/application-security-domain=other:write-attribute(name=security-domain, value=BootableDomain)", "<cli-sessions> <cli-session> <script-files> <script>scripts/authentication.cli</script> </script-files> </cli-session> </cli-sessions>", "mkdir -p APPLICATION_ROOT/extra-content/standalone/configuration/", "testuser=bootable_password", "testuser=Users", "<extra-server-content-dirs> <extra-content>extra-content</extra-content> </extra-server-content-dirs>", "mvn package", "mvn wildfly-jar:run", "curl -v http://localhost:8080/hello", "HTTP/1.1 401 Unauthorized WWW-Authenticate: Basic realm=\"Example Realm\"", "curl -v -u testuser:bootable_password http://localhost:8080/hello", "HTTP/1.1 200 OK . Hello testuser", "mvn archetype:generate -DgroupId=com.example.keycloak -DartifactId=simple-webapp -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false cd simple-webapp", "<properties> <bootable.jar.maven.plugin.version>4.0.3.Final-redhat-00001</bootable.jar.maven.plugin.version> <jboss.xp.galleon.feature.pack.version>3.0.0.GA-redhat-00001</jboss.xp.galleon.feature.pack.version> <keycloak.feature.pack.version>9.0.10.redhat-00001</keycloak.feature.pack.version> </properties>", "<plugins> <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-jar-maven-plugin</artifactId> <version>USD{bootable.jar.maven.plugin.version}</version> <configuration> <feature-packs> <feature-pack> <location>org.jboss.eap:wildfly-galleon-pack:USD{jboss.xp.galleon.feature.pack.version}</location> </feature-pack> <feature-pack> <location>org.jboss.sso:keycloak-adapter-galleon-pack:USD{keycloak.feature.pack.version}</location> </feature-pack> </feature-packs> <layers> <layer>datasources-web-server</layer> <layer>keycloak-client-oidc</layer> </layers> </configuration> <executions> <execution> <goals> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins>", "<configuration> <context-root>false</context-root> </configuration>", "/subsystem=keycloak/secure-deployment=simple-webapp.war:add( realm=demo, resource=simple-webapp, public-client=true, auth-server-url=http://localhost:8090/auth/, ssl-required=EXTERNAL)", "<cli-sessions> <cli-session> <script-files> <script>scripts/configure-oidc.cli</script> </script-files> </cli-session> </cli-sessions>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <web-app version=\"2.5\" xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd\" metadata-complete=\"false\"> <login-config> <auth-method>BASIC</auth-method> <realm-name>Simple Realm</realm-name> </login-config> </web-app>", "{ \"realm\" : \"demo\", \"resource\" : \"simple-webapp\", \"public-client\" : \"true\", \"auth-server-url\" : \"http://localhost:8090/auth/\", \"ssl-required\" : \"EXTERNAL\" }", "<login-config> <auth-method>KEYCLOAK</auth-method> <realm-name>Simple Realm</realm-name> </login-config>", "package com.example.securedservlet; import java.io.IOException; import java.io.PrintWriter; import java.security.Principal; import javax.servlet.ServletException; import javax.servlet.annotation.HttpMethodConstraint; import javax.servlet.annotation.ServletSecurity; import 
javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; @WebServlet(\"/secured\") @ServletSecurity(httpMethodConstraints = { @HttpMethodConstraint(value = \"GET\", rolesAllowed = { \"Users\" }) }) public class SecuredServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { try (PrintWriter writer = resp.getWriter()) { writer.println(\"<html>\"); writer.println(\"<head><title>Secured Servlet</title></head>\"); writer.println(\"<body>\"); writer.println(\"<h1>Secured Servlet</h1>\"); writer.println(\"<p>\"); writer.print(\" Current Principal '\"); Principal user = req.getUserPrincipal(); writer.print(user != null ? user.getName() : \"NO AUTHENTICATED USER\"); writer.print(\"'\"); writer.println(\" </p>\"); writer.println(\" </body>\"); writer.println(\"</html>\"); } } }", "mvn package", "java -jar target/simple-webapp-bootable.jar", "http://localhost:8080/simple-webapp/secured", "Current Principal '<principal id>'", "mvn wildfly-jar:dev", "mvn package -Ddev", "mvn wildfly-jar:shutdown", "mvn package", "patch apply patch-oneoff1.zip --override-all patch apply patch-oneoff2.zip --override-all patch info --json-output" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_3.0.0/the-bootable-jar_default
Machine management
Machine management OpenShift Container Platform 4.7 Adding and maintaining cluster machines Red Hat OpenShift Documentation Team
[ "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: ami: id: ami-046fe691f52a953f9 10 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 11 instanceType: m4.large kind: AWSMachineProviderConfig placement: availabilityZone: us-east-1a region: us-east-1 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 12 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-us-east-1a 13 tags: - name: kubernetes.io/cluster/<infrastructure_id> 14 value: owned userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker 2 machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "providerSpec: value: spotMarketOptions: {}", "providerSpec: placement: tenancy: dedicated", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: 
<infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 11 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 12 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 13 managedIdentity: <infrastructure_id>-identity 14 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 15 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 16 17 userDataSecret: name: worker-user-data 18 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 19 zone: \"1\" 20", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker 2 machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "providerSpec: value: spotVMOptions: {}", "providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a 8 spec: metadata: 
labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 10 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 11 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network 12 subnetwork: <infrastructure_id>-worker-subnet 13 projectID: <project_name> 14 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com 15 16 scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker 17 userDataSecret: name: worker-user-data zone: us-central1-a", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker 2 machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "providerSpec: value: preemptible: true", "gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter", "providerSpec: value: # disks: - type: # encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: 
<role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: <rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone>", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker 2 machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 instance_type_id: <instance_type_id> 16 cpu: 17 sockets: <number_of_sockets> 18 cores: <number_of_cores> 19 threads: <number_of_threads> 20 memory_mb: <memory_size> 21 os_disk: 22 size_gb: <disk_size> 23 network_interfaces: 24 vnic_profile_id: <vnic_profile_id> 25 credentialsSecret: name: ovirt-credentials 26 kind: OvirtMachineProviderSpec type: <workload_type> 27 userDataSecret: name: worker-user-data", "oc get -o 
jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker 2 machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker 2 machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 
1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machinesets -n openshift-machine-api", "oc get machine -n openshift-machine-api", "oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"", "oc adm cordon <node_name> oc adm drain <node_name>", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc get machines", "spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>", "oc edit machineset <machineset> -n openshift-machine-api", "oc scale --replicas=0 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc get -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.template_name}{\"\\n\"}' machineset -A", "oc get machineset -o yaml", "oc delete machineset <machineset-name>", "oc get nodes", "oc get machine -n openshift-machine-api", "oc delete machine <machine> -n openshift-machine-api", "apiVersion: \"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15", "oc create -f <filename>.yaml 1", "apiVersion: \"autoscaling.openshift.io/v1beta1\" kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6", "oc create -f <filename>.yaml 1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: ami: id: ami-046fe691f52a953f9 11 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 12 instanceType: m4.large kind: AWSMachineProviderConfig placement: availabilityZone: us-east-1a region: us-east-1 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 13 subnet: filters: - name: tag:Name values: - 
<infrastructure_id>-private-us-east-1a 14 tags: - name: kubernetes.io/cluster/<infrastructure_id> 15 value: owned userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-worker-<zone>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: \"1\" 21", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a 8 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 
canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 11 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 12 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network 13 subnetwork: <infrastructure_id>-worker-subnet 14 projectID: <project_name> 15 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com 16 17 scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker 18 userDataSecret: name: worker-user-data zone: us-central1-a", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone>", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: 
<role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 instance_type_id: <instance_type_id> 16 cpu: 17 sockets: <number_of_sockets> 18 cores: <number_of_cores> 19 threads: <number_of_threads> 20 memory_mb: <memory_size> 21 os_disk: 22 size_gb: <disk_size> 23 network_interfaces: 24 vnic_profile_id: <vnic_profile_id> 25 credentialsSecret: name: ovirt-credentials 26 kind: OvirtMachineProviderSpec type: <workload_type> 27 userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker 2 machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label 
node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1", "oc label node <node_name> <label>", "oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=", "cat infra.mcp.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2", "oc create -f infra.mcp.yaml", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d", "cat infra.mc.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra", "oc create -f infra.mc.yaml", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m", "oc describe nodes <node_name>", "describe node 
ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes <node_name> <key>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra:NoSchedule", "tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3", "oc get ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\"", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.20.0", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: node-role.kubernetes.io/infra: \"\"", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: \"\" prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" grafana: nodeSelector: node-role.kubernetes.io/infra: \"\" k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: 
\"\" thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\"", "oc create -f cluster-monitoring-configmap.yaml", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "oc edit ClusterLogging instance", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana", "oc get pod kibana-5b8bdf44f9-ccpq9 -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.20.0", "oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml", "kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''", "apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s", "oc get pod kibana-7d85dcffc8-bfpfp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>", "oc get pods", "NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s", "aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, 
&CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-7.9*\" \\ 3 --region us-east-1 \\ 4 --output table 5", "---------------------------------------------------------------------------------------------------------- | DescribeImages | +---------------------------+----------------------------------------------------+-----------------------+ | 2020-05-13T09:50:36.000Z | RHEL-7.9_HVM_BETA-20200422-x86_64-0-Hourly2-GP2 | ami-038714142142a6a64 | | 2020-09-18T07:51:03.000Z | RHEL-7.9_HVM_GA-20200917-x86_64-0-Hourly2-GP2 | ami-005b7876121b7244d | | 2021-02-09T09:46:19.000Z | RHEL-7.9_HVM-20210208-x86_64-0-Hourly2-GP2 | ami-030e754805234517e | +---------------------------+----------------------------------------------------+-----------------------+", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-ansible-2.9-rpms\" --enable=\"rhel-7-server-ose-4.7-rpms\"", "yum install openshift-ansible openshift-clients jq", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --disable=\"*\"", "yum repolist", "yum-config-manager --disable <repo_id>", "yum-config-manager --disable \\*", "subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-fast-datapath-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-optional-rpms\" --enable=\"rhel-7-server-ose-4.7-rpms\"", "systemctl disable --now firewalld.service", "[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel7-0.example.com mycluster-rhel7-1.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0", "oc get nodes -o wide", "oc adm cordon <node_name> 1", "oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1", "oc delete nodes <node_name> 1", "oc get 
nodes -o wide", "aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-7.9*\" \\ 3 --region us-east-1 \\ 4 --output table 5", "---------------------------------------------------------------------------------------------------------- | DescribeImages | +---------------------------+----------------------------------------------------+-----------------------+ | 2020-05-13T09:50:36.000Z | RHEL-7.9_HVM_BETA-20200422-x86_64-0-Hourly2-GP2 | ami-038714142142a6a64 | | 2020-09-18T07:51:03.000Z | RHEL-7.9_HVM_GA-20200917-x86_64-0-Hourly2-GP2 | ami-005b7876121b7244d | | 2021-02-09T09:46:19.000Z | RHEL-7.9_HVM-20210208-x86_64-0-Hourly2-GP2 | ami-030e754805234517e | +---------------------------+----------------------------------------------------+-----------------------+", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --disable=\"*\"", "yum repolist", "yum-config-manager --disable <repo_id>", "yum-config-manager --disable \\*", "subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-fast-datapath-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-optional-rpms\" --enable=\"rhel-7-server-ose-4.7-rpms\"", "systemctl disable --now firewalld.service", "[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel7-0.example.com mycluster-rhel7-1.example.com [new_workers] mycluster-rhel7-2.example.com mycluster-rhel7-3.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0", "aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "aws cloudformation describe-stacks --stack-name <name>", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper 
Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0", "coreos.inst.install_dev=sda 1 coreos.inst.ignition_url=http://example.com/worker.ign 2", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 1 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 2", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate 
approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8", "oc apply -f healthcheck.yml", "oc apply -f healthcheck.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api annotations: machine.openshift.io/remediation-strategy: external-baremetal 2 spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> 4 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 5 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 6 status: \"False\" - type: \"Ready\" timeout: \"300s\" 7 status: \"Unknown\" maxUnhealthy: \"40%\" 8 nodeStartupTimeout: \"10m\" 9" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/machine_management/index
Chapter 2. Major differences between Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11
Chapter 2. Major differences between Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11 If you are migrating your Java applications from Red Hat build of OpenJDK 8, first ensure that you familiarize yourself with the changes that were introduced in Red Hat build of OpenJDK 11. These changes might require that you reconfigure your existing Red Hat build of OpenJDK installation before you migrate to Red Hat build of OpenJDK 21. Note This chapter is relevant only if you currently use Red Hat build of OpenJDK 8. You can ignore this chapter if you already use Red Hat build of OpenJDK 11 or later. One of the major differences between Red Hat build of OpenJDK 8 and later versions is the inclusion of a module system in Red Hat build of OpenJDK 11 or later. If you are migrating from Red Hat build of OpenJDK 8, consider moving your application's libraries and modules from the Red Hat build of OpenJDK 8 class path to the module path in Red Hat build of OpenJDK 11 or later. This change can improve the class-loading capabilities of your application. Red Hat build of OpenJDK 11 and later versions include new features and enhancements that can improve the performance of your application, such as enhanced memory usage, improved startup speed, and increased container integration. Note Some features might differ between Red Hat build of OpenJDK and other upstream community or third-party versions of OpenJDK. For example: The Shenandoah garbage collector is available in all versions of Red Hat build of OpenJDK, but this feature might not be available by default in other builds of OpenJDK. JDK Flight Recorder (JFR) support in OpenJDK 8 has been available from version 8u262 onward and enabled by default from version 8u272 onward, but JFR might be disabled in certain builds. Because JFR functionality was backported from the open source version of JFR in OpenJDK 11, the JFR implementation in Red Hat build of OpenJDK 8 is largely similar to JFR in Red Hat build of OpenJDK 11 or later. This JFR implementation is different from JFR in Oracle JDK 8, so users who want to migrate from Oracle JDK to Red Hat build of OpenJDK 8 or later need to be aware of the command-line options for using JFR. 32-bit builds of OpenJDK are generally unsupported in OpenJDK 8 or later, and they might not be available in later versions. 32-bit builds are unsupported in all versions of Red Hat build of OpenJDK. 2.1. Cryptography and security Certain minor cryptography and security differences exist between Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11. However, both versions of Red Hat build of OpenJDK have many similar cryptography and security behaviors. Red Hat builds of OpenJDK use system-wide certificates, and each build obtains its list of disabled cryptographic algorithms from a system's global configuration settings. These settings are common to all versions of Red Hat build of OpenJDK, so you can easily change from Red Hat build of OpenJDK 8 to Red Hat build of OpenJDK 11 or later. In FIPS mode, Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11 releases are self-configured, so that either release uses the same security providers at startup. The TLS stacks in Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11 are identical, because the SunJSSE engine from Red Hat build of OpenJDK 11 was backported to Red Hat build of OpenJDK 8. Both Red Hat build of OpenJDK versions support the TLS 1.3 protocol.
The following minor cryptography and security differences exist between Red Hat build of OpenJDK 8 and Red Hat build of OpenJDK 11: Red Hat build of OpenJDK 8 Red Hat build of OpenJDK 11 TLS clients do not use TLSv1.3 for communication with the target server by default. You can change this behavior by setting the jdk.tls.client.protocols system property to ‐Djdk.tls.client.protocols=TLSv1.3 . TLS clients use TLSv.1.3 by default. This release does not support the use of the X25519 and X448 elliptic curves in the Diffie-Hellman key exchange. This release supports the use of the X25519 and X448 elliptic curves in the Diffie-Hellman key exchange. This release still supports the legacy KRB5-based cipher suites, which are disabled for security reasons. You can enable these cipher suites by changing the jdk.tls.client.cipherSuites and jdk.tls.server.cipherSuites system properties. This release does not support the legacy KRB5-based cipher suites. This release does not support the Datagram Transport Layer Security (DTLS) protocol. This release supports the DTLS protocol. The max_fragment_length extension, which is used by DTLS, is not available for TLS clients. The max_fragment_length extension is available for both clients and servers. 2.2. Garbage collector For garbage collection, Red Hat build of OpenJDK 8 uses the Parallel collector by default, whereas Red Hat build of OpenJDK 11 uses the Garbage-First (G1) collector by default. Before you choose a garbage collector, consider the following details: If you want to improve throughput, use the Parallel collector. The Parallel collector maximizes throughput but ignores latency, which means that garbage collection pauses could become an issue if you want your application to have reasonable response times. However, if your application is performing batch processing and you are not concerned about pause times, the Parallel collector is the best choice. You can switch to the Parallel collector by setting the ‐XX:+UseParallelGC JVM option. If you want a balance between throughput and latency, use the G1 collector. The G1 collector can achieve great throughput while providing reasonable latencies with pause times of a few hundred milliseconds. If you notice throughput issues when migrating applications from Red Hat build of OpenJDK 8 to Red Hat build of OpenJDK 11, you can switch to the Parallel collector as described above. If you want low-latency garbage collection, use the Shenandoah collector. You can select the garbage collector type that you want to use by specifying the ‐XX:+<gc_type> JVM option at startup. For example, the ‐XX:+UseParallelGC option switches to the Parallel collector. 2.3. Garbage collector logging options Red Hat build of OpenJDK 11 includes a new and more powerful logging framework that works more effectively than the old logging framework. Red Hat build of OpenJDK 11 also includes unified JVM logging options and unified GC logging options. The logging system for Red Hat build of OpenJDK 11 activates the - XX:+PrintGCTimeStamps and -XX:+PrintGCDateStamps options by default. Because the logging format in Red Hat build of OpenJDK 11 is different from Red Hat build of OpenJDK 8, you might need to update any of your code that parses garbage collector logs. Modified options in Red Hat build of OpenJDK 11 The old logging framework options are deprecated in Red Hat build of OpenJDK 11. These old options are still available only as aliases for the new logging framework options. 
If you want to work more effectively with Red Hat build of OpenJDK 11 or later, use the new logging framework options. The following table outlines the changes in garbage collector logging options between Red Hat build of OpenJDK versions 8 and 11: Options in Red Hat build of OpenJDK 8 Options in Red Hat build of OpenJDK 11 -verbose:gc -Xlog:gc -XX:+PrintGC -Xlog:gc -XX:+PrintGCDetails -Xlog:gc* or -Xlog:gc+USDtags -Xloggc:USDFILE -Xlog:gc:file=USDFILE When using the -XX:+PrintGCDetails option, pass the -Xlog:gc* flag, where the asterisk ( * ) activates more detailed logging. Alternatively, you can pass the -Xlog:gc+USDtags flag. When using the -Xloggc option, append the :file=USDFILE suffix to redirect log output to the specified file. For example -Xlog:gc:file=USDFILE . Removed options in Red Hat build of OpenJDK 11 Red Hat build of OpenJDK 11 does not include the following options, which were deprecated in Red Hat build of OpenJDK 8: -Xincgc -XX:+CMSIncrementalMode -XX:+UseCMSCompactAtFullCollection -XX:+CMSFullGCsBeforeCompaction -XX:+UseCMSCollectionPassing Red Hat build of OpenJDK 11 also removes the following options because the printing of timestamps and datestamps is automatically enabled: -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps Note In Red Hat build of OpenJDK 11, unless you specify the -XX:+IgnoreUnrecognizedVMOptions option, the use of any of the preceding removed options results in a startup failure. Additional resources For more information about the common framework for unified JVM logging and the format of Xlog options, see JEP 158: Unified JVM Logging . For more information about deprecated and removed options, see JEP 214: Remove GC Combinations Deprecated in JDK 8 . For more information about unified GC logging, see JEP 271: Unified GC Logging . 2.4. OpenJDK graphics Before version 8u252, Red Hat build of OpenJDK 8 used Pisces as the default rendering engine. From version 8u252 onward, Red Hat build of OpenJDK 8 uses Marlin as the new default rendering engine. Red Hat build of OpenJDK 11 and later releases also use Marlin by default. Marlin improves the handling of intensive application graphics. Because the rendering engines produce the same results, users should not observe any changes apart from improved performance. 2.5. Webstart and applets You can use Java WebStart by using the IcedTea-Web plug-in with Red Hat build of OpenJDK 8 or Red Hat build of OpenJDK 11 on RHEL 7, RHEL 8, and Microsoft Windows operating systems. The IcedTea-Web plug-in requires that Red Hat build of OpenJDK 8 is installed as a dependency on the system. Applets are not supported on any version of Red Hat build of OpenJDK. Even though some applets can be run on RHEL 7 by using the IcedTea-web plug-in with OpenJDK 8 on a Netscape Plugin Application Programming Interface (NPAPI) browser, Red Hat build of OpenJDK does not support this behavior. Note The upstream community version of OpenJDK does not support applets or Java Webstart. Support for these technologies is deprecated and they are not recommended for use. 2.6. JPMS The Java Platform Module System (JPMS), which was introduced in OpenJDK 9, limits or prevents access to non-public APIs. JPMS also impacts how you can start and compile your Java application (for example, whether you use a class path or a module path). Internal modules By default, Red Hat build of OpenJDK 11 restricts but still permits access to JDK internal modules. 
This means that most applications can continue to work without requiring changes, but these applications will emit a warning. As a workaround for this restriction, you can enable your application to access an internal package by passing a ‐‐add-opens <module-name>/<package-in-module>=ALL-UNNAMED option to the java command. For example: Additionally, you can check illegal access cases by passing the ‐‐illegal-access=warn option to the java command. This option changes the default behavior of Red Hat build of OpenJDK. ClassLoader The JPMS refactoring changes the ClassLoader hierarchy in Red Hat build of OpenJDK 11. In Red Hat build of OpenJDK 11, the system class loader is no longer an instance of URLClassLoader . Existing application code that invokes ClassLoader::getSystemClassLoader and casts the result to a URLClassLoader in Red Hat build of OpenJDK 11 will result in a runtime exception. In Red Hat build of OpenJDK 8, when you create a class loader, you can pass null as the parent of this class loader instance. However, in Red Hat build of OpenJDK 11, applications that pass null as the parent of a class loader might prevent the class loader from locating platform classes. Red Hat build of OpenJDK 11 includes a new class loader that can control the loading of certain classes. This improves the way that a class loader can locate all of its required classes. In Red Hat build of OpenJDK 11, when you create a class loader instance, you can set the platform class loader as its parent by using the ClassLoader.getPlatformClassLoader() API. Additional resources For more information about JPMS, see JEP 261: Module System . 2.7. Extension and endorsed override mechanisms In Red Hat build of OpenJDK 11, both the extension mechanism, which supported optional packages, and the endorsed standards override mechanism are no longer available. These changes mean that any libraries that are added to the <JAVA_HOME>/lib/ext or <JAVA_HOME>/lib/endorsed directory are no longer used, and Red Hat build of OpenJDK 11 generates an error if these directories exist. Additional resources For more information about the removed mechanisms, see JEP 220: Modular Run-Time Images . 2.8. JFR functionality JDK Flight Recorder (JFR) support was backported to Red Hat build of OpenJDK 8 starting from version 8u262. JFR support was subsequently enabled by default from Red Hat build of OpenJDK 8u272 onward. Note The term backporting describes when Red Hat takes an update from a more recent version of upstream software and applies that update to an older version of the software that Red Hat distributes. Backported JFR features The JFR backport to Red Hat build of OpenJDK 8 included all of the following features: A large number of events that are also available in Red Hat build of OpenJDK 11 Command-line tools such as jfr and the Java diagnostic command ( jcmd ) that behave consistently across Red Hat build of OpenJDK versions 8 and 11 The Java Management Extensions (JMX) API that you can use to enable JFR by using the JMX beans interfaces either programmatically or through jcmd The jdk.jfr namespace Note The JFR APIs in the jdk.jfr namespace are not considered part of the Java specification in Red Hat build of OpenJDK 8, but these APIs are part of the Java specification in Red Hat build of OpenJDK 11. Because the JFR API is available in all supported Red Hat build of OpenJDK versions, applications that use JFR do not require any special configuration to use the JFR APIs in Red Hat build of OpenJDK 8 and later versions. 
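For example, the following commands show one way to capture and inspect a flight recording with the jcmd and jfr tools mentioned above. This is a minimal sketch: the application JAR, process ID, and file paths are placeholders rather than values from the original documentation.

# Start a one-minute recording when the JVM starts (myapp.jar is a placeholder).
java -XX:StartFlightRecording=duration=60s,filename=/tmp/startup.jfr -jar myapp.jar

# Start and then dump a named recording on an already running JVM.
jcmd <pid> JFR.start name=diagnostics settings=profile
jcmd <pid> JFR.dump name=diagnostics filename=/tmp/diagnostics.jfr

# Summarize the captured events with the jfr command-line tool.
jfr summary /tmp/diagnostics.jfr

The same commands work on Red Hat build of OpenJDK 8 (8u262 or later) and on Red Hat build of OpenJDK 11 or later, which is the practical benefit of the backport described in this section.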
JDK Mission Control, which is distributed separately, was also updated to be compatible with Red Hat build of OpenJDK 8. Applications that need to be compatible with other OpenJDK versions If your applications need to be compatible with any of the following OpenJDK versions, you might need to adapt these applications: OpenJDK versions earlier than 8u262 OpenJDK versions from other vendors that do not support JFR Oracle JDK To aid this effort, Red Hat has developed a special compatibility layer that provides an empty implementation of JFR, which behaves as if JFR was disabled at runtime. For more information about the JFR compatibility API, see openjdk8-jfr-compat . You can install the resulting .jar file in the jre/lib/ext directory of an OpenJDK 8 distribution. Some applications might need to be updated if these applications were filtering out OpenJDK 8 by checking only for the version number instead of querying the MBeans interface. 2.9. JRE and headless packages All Red Hat build of OpenJDK versions for RHEL platforms are separated into the following types of packages. The following list of package types is sorted in order of minimality, starting with the most minimal. Java Runtime Environment (JRE) headless Provides the library only without support for graphical user interface but supports offline rendering of images JRE Adds the necessary libraries to run for full graphical clients JDK Includes tooling and compilers Red Hat build of OpenJDK versions for Windows platforms do not support headless packages. However, the Red Hat build of OpenJDK packages for Windows platforms are also divided into JRE and JDK components, similar to the packages for RHEL platforms. Note The upstream community version of OpenJDK 11 or later does not separate packages in this way and instead provides one monolithic JDK installation. OpenJDK 9 introduced a modularised version of the JDK class libraries divided by their namespaces. From Red Hat build of OpenJDK 11 onward, these libraries are packaged into jmods modules. For more information, see Jmods . 2.10. Jmods OpenJDK 9 introduced jmods , which is a modularized version of the JDK class libraries, where each module groups classes from a set of related packages. You can use the jlink tool to create derivative runtimes that include only some subset of the modules that are needed to run selected applications. From Red Hat build of OpenJDK 11 onward, Red Hat build of OpenJDK versions for RHEL platforms place the jmods files into a separate RPM package that is not installed by default. If you want to create standalone OpenJDK images for your applications by using jlink , you must manually install the jmods package (for example, java-11-openjdk-jmods ). Note On RHEL platforms, OpenJDK is dynamically linked against system libraries, which means the resulting jlink images are not portable across different versions of RHEL or other systems. If you want to ensure portability, you must use the portable builds of Red Hat build of OpenJDK that are released through the Red Hat Customer Portal. For more information, see Installing Red Hat build of OpenJDK on RHEL by using an archive . 2.11. Deprecated and removed functionality in Red Hat build of OpenJDK 11 Red Hat build of OpenJDK 11 has either deprecated or removed some features that Red Hat build of OpenJDK 8 supports. 
CORBA Red Hat build of OpenJDK 11 does not support the following Common Object Request Broker Architecture (CORBA) tools: Idlj orbd servertool tnamesrv Logging framework Red Hat build of OpenJDK 11 does not support the following APIs: java.util.logging.LogManager.addPropertyChangeListener java.util.logging.LogManager.removePropertyChangeListener java.util.jar.Pack200.Packer.addPropertyChangeListener java.util.jar.Pack200.Packer.removePropertyChangeListener java.util.jar.Pack200.Unpacker.addPropertyChangeListener java.util.jar.Pack200.Unpacker.removePropertyChangeListener Java EE modules Red Hat build of OpenJDK 11 does not support the following APIs: java.activation java.corba java.se.ee (aggregator) java.transaction java.xml.bind java.xml.ws java.xml.ws.annotation java.awt.peer Red Hat build of OpenJDK 11 sets the java.awt.peer package as internal, which means that applications cannot automatically access this package by default. Because of this change, Red Hat build of OpenJDK 11 removed a number of classes and methods that refer to the peer API, such as the Component.getPeer method. The following list outlines the most common use cases for the peer API: Writing of new graphics ports Checking if a component can be displayed Checking if a component is either lightweight or backed by an operating system native UI component resource such as an Xlib XWindow From Java 1.1 onward, the Component.isDisplayable() method provides the functionality to check whether a component can be displayed. From Java 1.2 onward, the Component.isLightweight() method provides the functionality to check whether a component is lightweight. javax.security and java.lang APIs Red Hat build of OpenJDK 11 does not support the following APIs: javax.security.auth.Policy java.lang.Runtime.runFinalizersOnExit(boolean) java.lang.SecurityManager.checkAwtEventQueueAccess() java.lang.SecurityManager.checkMemberAccess(java.lang.Class,int) java.lang.SecurityManager.checkSystemClipboardAccess() java.lang.SecurityManager.checkTopLevelWindow(java.lang.Object) java.lang.System.runFinalizersOnExit(boolean) java.lang.Thread.destroy() java.lang.Thread.stop(java.lang.Throwable) Sun.misc The sun.misc package has always been considered internal and unsupported. In Red Hat build of OpenJDK 11, the following packages are deprecated or removed: sun.misc.BASE64Encoder sun.misc.BASE64Decoder sun.misc.Unsafe sun.reflect.Reflection Consider the following information: Red Hat build of OpenJDK 8 added the java.util.Base64 package as a replacement for the sun.misc.BASE64Encoder and sun.misc.BASE64Decoder APIs. You can use the java.util.Base64 package rather than these APIs, which have been removed from Red Hat build of OpenJDK 11. Red Hat build of OpenJDK 11 deprecates the sun.misc.Unsafe package, which is scheduled for removal. For more information about a new set of APIs that you can use as a replacement for sun.misc.Unsafe , see JEP 193 . Red Hat build of OpenJDK 11 removes the sun.reflect.Reflection package. For more information about new functionality for stack walking that replaces the sun.reflect.Reflection.getCallerClass method, see JEP 259 . Additional resources For more information about the removed Java EE modules and CORBA modules and potential replacements for these modules, see JEP 320: Remove the Java EE and CORBA Modules . 2.12. Additional resources (or steps) For more information about Red Hat build of OpenJDK 8 features, see JDK 8 Features . 
For more information about OpenJDK 9 features inherited by Red Hat build of OpenJDK 11, see JDK 9 . For more information about OpenJDK 10 features inherited by Red Hat build of OpenJDK 11, see JDK 10 . For more information about Red Hat build of OpenJDK 11 features, see JDK 11 . For more information about a list of all available JEPs, see JEP 0: JEP Index . For more information about the changes introduced in version 17, see Major differences between Red Hat build of OpenJDK 11 and Red Hat build of OpenJDK 17 . For more information about the changes introduced in version 21, see Major differences between Red Hat build of OpenJDK 17 and Red Hat build of OpenJDK 21 .
[ "--add-opens java.base/jdk.internal.math=ALL-UNNAMED" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/migrating_to_red_hat_build_of_openjdk_21_from_earlier_versions/differences_8_11
Machine management
Machine management OpenShift Container Platform 4.14 Adding and maintaining cluster machines Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/machine_management/index
2.2. Listing Fence Devices and Fence Device Options
2.2. Listing Fence Devices and Fence Device Options You can use the ccs command to print a list of available fence devices and to print a list of options for each available fence type. You can also use the ccs command to print a list of fence devices currently configured for your cluster. To print a list of fence devices currently available for your cluster, execute the following command: For example, the following command lists the fence devices available on the cluster node node-01 , showing sample output. To print a list of the options you can specify for a particular fence type, execute the following command: For example, the following command lists the fence options for the fence_wti fence agent. To print a list of fence devices currently configured for your cluster, execute the following command:
[ "ccs -h host --lsfenceopts", "ccs -h node-01 --lsfenceopts fence_apc - Fence agent for APC over telnet/ssh fence_apc_snmp - Fence agent for APC, Tripplite PDU over SNMP fence_bladecenter - Fence agent for IBM BladeCenter fence_bladecenter_snmp - Fence agent for IBM BladeCenter over SNMP fence_brocade - Fence agent for HP Brocade over telnet/ssh fence_cisco_mds - Fence agent for Cisco MDS fence_cisco_ucs - Fence agent for Cisco UCS fence_drac - fencing agent for Dell Remote Access Card fence_drac5 - Fence agent for Dell DRAC CMC/5 fence_eaton_snmp - Fence agent for Eaton over SNMP fence_egenera - I/O Fencing agent for the Egenera BladeFrame fence_emerson - Fence agent for Emerson over SNMP fence_eps - Fence agent for ePowerSwitch fence_hpblade - Fence agent for HP BladeSystem fence_ibmblade - Fence agent for IBM BladeCenter over SNMP fence_idrac - Fence agent for IPMI fence_ifmib - Fence agent for IF MIB fence_ilo - Fence agent for HP iLO fence_ilo2 - Fence agent for HP iLO fence_ilo3 - Fence agent for IPMI fence_ilo3_ssh - Fence agent for HP iLO over SSH fence_ilo4 - Fence agent for IPMI fence_ilo4_ssh - Fence agent for HP iLO over SSH fence_ilo_moonshot - Fence agent for HP Moonshot iLO fence_ilo_mp - Fence agent for HP iLO MP fence_ilo_ssh - Fence agent for HP iLO over SSH fence_imm - Fence agent for IPMI fence_intelmodular - Fence agent for Intel Modular fence_ipdu - Fence agent for iPDU over SNMP fence_ipmilan - Fence agent for IPMI fence_kdump - Fence agent for use with kdump fence_mpath - Fence agent for multipath persistent reservation fence_pcmk - Helper that presents a RHCS-style interface to stonith-ng for CMAN based clusters fence_rhevm - Fence agent for RHEV-M REST API fence_rsa - Fence agent for IBM RSA fence_rsb - I/O Fencing agent for Fujitsu-Siemens RSB fence_sanbox2 - Fence agent for QLogic SANBox2 FC switches fence_sanlock - Fence agent for watchdog and shared storage fence_scsi - fence agent for SCSI-3 persistent reservations fence_tripplite_snmp - Fence agent for APC, Tripplite PDU over SNMP fence_virsh - Fence agent for virsh fence_virt - Fence agent for virtual machines fence_vmware - Fence agent for VMWare fence_vmware_soap - Fence agent for VMWare over SOAP API fence_wti - Fence agent for WTI fence_xvm - Fence agent for virtual machines ccs -h host-138 --lsfenceopts fence_apc - Fence agent for APC over telnet/ssh fence_apc_snmp - Fence agent for APC, Tripplite PDU over SNMP fence_bladecenter - Fence agent for IBM BladeCenter fence_bladecenter_snmp - Fence agent for IBM BladeCenter over SNMP fence_brocade - Fence agent for HP Brocade over telnet/ssh fence_cisco_mds - Fence agent for Cisco MDS fence_cisco_ucs - Fence agent for Cisco UCS fence_drac - fencing agent for Dell Remote Access Card fence_drac5 - Fence agent for Dell DRAC CMC/5 fence_eaton_snmp - Fence agent for Eaton over SNMP fence_egenera - I/O Fencing agent for the Egenera BladeFrame fence_emerson - Fence agent for Emerson over SNMP fence_eps - Fence agent for ePowerSwitch fence_hpblade - Fence agent for HP BladeSystem fence_ibmblade - Fence agent for IBM BladeCenter over SNMP fence_idrac - Fence agent for IPMI fence_ifmib - Fence agent for IF MIB fence_ilo - Fence agent for HP iLO fence_ilo2 - Fence agent for HP iLO fence_ilo3 - Fence agent for IPMI fence_ilo3_ssh - Fence agent for HP iLO over SSH fence_ilo4 - Fence agent for IPMI fence_ilo4_ssh - Fence agent for HP iLO over SSH fence_ilo_moonshot - Fence agent for HP Moonshot iLO fence_ilo_mp - Fence agent for HP iLO MP fence_ilo_ssh - Fence agent for 
HP iLO over SSH fence_imm - Fence agent for IPMI fence_intelmodular - Fence agent for Intel Modular fence_ipdu - Fence agent for iPDU over SNMP fence_ipmilan - Fence agent for IPMI fence_kdump - Fence agent for use with kdump fence_mpath - Fence agent for multipath persistent reservation fence_pcmk - Helper that presents a RHCS-style interface to stonith-ng for CMAN based clusters fence_rhevm - Fence agent for RHEV-M REST API fence_rsa - Fence agent for IBM RSA fence_rsb - I/O Fencing agent for Fujitsu-Siemens RSB fence_sanbox2 - Fence agent for QLogic SANBox2 FC switches fence_sanlock - Fence agent for watchdog and shared storage fence_scsi - fence agent for SCSI-3 persistent reservations fence_tripplite_snmp - Fence agent for APC, Tripplite PDU over SNMP fence_virsh - Fence agent for virsh fence_virt - Fence agent for virtual machines fence_vmware - Fence agent for VMWare fence_vmware_soap - Fence agent for VMWare over SOAP API fence_wti - Fence agent for WTI fence_xvm - Fence agent for virtual machines", "ccs -h host --lsfenceopts fence_type", "ccs -h node-01 --lsfenceopts fence_wti fence_wti - Fence agent for WTI Required Options: Optional Options: option: No description available action: Fencing Action ipaddr: IP Address or Hostname login: Login Name passwd: Login password or passphrase passwd_script: Script to retrieve password cmd_prompt: Force command prompt secure: SSH connection identity_file: Identity file for ssh port: Physical plug number or name of virtual machine inet4_only: Forces agent to use IPv4 addresses only inet6_only: Forces agent to use IPv6 addresses only ipport: TCP port to use for connection with device verbose: Verbose mode debug: Write debug information to given file version: Display version information and exit help: Display help and exit separator: Separator for CSV created by operation list power_timeout: Test X seconds for status change after ON/OFF shell_timeout: Wait X seconds for cmd prompt after issuing command login_timeout: Wait X seconds for cmd prompt after login power_wait: Wait X seconds after issuing ON/OFF delay: Wait X seconds before fencing is started retry_on: Count of attempts to retry power on", "ccs -h host --lsfencedev" ]
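After you identify the options that a particular fence agent supports, you can supply those options to the ccs --addfencedev command to configure a device. The following is a minimal sketch only; the device name, address, and credentials are illustrative values, not output from this guide.

ccs -h node-01 --addfencedev mywti agent=fence_wti ipaddr=wti-ip-example login=root passwd=wti-password-example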
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-list-fence-devices-ccs-ca
Chapter 3. Downloading models
Chapter 3. Downloading models Red Hat Enterprise Linux AI allows you to customize or chat with various Large Language Models (LLMs) provided and built by Red Hat and IBM. You can download these models from the Red Hat RHEL AI registry. Table 3.1. Red Hat Enterprise Linux AI version 1.3 LLMs Large Language Models (LLMs) Type Size Purpose Model family NVIDIA Accelerator Support AMD Accelerator Support Intel Accelerator support granite-7b-starter Base model 12.6 GB Base model for customizing, training and fine-tuning Granite 2 Not available Not available Technology preview granite-7b-redhat-lab LAB fine-tuned Granite model 12.6 GB Granite model for inference serving Granite 2 Not available Not available Technology preview granite-8b-starter-v1 Base model 16.0 GB Base model for customizing, training and fine-tuning Granite 3 General availability Technology preview Not available granite-8b-lab-v1 LAB fine-tuned granite model 16.0 GB Granite model for inference serving Granite 3 General availability Technology preview Not available granite-8b-lab-v2-preview LAB fine-tuned granite model 16.0 GB Preview of the version 2 8b Granite model for inference serving Granite 3 Technology preview Technology preview Not available granite-8b-code-instruct LAB fine-tuned granite code model 15.0 GB LAB fine-tuned granite code model for inference serving Granite Code models Technology preview Technology preview Technology preview granite-8b-code-base Granite fine-tuned code model 15.0 GB Granite code model for inference serving Granite Code models Technology preview Technology preview Technology preview mixtral-8x7b-instruct-v0-1 Teacher/critic model 87.0 GB Teacher and critic model for running Synthetic data generation (SDG) Mixtral General availability Technology preview Technology preview prometheus-8x7b-v2-0 Evaluation judge model 87.0 GB Judge model for multi-phase training and evaluation Prometheus 2 General availability Technology preview Technology preview Important Using the `granite-8b-code-instruct` or `granite-8b-code-base` Large Language models (LLMS) as well as running RHEL AI with Intel and AMD accelerators is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Models required for customizing the Granite LLM The granite-7b-starter or granite-8b-starter-v1 base LLM depending on your hardware vendor. The mixtral-8x7b-instruct-v0-1 teacher model for SDG. The prometheus-8x7b-v2-0 judge model for training and evaluation. Additional tools required for customizing an LLM The Low-rank adaptation (LoRA) adaptors enhance the efficiency of the Synthetic Data Generation (SDG) process. The skills-adapter-v3 LoRA layered skills adapter for SDG. The knowledge-adapter-v3 LoRA layered knowledge adapter for SDG. Example command for downloading the adaptors Important The LoRA layered adapters do not show up in the output of the ilab model list command. You can see the skills-adapter-v3 and knowledge-adapter-v3 files in the ls ~/.cache/instructlab/models folder. 3.1. 
Downloading the models from a Red Hat repository You can download the additional optional models created by Red Hat and IBM. Prerequisites You installed RHEL AI with the bootable container image. You initialized InstructLab. You created a Red Hat registry account and logged in on your machine. You have root user access on your machine. Procedure To download the additional LLM models, run the following command: USD ilab model download --repository docker://<repository_and_model> --release <release> where: <repository_and_model> Specifies the repository location of the model as well as the model. You can access the models from the registry.redhat.io/rhelai1/ repository. <release> Specifies the version of the model. Set to 1.3 for the models that are supported on RHEL AI version 1.3. Set to latest for the latest version of the model. Example command USD ilab model download --repository docker://registry.redhat.io/rhelai1/granite-8b-starter-v1 --release latest Verification You can view all the downloaded models, including the new models after training, on your system with the following command: USD ilab model list Example output You can also list the downloaded models in the ls ~/.cache/instructlab/models folder by running the following command: USD ls ~/.cache/instructlab/models Example output granite-8b-starter-v1 granite-8b-lab-v1
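For instance, if you also need the teacher and judge models listed earlier for synthetic data generation and evaluation, you might download them with commands such as the following. The repository paths are assumptions that simply follow the documented registry.redhat.io/rhelai1/ pattern for those model names.

ilab model download --repository docker://registry.redhat.io/rhelai1/mixtral-8x7b-instruct-v0-1 --release 1.3
ilab model download --repository docker://registry.redhat.io/rhelai1/prometheus-8x7b-v2-0 --release 1.3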
[ "ilab model download --repository docker://registry.redhat.io/rhelai1/knowledge-adapter-v3 --release latest", "ilab model download --repository docker://<repository_and_model> --release <release>", "ilab model download --repository docker://registry.redhat.io/rhelai1/granite-8b-starter-v1 --release latest", "ilab model list", "+-----------------------------------+---------------------+---------+ | Model Name | Last Modified | Size | +-----------------------------------+---------------------+---------+ | models/prometheus-8x7b-v2-0 | 2024-08-09 13:28:50 | 87.0 GB| | models/mixtral-8x7b-instruct-v0-1 | 2024-08-09 13:28:24 | 87.0 GB| | models/granite-8b-starter-v1 | 2024-08-09 14:28:40 | 16.6 GB| | models/granite-8b-lab-v1 | 2024-08-09 14:40:35 | 16.6 GB| +-----------------------------------+---------------------+---------+", "ls ~/.cache/instructlab/models", "granite-8b-starter-v1 granite-8b-lab-v1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/building_your_rhel_ai_environment/downloading_ad_models
2.6.2.2. Option Fields
2.6.2.2. Option Fields In addition to basic rules that allow and deny access, the Red Hat Enterprise Linux implementation of TCP Wrappers supports extensions to the access control language through option fields . By using option fields in hosts access rules, administrators can accomplish a variety of tasks such as altering log behavior, consolidating access control, and launching shell commands. 2.6.2.2.1. Logging Option fields let administrators easily change the log facility and priority level for a rule by using the severity directive. In the following example, connections to the SSH daemon from any host in the example.com domain are logged to the default authpriv syslog facility (because no facility value is specified) with a priority of emerg : It is also possible to specify a facility using the severity option. The following example logs any SSH connection attempts by hosts from the example.com domain to the local0 facility with a priority of alert : Note In practice, this example does not work until the syslog daemon ( syslogd ) is configured to log to the local0 facility. Refer to the syslog.conf man page for information about configuring custom log facilities.
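As a minimal sketch of the syslog configuration that the preceding note refers to, you might add a line such as the following to /etc/syslog.conf (or to /etc/rsyslog.conf on systems that run rsyslog); the log file path is only an example.

# Route all messages sent to the local0 facility to a dedicated log file.
local0.*    /var/log/sshd-wrappers.log

After saving the change, restart the logging daemon, for example with service rsyslog restart (or service syslog restart on systems that still run syslogd), so that the new facility takes effect.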
[ "sshd : .example.com : severity emerg", "sshd : .example.com : severity local0.alert" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-tcp_wrappers_configuration_files-option_fields
Appendix D. Swift request headers
Appendix D. Swift request headers Table D.1. Request Headers Name Description Type Required X-Auth-User The Ceph Object Gateway username to authenticate. String Yes X-Auth-Key The key associated with the Ceph Object Gateway username. String Yes
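For illustration, a Swift client typically sends these headers when it requests a token from the gateway's authentication entry point, as in the following curl request; the hostname, port, subuser name, and key are placeholders, and a successful response returns X-Auth-Token and X-Storage-Url headers for use in subsequent requests.

curl -i -H "X-Auth-User: testuser:swift" -H "X-Auth-Key: <swift_secret_key>" http://rgw.example.com:8080/auth/1.0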
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/developer_guide/swift-request-headers_dev
Chapter 9. Configuring outgoing HTTP requests
Chapter 9. Configuring outgoing HTTP requests Red Hat build of Keycloak often needs to make requests to the applications and services that it secures. Red Hat build of Keycloak manages these outgoing connections using an HTTP client. This chapter shows how to configure the client, connection pool, proxy environment settings, timeouts, and more. 9.1. Client Configuration Command The HTTP client that Red Hat build of Keycloak uses for outgoing communication is highly configurable. To configure the Red Hat build of Keycloak outgoing HTTP client, enter this command: bin/kc.[sh|bat] start --spi-connections-http-client-default-<configurationoption>=<value> The following are the command options: establish-connection-timeout-millis Maximum time in milliseconds until establishing a connection times out. Default: Not set. socket-timeout-millis Maximum time of inactivity between two data packets until a socket connection times out, in milliseconds. Default: 5000ms connection-pool-size Size of the connection pool for outgoing connections. Default: 128. max-pooled-per-route How many connections can be pooled per host. Default: 64. connection-ttl-millis Maximum connection time to live in milliseconds. Default: Not set. max-connection-idle-time-millis Maximum time an idle connection stays in the connection pool, in milliseconds. Idle connections will be removed from the pool by a background cleaner thread. Set this option to -1 to disable this check. Default: 900000. disable-cookies Enable or disable caching of cookies. Default: true. client-keystore File path to a Java keystore file. This keystore contains client certificates for two-way SSL. client-keystore-password Password for the client keystore. REQUIRED, when client-keystore is set. client-key-password Password for the private key of the client. REQUIRED, when client-keystore is set. proxy-mappings Specify proxy configurations for outgoing HTTP requests. For more details, see Section 9.2, "Proxy mappings for outgoing HTTP requests" . disable-trust-manager If an outgoing request requires HTTPS and this configuration option is set to true, you do not have to specify a truststore. This setting should be used only during development and never in production because it will disable verification of SSL certificates. Default: false. 9.2. Proxy mappings for outgoing HTTP requests To configure outgoing requests to use a proxy, you can use the following standard proxy environment variables to configure the proxy mappings: HTTP_PROXY , HTTPS_PROXY , and NO_PROXY . The HTTP_PROXY and HTTPS_PROXY variables represent the proxy server that is used for outgoing HTTP requests. Red Hat build of Keycloak does not differentiate between the two variables. If you define both variables, HTTPS_PROXY takes precedence regardless of the actual scheme that the proxy server uses. The NO_PROXY variable defines a comma separated list of hostnames that should not use the proxy. For each hostname that you specify, all its subdomains are also excluded from using proxy. The environment variables can be lowercase or uppercase. Lowercase takes precedence. For example, if you define both HTTP_PROXY and http_proxy , http_proxy is used. Example of proxy mappings and environment variables In this example, the following results occur: All outgoing requests use the proxy https://www-proxy.acme.com:8080 except for requests to google.com or any subdomain of google.com, such as auth.google.com. 
login.facebook.com and all its subdomains do not use the defined proxy, but groups.facebook.com uses the proxy because it is not a subdomain of login.facebook.com. 9.3. Proxy mappings using regular expressions An alternative to using environment variables for proxy mappings is to configure a comma-delimited list of proxy-mappings for outgoing requests sent by Red Hat build of Keycloak. A proxy-mapping consists of a regex-based hostname pattern and a proxy-uri, using the format hostname-pattern;proxy-uri . For example, consider the following regex: You apply a regex-based hostname pattern by entering this command: bin/kc.[sh|bat] start --spi-connections-http-client-default-proxy-mappings="'*\\\.(google|googleapis)\\\.com;http://www-proxy.acme.com:8080'" To determine the proxy for the outgoing HTTP request, the following occurs: The target hostname is matched against all configured hostname patterns. The proxy-uri of the first matching pattern is used. If no configured pattern matches the hostname, no proxy is used. When your proxy server requires authentication, include the credentials of the proxy user in the format username:password@ . For example: Example of regular expressions for proxy-mapping: In this example, the following occurs: The special value NO_PROXY for the proxy-uri is used, which means that no proxy is used for hosts matching the associated hostname pattern. A catch-all pattern ends the proxy-mappings, providing a default proxy for all outgoing requests. 9.4. Configuring trusted certificates for TLS connections See Configuring trusted certificates for outgoing requests for how to configure a Red Hat build of Keycloak Truststore so that Red Hat build of Keycloak is able to perform outgoing requests using TLS.
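As an illustrative combination of the client options described at the start of this chapter, the following startup command raises the connection pool size, sets a socket timeout, and supplies a client keystore for two-way SSL. All values and paths are placeholders rather than recommended settings.

bin/kc.sh start \
  --spi-connections-http-client-default-connection-pool-size=256 \
  --spi-connections-http-client-default-socket-timeout-millis=10000 \
  --spi-connections-http-client-default-client-keystore=/path/to/client-keystore.jks \
  --spi-connections-http-client-default-client-keystore-password=<keystore_password> \
  --spi-connections-http-client-default-client-key-password=<key_password>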
[ "bin/kc.[sh|bat] start --spi-connections-http-client-default-<configurationoption>=<value>", "HTTPS_PROXY=https://www-proxy.acme.com:8080 NO_PROXY=google.com,login.facebook.com", ".*\\.(google|googleapis)\\.com", "bin/kc.[sh|bat] start --spi-connections-http-client-default-proxy-mappings=\"'*\\\\\\.(google|googleapis)\\\\\\.com;http://www-proxy.acme.com:8080'\"", ".*\\.(google|googleapis)\\.com;http://proxyuser:[email protected]:8080", "All requests to Google APIs use http://www-proxy.acme.com:8080 as proxy .*\\.(google|googleapis)\\.com;http://www-proxy.acme.com:8080 All requests to internal systems use no proxy .*\\.acme\\.com;NO_PROXY All other requests use http://fallback:8080 as proxy .*;http://fallback:8080" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_guide/outgoinghttp-
Chapter 35. KafkaClusterTemplate schema reference
Chapter 35. KafkaClusterTemplate schema reference Used in: KafkaClusterSpec Property Property type Description statefulset StatefulSetTemplate The statefulset property has been deprecated. Support for StatefulSets was removed in Streams for Apache Kafka 2.5. This property is ignored. Template for Kafka StatefulSet . pod PodTemplate Template for Kafka Pods . bootstrapService InternalServiceTemplate Template for Kafka bootstrap Service . brokersService InternalServiceTemplate Template for Kafka broker Service . externalBootstrapService ResourceTemplate Template for Kafka external bootstrap Service . perPodService ResourceTemplate Template for Kafka per-pod Services used for access from outside of OpenShift. externalBootstrapRoute ResourceTemplate Template for Kafka external bootstrap Route . perPodRoute ResourceTemplate Template for Kafka per-pod Routes used for access from outside of OpenShift. externalBootstrapIngress ResourceTemplate Template for Kafka external bootstrap Ingress . perPodIngress ResourceTemplate Template for Kafka per-pod Ingress used for access from outside of OpenShift. persistentVolumeClaim ResourceTemplate Template for all Kafka PersistentVolumeClaims . podDisruptionBudget PodDisruptionBudgetTemplate Template for Kafka PodDisruptionBudget . kafkaContainer ContainerTemplate Template for the Kafka broker container. initContainer ContainerTemplate Template for the Kafka init container. clusterCaCert ResourceTemplate Template for Secret with Kafka Cluster certificate public key. serviceAccount ResourceTemplate Template for the Kafka service account. jmxSecret ResourceTemplate Template for Secret of the Kafka Cluster JMX authentication. clusterRoleBinding ResourceTemplate Template for the Kafka ClusterRoleBinding. podSet ResourceTemplate Template for Kafka StrimziPodSet resource.
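The template is referenced from the Kafka custom resource under spec.kafka.template. The following fragment is a minimal sketch that sets a custom pod label and a per-pod Service annotation; the resource name, label, and annotation values are illustrative only, and the rest of the broker configuration is omitted.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ... listeners, replicas, and storage omitted ...
    template:
      pod:
        metadata:
          labels:
            app.kubernetes.io/part-of: my-cluster
      perPodService:
        metadata:
          annotations:
            external-dns.alpha.kubernetes.io/ttl: "60"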
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkaclustertemplate-reference
Chapter 10. Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates
Chapter 10. Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates In OpenShift Container Platform version 4.14, you can install a cluster on Google Cloud Platform (GCP) that uses infrastructure that you provide. The steps for performing a user-provided infrastructure install are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 10.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . Note Be sure to also review this site list if you are configuring a proxy. 10.2. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 10.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 10.4. 
Configuring your GCP project Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 10.4.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 10.4.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 10.1. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 10.2. Optional API services API service Console service name Cloud Deployment Manager V2 API deploymentmanager.googleapis.com Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 10.4.3. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. 
If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 10.4.4. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 10.3. GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Networking Global 11 1 Forwarding rules Compute Global 2 0 Health checks Compute Global 2 0 Images Compute Global 1 0 Networks Networking Global 1 0 Routers Networking Global 1 0 Routes Networking Global 2 0 Subnetworks Compute Global 2 0 Target pools Networking Global 2 0 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 10.4.5. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. 
You must have a service account key or a virtual machine with an attached service account to create the cluster. Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. 10.4.6. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If the security policies for your organization require a more restrictive set of permissions, you can create a service account with the following permissions. Important If you configure the Cloud Credential Operator to operate in passthrough mode, you must use roles rather than granular permissions. If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin IAM Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using passthrough credentials mode Compute Load Balancer Admin IAM Role Viewer Required roles for user-provisioned GCP infrastructure Deployment Manager Editor The roles are applied to the service accounts that the control plane and compute machines use: Table 10.4. GCP service account permissions Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin 10.4.7. Required GCP permissions for user-provisioned infrastructure When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If the security policies for your organization require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the user-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster. Important If you configure the Cloud Credential Operator to operate in passthrough mode, you must use roles rather than granular permissions. For more information, see "Required roles for using passthrough credentials mode" in the "Required GCP roles" section. Example 10.1. Required permissions for creating network resources compute.addresses.create compute.addresses.createInternal compute.addresses.delete compute.addresses.get compute.addresses.list compute.addresses.use compute.addresses.useInternal compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.forwardingRules.create compute.forwardingRules.get compute.forwardingRules.list compute.forwardingRules.setLabels compute.networks.create compute.networks.get compute.networks.list compute.networks.updatePolicy compute.routers.create compute.routers.get compute.routers.list compute.routers.update compute.routes.list compute.subnetworks.create compute.subnetworks.get compute.subnetworks.list compute.subnetworks.use compute.subnetworks.useExternalIp Example 10.2. 
Required permissions for creating load balancer resources compute.regionBackendServices.create compute.regionBackendServices.get compute.regionBackendServices.list compute.regionBackendServices.update compute.regionBackendServices.use compute.targetPools.addInstance compute.targetPools.create compute.targetPools.get compute.targetPools.list compute.targetPools.removeInstance compute.targetPools.use Example 10.3. Required permissions for creating DNS resources dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.list dns.resourceRecordSets.update Example 10.4. Required permissions for creating Service Account resources iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.get iam.serviceAccountKeys.list iam.serviceAccounts.actAs iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 10.5. Required permissions for creating compute resources compute.disks.create compute.disks.get compute.disks.list compute.instanceGroups.create compute.instanceGroups.delete compute.instanceGroups.get compute.instanceGroups.list compute.instanceGroups.update compute.instanceGroups.use compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setLabels compute.instances.setMetadata compute.instances.setServiceAccount compute.instances.setTags compute.instances.use compute.machineTypes.get compute.machineTypes.list Example 10.6. Required for creating storage resources storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.objects.create storage.objects.delete storage.objects.get storage.objects.list Example 10.7. Required permissions for creating health check resources compute.healthChecks.create compute.healthChecks.get compute.healthChecks.list compute.healthChecks.useReadOnly compute.httpHealthChecks.create compute.httpHealthChecks.get compute.httpHealthChecks.list compute.httpHealthChecks.useReadOnly Example 10.8. Required permissions to get GCP zone and region related information compute.globalOperations.get compute.regionOperations.get compute.regions.list compute.zoneOperations.get compute.zones.get compute.zones.list Example 10.9. Required permissions for checking services and quotas monitoring.timeSeries.list serviceusage.quotas.get serviceusage.services.list Example 10.10. Required IAM permissions for installation iam.roles.get Example 10.11. Required Images permissions for installation compute.images.create compute.images.delete compute.images.get compute.images.list Example 10.12. Optional permission for running gather bootstrap compute.instances.getSerialPortOutput Example 10.13. Required permissions for deleting network resources compute.addresses.delete compute.addresses.deleteInternal compute.addresses.list compute.firewalls.delete compute.firewalls.list compute.forwardingRules.delete compute.forwardingRules.list compute.networks.delete compute.networks.list compute.networks.updatePolicy compute.routers.delete compute.routers.list compute.routes.list compute.subnetworks.delete compute.subnetworks.list Example 10.14. 
Required permissions for deleting load balancer resources compute.regionBackendServices.delete compute.regionBackendServices.list compute.targetPools.delete compute.targetPools.list Example 10.15. Required permissions for deleting DNS resources dns.changes.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.resourceRecordSets.delete dns.resourceRecordSets.list Example 10.16. Required permissions for deleting Service Account resources iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 10.17. Required permissions for deleting compute resources compute.disks.delete compute.disks.list compute.instanceGroups.delete compute.instanceGroups.list compute.instances.delete compute.instances.list compute.instances.stop compute.machineTypes.list Example 10.18. Required for deleting storage resources storage.buckets.delete storage.buckets.getIamPolicy storage.buckets.list storage.objects.delete storage.objects.list Example 10.19. Required permissions for deleting health check resources compute.healthChecks.delete compute.healthChecks.list compute.httpHealthChecks.delete compute.httpHealthChecks.list Example 10.20. Required Images permissions for deletion compute.images.delete compute.images.list Example 10.21. Required permissions to get Region related information compute.regions.get Example 10.22. Required Deployment Manager permissions deploymentmanager.deployments.create deploymentmanager.deployments.delete deploymentmanager.deployments.get deploymentmanager.deployments.list deploymentmanager.manifests.get deploymentmanager.operations.get deploymentmanager.resources.list Additional resources Optimizing storage 10.4.8. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 10.4.9. Installing and configuring CLI tools for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must install and configure the CLI tools for GCP. 
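After you install and authenticate the gcloud CLI in the following procedure, you can also check machine type availability and regional quotas for the regions discussed above before you choose one. A minimal sketch; the region, zone, and <project_id> values are examples only:

# List the machine types offered in a candidate zone.
gcloud compute machine-types list --filter="zone:(us-central1-a)" --project=<project_id>

# Review the regional quotas, such as CPUs, static IP addresses, and persistent disk SSD storage.
gcloud compute regions describe us-central1 --project=<project_id> --format="json(quotas)"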
Prerequisites You created a project to host your cluster. You created a service account and granted it the required permissions. Procedure Install the following binaries in USDPATH : gcloud gsutil See Install the latest Cloud SDK version in the GCP documentation. Authenticate using the gcloud tool with your configured service account. See Authorizing with a service account in the GCP documentation. 10.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 10.5.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 10.5. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 10.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 10.6. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 10.5.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 10.23. Machine series C2 C2D C3 E2 M1 N1 N2 N2D Tau T2D 10.5.4. Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 10.24. Machine series for 64-bit ARM machines Tau T2A 10.5.5. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . 10.6. Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 10.6.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. 
Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 10.6.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). 
Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on GCP". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 10.6.3. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. 
Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 10.6.4. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 10.6.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 10.6.6. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. 
See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Optional: If you do not want the cluster to provision compute machines, remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. 
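As a quick sanity check, you can list the installation directory before moving on; based on the example output shown earlier in this document, the expected layout is the three Ignition config files plus the auth directory and metadata.json:

# Confirm that the Ignition config files and supporting assets were generated.
ls <installation_directory>
# Expected entries: auth  bootstrap.ign  master.ign  metadata.json  worker.ign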
The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Optional: Adding the ingress DNS records 10.7. Exporting common variables 10.7.1. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Google Cloud Platform (GCP). The infrastructure name is also used to locate the appropriate GCP resources during an OpenShift Container Platform installation. The provided Deployment Manager templates contain references to this infrastructure name, so you must extract it. Prerequisites You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 10.7.2. Exporting common variables for Deployment Manager templates You must export a common set of variables that are used with the provided Deployment Manager templates used to assist in completing a user-provided infrastructure install on Google Cloud Platform (GCP). Note Specific Deployment Manager templates can also require additional exported variables, which are detailed in their related procedures. Procedure Export the following common variables to be used by the provided Deployment Manager templates: USD export BASE_DOMAIN='<base_domain>' USD export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' USD export NETWORK_CIDR='10.0.0.0/16' USD export MASTER_SUBNET_CIDR='10.0.0.0/17' USD export WORKER_SUBNET_CIDR='10.0.128.0/17' USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 USD export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` USD export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` USD export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` USD export REGION=`jq -r .gcp.region <installation_directory>/metadata.json` 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 10.8. Creating a VPC in GCP You must create a VPC in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements. One way to create the VPC is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for the VPC section of this topic and save it as 01_vpc.py on your computer. This template describes the VPC that your cluster requires. Create a 01_vpc.yaml resource definition file: USD cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 
2 region is the region to deploy the cluster into, for example us-central1 . 3 master_subnet_cidr is the CIDR for the master subnet, for example 10.0.0.0/17 . 4 worker_subnet_cidr is the CIDR for the worker subnet, for example 10.0.128.0/17 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml 10.8.1. Deployment Manager template for the VPC You can use the following Deployment Manager template to deploy the VPC that you need for your OpenShift Container Platform cluster: Example 10.25. 01_vpc.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources} 10.9. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 10.9.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 10.9.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. 
Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 10.7. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 10.8. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 10.9. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 10.10. Creating load balancers in GCP You must configure load balancers in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for the internal load balancer section of this topic and save it as 02_lb_int.py on your computer. This template describes the internal load balancing objects that your cluster requires. For an external cluster, also copy the template from the Deployment Manager template for the external load balancer section of this topic and save it as 02_lb_ext.py on your computer. This template describes the external load balancing objects that your cluster requires. 
Export the variables that the deployment template uses: Export the cluster network location: USD export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`) Export the control plane subnet location: USD export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the three zones that the cluster uses: USD export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`) USD export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`) USD export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`) Create a 02_infra.yaml resource definition file: USD cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF 1 2 Required only when deploying an external cluster. 3 infra_id is the INFRA_ID infrastructure name from the extraction step. 4 region is the region to deploy the cluster into, for example us-central1 . 5 control_subnet is the URI to the control subnet. 6 zones are the zones to deploy the control plane instances into, like us-east1-b , us-east1-c , and us-east1-d . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml Export the cluster IP address: USD export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`) For an external cluster, also export the cluster public IP address: USD export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`) 10.10.1. Deployment Manager template for the external load balancer You can use the following Deployment Manager template to deploy the external load balancer that you need for your OpenShift Container Platform cluster: Example 10.26. 02_lb_ext.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' 
+ context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources} 10.10.2. Deployment Manager template for the internal load balancer You can use the following Deployment Manager template to deploy the internal load balancer that you need for your OpenShift Container Platform cluster: Example 10.27. 02_lb_int.py Deployment Manager template def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': "HTTPS" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources} You will need this template in addition to the 02_lb_ext.py template when you create an external cluster. 10.11. Creating a private DNS zone in GCP You must configure a private DNS zone in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create this component is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Procedure Copy the template from the Deployment Manager template for the private DNS section of this topic and save it as 02_dns.py on your computer. This template describes the private DNS objects that your cluster requires. 
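The DNS records that you create in the next steps use the cluster IP addresses exported in the previous section. Before continuing, you can optionally confirm that those addresses exist; a brief sketch, assuming the same <infra_id> and <region> values used throughout:

# List the reserved cluster addresses created by the load balancer deployment.
gcloud compute addresses list --regions=<region> --filter="name~^<infra_id>-cluster"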
Create a 02_dns.yaml resource definition file: USD cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 cluster_domain is the domain for the cluster, for example openshift.example.com . 3 cluster_network is the selfLink URL to the cluster network. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually: Add the internal DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone For an external cluster, also add the external DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} 10.11.1. Deployment Manager template for the private DNS You can use the following Deployment Manager template to deploy the private DNS that you need for your OpenShift Container Platform cluster: Example 10.28. 02_dns.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources} 10.12. Creating firewall rules in GCP You must create firewall rules in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Procedure Copy the template from the Deployment Manager template for firewall rules section of this topic and save it as 03_firewall.py on your computer. This template describes the security groups that your cluster requires. 
Create a 03_firewall.yaml resource definition file: USD cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF 1 allowed_external_cidr is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to USD{NETWORK_CIDR} . 2 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 cluster_network is the selfLink URL to the cluster network. 4 network_cidr is the CIDR of the VPC network, for example 10.0.0.0/16 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml 10.12.1. Deployment Manager template for firewall rules You can use the following Deployment Manager template to deploy the firewall rules that you need for your OpenShift Container Platform cluster: Example 10.29. 03_firewall.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network':
context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources} 10.13. Creating IAM roles in GCP You must create IAM roles in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for IAM roles section of this topic and save it as 03_iam.py on your computer. This template describes the IAM roles that your cluster requires. Create a 03_iam.yaml resource definition file: USD cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml Export the variable for the master service account: USD export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the worker service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." 
--format json | jq -r '.[0].email'`) Export the variable for the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`) The templates do not create the policy bindings due to limitations of Deployment Manager, so you must create them manually: USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin" Create a service account key and store it locally for later use: USD gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT} 10.13.1. Deployment Manager template for IAM roles You can use the following Deployment Manager template to deploy the IAM roles that you need for your OpenShift Container Platform cluster: Example 10.30. 03_iam.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources} 10.14. Creating the RHCOS cluster image for the GCP infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Google Cloud Platform (GCP) for your OpenShift Container Platform nodes. Procedure Obtain the RHCOS image from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-<arch>-gcp.<arch>.tar.gz . 
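For example, one way to fetch the image from the mirror with curl; the URL path and version shown here are illustrative, so confirm the exact file name on the RHCOS image mirror page for your OpenShift Container Platform version:

# Download the GCP variant of the RHCOS image (example path and version shown).
curl -LO https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.14/latest/rhcos-<version>-x86_64-gcp.x86_64.tar.gz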
Create the Google storage bucket: USD gsutil mb gs://<bucket_name> Upload the RHCOS image to the Google storage bucket: USD gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name> Export the uploaded RHCOS image location as a variable: USD export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz Create the cluster image: USD gcloud compute images create "USD{INFRA_ID}-rhcos-image" \ --source-uri="USD{IMAGE_SOURCE}" 10.15. Creating the bootstrap machine in GCP You must create the bootstrap machine in Google Cloud Platform (GCP) to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Ensure you installed pyOpenSSL. Procedure Copy the template from the Deployment Manager template for the bootstrap machine section of this topic and save it as 04_bootstrap.py on your computer. This template describes the bootstrap machine that your cluster requires. Export the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that the installation program requires: USD export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`) Create a bucket and upload the bootstrap.ign file: USD gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition USD gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/ Create a signed URL for the bootstrap instance to use to access the Ignition config. Export the URL from the output as a variable: USD export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print USD5}'` Create a 04_bootstrap.yaml resource definition file: USD cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 zone is the zone to deploy the bootstrap instance into, for example us-central1-b . 4 cluster_network is the selfLink URL to the cluster network. 5 control_subnet is the selfLink URL to the control subnet. 6 image is the selfLink URL to the RHCOS image. 7 machine_type is the machine type of the instance, for example n1-standard-4 . 8 root_volume_size is the boot disk size for the bootstrap machine. 9 bootstrap_ign is the URL output when creating a signed URL. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the bootstrap machine manually. 
Add the bootstrap instance to the internal load balancer instance group: USD gcloud compute instance-groups unmanaged add-instances \ USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap Add the bootstrap instance group to the internal load balancer backend service: USD gcloud compute backend-services add-backend \ USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} 10.15.1. Deployment Manager template for the bootstrap machine You can use the following Deployment Manager template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 10.31. 04_bootstrap.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '"}},"version":"3.2.0"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources} 10.16. Creating the control plane machines in GCP You must create the control plane machines in Google Cloud Platform (GCP) for your cluster to use. One way to create these machines is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables , Creating load balancers in GCP , Creating IAM roles in GCP , and Creating the bootstrap machine in GCP sections. Create the bootstrap machine. Procedure Copy the template from the Deployment Manager template for control plane machines section of this topic and save it as 05_control_plane.py on your computer. This template describes the control plane machines that your cluster requires. 
Export the following variable required by the resource definition: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign` Create a 05_control_plane.yaml resource definition file: USD cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 zones are the zones to deploy the control plane instances into, for example us-central1-a , us-central1-b , and us-central1-c . 3 control_subnet is the selfLink URL to the control subnet. 4 image is the selfLink URL to the RHCOS image. 5 machine_type is the machine type of the instance, for example n1-standard-4 . 6 service_account_email is the email address for the master service account that you created. 7 ignition is the contents of the master.ign file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually. Run the following commands to add the control plane machines to the appropriate instance groups: USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2 For an external cluster, you must also run the following commands to add the control plane machines to the target pools: USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_0}" --instances=USD{INFRA_ID}-master-0 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_1}" --instances=USD{INFRA_ID}-master-1 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_2}" --instances=USD{INFRA_ID}-master-2 10.16.1. Deployment Manager template for control plane machines You can use the following Deployment Manager template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 10.32. 
05_control_plane.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources} 10.17. Wait for bootstrap completion and remove bootstrap resources in GCP After you create all of the required infrastructure in Google Cloud Platform (GCP), wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Create the bootstrap machine. Create the control plane machines. 
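Note If the bootstrap process does not complete in the expected time, you can collect installation logs from the bootstrap and control plane hosts to troubleshoot the failure. The following is a minimal sketch, not part of the documented procedure, assuming that the hosts are reachable over SSH from the machine that runs the installation program and that the addresses are placeholders for your environment: $ ./openshift-install gather bootstrap --dir <installation_directory> --bootstrap <bootstrap_address> --master <master_1_address> --master <master_2_address> --master <master_3_address>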
Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} USD gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign USD gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition USD gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap 10.18. Creating additional worker machines in GCP You can create worker machines in Google Cloud Platform (GCP) for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. Note If you are installing a three-node cluster, skip this step. A three-node cluster consists of three control plane machines, which also act as compute machines. In this example, you manually launch one instance by using the Deployment Manager template. Additional instances can be launched by including additional resources of type 06_worker.py in the file. Note If you do not use the provided Deployment Manager template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables , Creating load balancers in GCP , and Creating the bootstrap machine in GCP sections. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the Deployment Manager template for worker machines section of this topic and save it as 06_worker.py on your computer. This template describes the worker machines that your cluster requires. Export the variables that the resource definition uses. Export the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the email address for your service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." 
--format json | jq -r '.[0].email'`) Export the location of the compute machine Ignition config file: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign` Create a 06_worker.yaml resource definition file: USD cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF 1 name is the name of the worker machine, for example worker-0 . 2 9 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 10 zone is the zone to deploy the worker machine into, for example us-central1-a . 4 11 compute_subnet is the selfLink URL to the compute subnet. 5 12 image is the selfLink URL to the RHCOS image. 1 6 13 machine_type is the machine type of the instance, for example n1-standard-4 . 7 14 service_account_email is the email address for the worker service account that you created. 8 15 ignition is the contents of the worker.ign file. Optional: If you want to launch additional instances, include additional resources of type 06_worker.py in your 06_worker.yaml resource definition file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 10.18.1. Deployment Manager template for worker machines You can use the following Deployment Manager template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 10.33. 06_worker.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources} 10.19. 
Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 10.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You installed the oc CLI. Ensure the bootstrap process completed successfully. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 10.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. 
You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... 
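Note Server requests are submitted by the nodes themselves, so the REQUESTOR column shows a system:node:<node_name> identity instead of the node-bootstrapper service account. The following is a minimal sketch, offered as an illustration rather than as part of the documented procedure, of a command that lists each pending request together with the identity that submitted it so that you can distinguish client requests from server requests: $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}} {{.spec.username}}{{"\n"}}{{end}}{{end}}'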
If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 10.22. Optional: Adding the ingress DNS records If you removed the DNS zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites Ensure you defined the variables in the Exporting common variables section. Remove the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs. Ensure the bootstrap process completed successfully. Procedure Wait for the Ingress router to create a load balancer and populate the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 Add the A record to your zones: To use A records: Export the variable for the router IP address: USD export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add the A record to the private zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone For an external cluster, also add the A record to the public zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} To add explicit domains instead of using a wildcard, create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com 10.23. Completing a GCP installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Google Cloud Platform (GCP) user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Ensure the bootstrap process completed successfully. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Observe the running state of your cluster. 
Run the following command to view the current cluster version and status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO): USD oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m Run the following command to view your cluster pods: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m ... openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m When the current cluster version is AVAILABLE , the installation is complete. 10.24. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 10.25. Next steps Customize your cluster. If necessary, you can opt out of remote health reporting. Configure Global Access for an Ingress Controller on GCP.
[ "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
β”œβ”€β”€ auth β”‚ β”œβ”€β”€ kubeadmin-password β”‚ └── kubeconfig β”œβ”€β”€ bootstrap.ign β”œβ”€β”€ master.ign β”œβ”€β”€ metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "export BASE_DOMAIN='<base_domain>' export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' export NETWORK_CIDR='10.0.0.0/16' export MASTER_SUBNET_CIDR='10.0.0.0/17' export WORKER_SUBNET_CIDR='10.0.128.0/17' export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`", "cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' 
+ context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}", "export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`)", "export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)", "export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)", "export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)", "cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml", "export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)", "export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}", "def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' 
+ context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}", "cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}", "cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': 
['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}", "cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml", "export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"", "gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}", "gsutil mb gs://<bucket_name>", "gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>", "export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz", "gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"", "export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)", "gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition", "gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/", "export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | 
grep \"^gs:\" | awk '{print USD5}'`", "cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap", "gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}", "export MASTER_IGNITION=`cat <installation_directory>/master.ign`", "cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0", "gcloud compute 
target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign", "gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition", "gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap", "export COMPUTE_SUBNET=(`gcloud compute networks subnets 
describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign`", "cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default 
LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m 
openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_gcp/installing-gcp-user-infra
Chapter 3. Device Drivers
Chapter 3. Device Drivers This chapter provides a comprehensive listing of all device drivers which were updated in Red Hat Enterprise Linux 6.7.
Storage Drivers The hpsa driver has been upgraded to version 3.4.4-1-RH4. The lpfc driver has been upgraded to version 10.6.0.20. The megaraid_sas driver has been upgraded to version 06.806.08.00-rh3. The mpt2sas driver has been upgraded to version 20.101.00.00. The mpt3sas driver has been upgraded to version 04.100.00.00-rh. The Multiple Devices (MD) drivers have been upgraded to the latest upstream version. The Nonvolatile Memory Express (NVMe) driver has been upgraded to version 0.10. The qla4xxx driver has been upgraded to version 5.03.00.00.06.07-k0. The qla2xxx driver has been upgraded to version 8.07.00.16.06.7-k.
Network Drivers The be2net driver has been upgraded to version 10.4r. The cnic driver has been upgraded to version 2.5.20. The bonding driver has been upgraded to version 3.7.1. The forcedeth driver has been upgraded to the latest upstream version. The i40e driver has been upgraded to version 1.2.9-k. The qlcnic driver has been upgraded to version 5.3.62.1. The r8169 driver has been upgraded to version 2.3LK-NAPI.
Miscellaneous Drivers The drm driver has been upgraded to the latest upstream version. The scsi_debug driver has been updated to version 1.82.
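To check which of these driver versions a particular system actually provides, the module metadata can be queried directly. The following is a minimal sketch rather than part of the original release notes; the hpsa driver is used only as an example, and the same commands apply to any module listed above.
# Print the version string recorded in the module metadata (hpsa is an arbitrary example).
modinfo -F version hpsa
# For a module that is currently loaded, the same information is usually exposed under /sys.
cat /sys/module/hpsa/version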
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/ch-device_drivers
Chapter 10. Configuring Attribute Encryption
Chapter 10. Configuring Attribute Encryption The Directory Server offers a number of mechanisms to secure access to sensitive data, such as access control rules to prevent unauthorized users from reading certain entries or attributes within entries, and TLS to protect data from eavesdropping and tampering on untrusted networks. However, if a copy of the server's database files falls into the hands of an unauthorized person, that person could potentially extract sensitive information from those files. Because information in a database is stored in plain text, some sensitive information, such as government identification numbers or passwords, may not be protected enough by standard access control measures. For highly sensitive information, this potential for information loss could present a significant security risk. To remove that risk, Directory Server allows portions of its database to be encrypted. Once encrypted, the data remains protected even if an attacker obtains a copy of the server's database files.
Database encryption allows attributes to be encrypted in the database. Both encryption and the encryption cipher are configurable per attribute, per back end. When configured, every instance of a particular attribute, including index data, is encrypted for every entry stored in that database. An additional benefit of attribute encryption is that encrypted values can only be sent to clients with a Security Strength Factor (SSF) greater than 1.
Note There is one exception to encrypted data: any value that is used as the RDN for an entry is not encrypted within the entry DN. For example, if the uid attribute is encrypted, the value is encrypted in the entry but is still displayed in plain text in the DN: That would allow someone to discover the encrypted value. Any attribute used within the entry DN cannot be effectively encrypted, since it will always be displayed in the DN. Be aware of which attributes are used to build the DN and design the attribute encryption model accordingly.
Indexed attributes may be encrypted, and attribute encryption is fully compatible with eq and pres indexing. The contents of the index files that are normally derived from attribute values are also encrypted to prevent an attacker from recovering part or all of the encrypted data from an analysis of the indexes. Because the server pre-encrypts all index keys before looking up an index for an encrypted attribute, there is some performance cost for searches that use an encrypted index, but the overhead is not severe enough to outweigh the benefit of using an index.
10.1. Encryption Keys In order to use attribute encryption, the server must be configured for TLS and have TLS enabled, because attribute encryption uses the server's TLS encryption key and the same PIN input methods as TLS. The PIN must either be entered manually upon server startup or a PIN file must be used. Randomly generated symmetric cipher keys are used to encrypt and decrypt attribute data. A separate key is used for each configured cipher. These keys are wrapped using the public key from the server's TLS certificate, and the resulting wrapped key is stored within the server's configuration files. The effective strength of the attribute encryption is never higher than the strength of the server's TLS key used for wrapping. Without access to the server's private key, it is not possible to recover the symmetric keys from the wrapped copies.
Warning There is no mechanism for recovering a lost key. Therefore, it is especially important to back up the server's certificate database safely. If the server's certificate were lost, it would not be possible to decrypt any encrypted data stored in its database.
Warning If the TLS certificate is expiring and needs to be renewed, export the encrypted back end instance before the renewal. Update the certificate, then reimport the exported LDIF file.
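To make the per-attribute configuration described above concrete, the following sketch enables encryption for a single attribute by adding a configuration entry under the relevant back end. It is a minimal illustration, not a definitive procedure: the attribute name (telephoneNumber), back-end name (userRoot), host name, and bind credentials are assumed placeholders, and the exact entry location and supported cipher names should be confirmed against the configuration reference for the Directory Server version in use.
# Assumption for illustration: encrypt the telephoneNumber attribute in the userRoot back end with AES.
ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x <<EOF
dn: cn=telephoneNumber,cn=encrypted attributes,cn=userRoot,cn=ldbm database,cn=plugins,cn=config
changetype: add
objectClass: top
objectClass: nsAttributeEncryption
cn: telephoneNumber
nsEncryptionAlgorithm: AES
EOF
Values that were already stored before encryption was enabled are generally only encrypted after the back end is exported and reimported, which mirrors the export and reimport step recommended above for certificate renewal.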
[ "dn: uid=jsmith1234 ,ou=People,dc=example,dc=com uid:: Sf04P9nJWGU1qiW9JJCGRg==" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Creating_and_Maintaining_Databases-Database_Encryption