title
stringlengths 4
168
| content
stringlengths 7
1.74M
| commands
sequencelengths 1
5.62k
⌀ | url
stringlengths 79
342
|
---|---|---|---|
Chapter 5. Installing the Ansible Automation Platform Operator | Chapter 5. Installing the Ansible Automation Platform Operator When installing the Ansible Automation Platform operator the preferred method of deployment is to install the cluster-scoped operator on a targeted namespace with manual update approval. The main advantage of this deployment method is it impacts only resources within the targeted namespace(s) that can in turn provide flexibility when wanting to limit the scope of the AAP operator in how it is handled over different namespaces. For instance, if you intend to have separate devel and prod namespaces to manage your different AAP deployments while testing upgrades. The steps to deploy the Ansible Automation Platform operator are as follows. Log in to the Red Hat OpenShift web console using your cluster credentials. In the left-hand navigation menu, select Operators OperatorHub. Search for Ansible Automation Platform and select it. On the Ansible Automation Platform Install page, select "Install". On the "Install Operator" page, select the appropriate update channel, stable-2.3-cluster-scoped select the appropriate installation mode, A specific namespace on the cluster select the appropriate installed namespace, Operator recommended Namespace: aap select the appropriate update approval, e.g. Manual . Click Install . Click Approve on the Manual approval required. The process to install the Ansible Automation Platform may take a few minutes prior to being available. Once the installation is complete, select the View Operator button to view the installed operator in the namespace specified during the installation (e.g. aap ). Note This AAP operator deployment only targets the namespace aap . If additional namespaces are to be targeted (managed) by the AAP operator, one must add them to the OperatorGroup spec file. Details Appendix F, Adding additional managed namespaces to the AAP Operator . Note The default resource values for the Ansible Automation Platform operator are suitable for typical installations. However, if deploying a large number of automation controller and automation hub environments, it is recommended to increase the resource threshold for the Ansible Automation Platform operator within the subscription spec using subscription.spec.config.resources . This ensures that the operator has sufficient resources to handle the increased workload and prevent performance issues. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_ansible_automation_platform_2_on_red_hat_openshift/install_operator |
3.6. Linux bridge | 3.6. Linux bridge A Linux bridge is a software device that uses packet forwarding in a packet-switched network. Bridging allows multiple network interface devices to share the connectivity of one NIC and appear on a network as separate physical devices. The bridge examines a packet's source addresses to determine relevant target addresses. Once the target address is determined, the bridge adds the location to a table for future reference. This allows a host to redirect network traffic to virtual machine associated vNICs that are members of a bridge. Custom properties can be defined for both the bridge and the Ethernet connection. VDSM passes the network definition and custom properties to the setup network hook script. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/bridge |
Chapter 1. Overview of Networking Topics | Chapter 1. Overview of Networking Topics 1.1. Comparing IP to non-IP Networks Network is a system of interconnected devices that can communicate sharing information and resources, such as files, printers, applications, and Internet connection. Each of these devices has a unique Internet Protocol (IP) address to send and receive messages between two or more devices using a set of rules called protocol. Categories of Network Communication IP Networks Networks that communicate through Internet Protocol addresses. An IP network is implemented in the Internet and most internal networks. Ethernet, Cable Modems, DSL Modems, dial up modems, wireless networks, and VPN connections are typical examples. non-IP Networks Networks that are used to communicate through a lower layer rather than the transport layer. Note that these networks are rarely used. InfiniBand is a non-IP network, described in Chapter 13, Configure InfiniBand and RDMA Networks . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/overview_of_networking_topics |
16.4. Configuring a Domain Blacklist in Squid | 16.4. Configuring a Domain Blacklist in Squid Frequently, administrators want to block access to specific domains. This section describes how to configure a domain blacklist in Squid. Prerequisites Squid is configured, and users can use the proxy. Procedure Edit the /etc/squid/squid.conf file and add the following settings: Important Add these entries before the first http_access allow statement that allows access to users or clients. Create the /etc/squid/domain_blacklist.txt file and add the domains you want to block. For example, to block access to example.com including subdomains and to block example.net , add: Important If you referred to the /etc/squid/domain_blacklist.txt file in the squid configuration, this file must not be empty. If the file is empty, Squid fails to start. Restart the squid service: | [
"acl domain_blacklist dstdomain \"/etc/squid/domain_blacklist.txt\" http_access deny all domain_blacklist",
".example.com example.net",
"systemctl restart squid"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/configuring-a-domain-blacklist-in-squid |
9.2.5. Viewing the Transaction Log | 9.2.5. Viewing the Transaction Log PackageKit maintains a log of the transactions that it performs. To view the log, from the Add/Remove Software window, click System Software log , or run the gpk-log command at the shell prompt. The Software Log Viewer shows the following information: Date - the date on which the transaction was performed. Action - the action that was performed during the transaction, for example Updated packages or Installed packages . Details - the transaction type such as Updated , Installed , or Removed , followed by a list of affected packages. Username - the name of the user who performed the action. Application - the front end application that was used to perform the action, for example Update System . Typing the name of a package in the top text entry field filters the list of transactions to those which affected that package. Figure 9.10. Viewing the log of package management transactions with the Software Log Viewer | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-viewing_the_transaction_log |
Chapter 9. Preparing a system with UEFI Secure Boot enabled to install and boot RHEL beta releases | Chapter 9. Preparing a system with UEFI Secure Boot enabled to install and boot RHEL beta releases To enhance the security of your operating system, use the UEFI Secure Boot feature for signature verification when booting a Red Hat Enterprise Linux Beta release on systems having UEFI Secure Boot enabled. 9.1. UEFI Secure Boot and RHEL Beta releases UEFI Secure Boot requires that the operating system kernel is signed with a recognized private key. UEFI Secure Boot then verifies the signature using the corresponding public key. For Red Hat Enterprise Linux Beta releases, the kernel is signed with a Red Hat Beta-specific private key. UEFI Secure Boot attempts to verify the signature using the corresponding public key, but because the hardware does not recognize the Beta private key, Red Hat Enterprise Linux Beta release system fails to boot. Therefore, to use UEFI Secure Boot with a Beta release, add the Red Hat Beta public key to your system using the Machine Owner Key (MOK) facility. 9.2. Adding a Beta public key for UEFI Secure Boot This section contains information about how to add a Red Hat Enterprise Linux Beta public key for UEFI Secure Boot. Prerequisites The UEFI Secure Boot is disabled on the system. The Red Hat Enterprise Linux Beta release is installed, and Secure Boot is disabled even after system reboot. You are logged in to the system, and the tasks in the Initial Setup window are complete. Procedure Begin to enroll the Red Hat Beta public key in the system's Machine Owner Key (MOK) list: USD(uname -r) is replaced by the kernel version - for example, 4.18.0-80.el8.x86_64 . Enter a password when prompted. Reboot the system and press any key to continue the startup. The Shim UEFI key management utility starts during the system startup. Select Enroll MOK . Select Continue . Select Yes and enter the password. The key is imported into the system's firmware. Select Reboot . Enable Secure Boot on the system. 9.3. Removing a Beta public key If you plan to remove the Red Hat Enterprise Linux Beta release, and install a Red Hat Enterprise Linux General Availability (GA) release, or a different operating system, then remove the Beta public key. The procedure describes how to remove a Beta public key. Procedure Begin to remove the Red Hat Beta public key from the system's Machine Owner Key (MOK) list: Enter a password when prompted. Reboot the system and press any key to continue the startup. The Shim UEFI key management utility starts during the system startup. Select Reset MOK . Select Continue . Select Yes and enter the password that you had specified in step 2. The key is removed from the system's firmware. Select Reboot . | [
"mokutil --import /usr/share/doc/kernel-keys/USD(uname -r)/kernel-signing-ca.cer",
"mokutil --reset"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/booting-a-beta-system-with-uefi-secure-boot_rhel-installer |
Multicluster global hub | Multicluster global hub Red Hat Advanced Cluster Management for Kubernetes 2.11 Multicluster global hub | [
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: myregistry.example.com:5000/mirror/oc-mirror-metadata mirror: platform: channels: - name: stable-4.x type: ocp operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.x packages: - name: multicluster-global-hub-operator-rh - name: amq-streams additionalImages: [] helm: {}",
"mirror --config=./imageset-config.yaml docker://myregistry.example.com:5000",
"patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-mirror-catalog-source namespace: openshift-marketplace spec: image: myregistry.example.com:5000/mirror/my-operator-index:v4.x sourceType: grpc secrets: - <global-hub-secret>",
"-n openshift-marketplace get packagemanifests",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: labels: operators.openshift.org/catalog: \"true\" name: global-hub-operator-icsp spec: repositoryDigestMirrors: - mirrors: - myregistry.example.com:5000/multicluster-globalhub source: registry.redhat.io/multicluster-globalhub - mirrors: - myregistry.example.com:5000/openshift4 source: registry.redhat.io/openshift4 - mirrors: - myregistry.example.com:5000/redhat source: registry.redhat.io/redhat",
"export USER=<the-registry-user>",
"export PASSWORD=<the-registry-password>",
"get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' > pull_secret.yaml",
"registry login --registry=USD{REGISTRY} --auth-basic=\"USDUSER:USDPASSWORD\" --to=pull_secret.yaml",
"set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull_secret.yaml",
"rm pull_secret.yaml",
"create secret generic <secret_name> -n <tenant_namespace> --from-file=.dockerconfigjson=<path/to/registry/credentials> --type=kubernetes.io/dockerconfigjson",
"secrets link <operator_sa> -n <tenant_namespace> <secret_name> --for=pull",
"get pods -n multicluster-global-hub NAME READY STATUS RESTARTS AGE multicluster-global-hub-operator-687584cb7c-fnftj 1/1 Running 0 2m12s",
"create secret generic multicluster-global-hub-transport -n multicluster-global-hub --from-literal=bootstrap_server=<kafka-bootstrap-server-address> --from-file=ca.crt=<CA-cert-for-kafka-server> --from-file=client.crt=<Client-cert-for-kafka-server> --from-file=client.key=<Client-key-for-kafka-server>",
"create secret generic multicluster-global-hub-storage -n multicluster-global-hub --from-literal=database_uri=<postgresql-uri> --from-literal=database_uri_with_readonlyuser=<postgresql-uri-with-readonlyuser> --from-file=ca.crt=<CA-for-postgres-server>",
"get secret multicluster-global-hub-grafana-datasources -n multicluster-global-hub -ojsonpath='{.data.datasources\\.yaml}' | base64 -d",
"apiVersion: 1 datasources: - access: proxy isDefault: true name: Global-Hub-DataSource type: postgres url: postgres-primary.multicluster-global-hub.svc:5432 database: hoh user: guest jsonData: sslmode: verify-ca tlsAuth: true tlsAuthWithCACert: true tlsConfigurationMethod: file-content tlsSkipVerify: true queryTimeout: 300s timeInterval: 30s secureJsonData: password: xxxxx tlsCACert: xxxxx",
"service: type: LoadBalancer",
"get svc postgres-ha -n multicluster-global-hub NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE postgres-ha LoadBalancer 172.30.227.58 xxxx.us-east-1.elb.amazonaws.com 5432:31442/TCP 128m",
"get managedclusteraddon multicluster-global-hub-controller -n USD<managed_hub_cluster_name>",
"get route multicluster-global-hub-grafana -n <the-namespace-of-multicluster-global-hub-instance>",
"deleteRules: - orgId: 1 uid: globalhub_suspicious_policy_change - orgId: 1 uid: globalhub_cluster_compliance_status_change_frequently - orgId: 1 uid: globalhub_high_number_of_policy_events - orgId: 1 uid: globalhub_data_retention_job - orgId: 1 uid: globalhub_local_compliance_job",
"apiVersion: v1 kind: Secret metadata: name: multicluster-global-hub-custom-grafana-config namespace: multicluster-global-hub type: Opaque stringData: grafana.ini: | [smtp] enabled = true host = smtp.google.com:465 user = <[email protected]> password = <password> ;cert_file = ;key_file = skip_verify = true from_address = <[email protected]> from_name = Grafana ;ehlo_identity = dashboard.example.com 1",
"apiVersion: v1 data: alerting.yaml: | contactPoints: - orgId: 1 name: globalhub_policy receivers: - uid: globalhub_policy_alert_email type: email settings: addresses: <[email protected]> singleEmail: false - uid: globalhub_policy_alert_slack type: slack settings: url: <Slack-webhook-URL> title: | {{ template \"globalhub.policy.title\" . }} text: | {{ template \"globalhub.policy.message\" . }} policies: - orgId: 1 receiver: globalhub_policy group_by: ['grafana_folder', 'alertname'] matchers: - grafana_folder = Policy repeat_interval: 1d deleteRules: - orgId: 1 uid: [Alert Rule Uid] muteTimes: - orgId: 1 name: mti_1 time_intervals: - times: - start_time: '06:00' end_time: '23:59' location: 'UTC' weekdays: ['monday:wednesday', 'saturday', 'sunday'] months: ['1:3', 'may:august', 'december'] years: ['2020:2022', '2030'] days_of_month: ['1:5', '-3:-1'] kind: ConfigMap metadata: name: multicluster-global-hub-custom-alerting namespace: multicluster-global-hub",
"exec -it multicluster-global-hub-postgres-0 -n multicluster-global-hub -- psql -d hoh",
"-- call the func to generate the initial data of '2023-07-06' by inheriting '2023-07-05' CALL history.generate_local_compliance('2024-07-06');",
"annotate search search-v2-operator -n open-cluster-management global-search-preview=true",
"status: conditions: - lastTransitionTime: '2024-05-31T19:49:37Z' message: None reason: None status: 'True' type: GlobalSearchReady"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html-single/multicluster_global_hub/index |
Chapter 17. Enabling kdump | Chapter 17. Enabling kdump For your RHEL 8 systems, you can configure enabling or disabling the kdump functionality on a specific kernel or on all installed kernels. However, you must routinely test the kdump functionality and validate its working status. 17.1. Enabling kdump for all installed kernels The kdump service starts by enabling kdump.service after the kexec tool is installed. You can enable and start the kdump service for all kernels installed on the machine. Prerequisites You have administrator privileges. Procedure Add the crashkernel= command-line parameter to all installed kernels: xxM is the required memory in megabytes. Enable the kdump service: Verification Check that the kdump service is running: 17.2. Enabling kdump for a specific installed kernel You can enable the kdump service for a specific kernel on the machine. Prerequisites You have administrator privileges. Procedure List the kernels installed on the machine. Add a specific kdump kernel to the system's Grand Unified Bootloader (GRUB) configuration. For example: xxM is the required memory reserve in megabytes. Enable the kdump service. Verification Check that the kdump service is running. 17.3. Disabling the kdump service You can stop the kdump.service and disable the service from starting on your RHEL 8 systems. Prerequisites Fulfilled requirements for kdump configurations and targets. For details, see Supported kdump configurations and targets . All configurations for installing kdump are set up according to your needs. For details, see Installing kdump . Procedure To stop the kdump service in the current session: To disable the kdump service: Warning It is recommended to set kptr_restrict=1 as default. When kptr_restrict is set to (1) as default, the kdumpctl service loads the crash kernel regardless of whether the Kernel Address Space Layout ( KASLR ) is enabled. If kptr_restrict is not set to 1 and KASLR is enabled, the contents of /proc/kore file are generated as all zeros. The kdumpctl service fails to access the /proc/kcore file and load the crash kernel. The kexec-kdump-howto.txt file displays a warning message, which recommends you to set kptr_restrict=1 . Verify for the following in the sysctl.conf file to ensure that kdumpctl service loads the crash kernel: Kernel kptr_restrict=1 in the sysctl.conf file. Additional resources Managing systemd | [
"grubby --update-kernel=ALL --args=\"crashkernel=xxM\"",
"systemctl enable --now kdump.service",
"systemctl status kdump.service ○ kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: disabled) Active: active (live)",
"ls -a /boot/vmlinuz- * /boot/vmlinuz-0-rescue-2930657cd0dc43c2b75db480e5e5b4a9 /boot/vmlinuz-4.18.0-330.el8.x86_64 /boot/vmlinuz-4.18.0-330.rt7.111.el8.x86_64",
"grubby --update-kernel= vmlinuz-4.18.0-330.el8.x86_64 --args=\"crashkernel= xxM \"",
"systemctl enable --now kdump.service",
"systemctl status kdump.service ○ kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: disabled) Active: active (live)",
"systemctl stop kdump.service",
"systemctl disable kdump.service"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/enabling-kdumpmanaging-monitoring-and-updating-the-kernel |
Chapter 11. Managing bare-metal hosts | Chapter 11. Managing bare-metal hosts When you install OpenShift Container Platform on a bare-metal cluster, you can provision and manage bare-metal nodes by using machine and machineset custom resources (CRs) for bare-metal hosts that exist in the cluster. 11.1. About bare metal hosts and nodes To provision a Red Hat Enterprise Linux CoreOS (RHCOS) bare metal host as a node in your cluster, first create a MachineSet custom resource (CR) object that corresponds to the bare metal host hardware. Bare metal host compute machine sets describe infrastructure components specific to your configuration. You apply specific Kubernetes labels to these compute machine sets and then update the infrastructure components to run on only those machines. Machine CR's are created automatically when you scale up the relevant MachineSet containing a metal3.io/autoscale-to-hosts annotation. OpenShift Container Platform uses Machine CR's to provision the bare metal node that corresponds to the host as specified in the MachineSet CR. 11.2. Maintaining bare metal hosts You can maintain the details of the bare metal hosts in your cluster from the OpenShift Container Platform web console. Navigate to Compute Bare Metal Hosts , and select a task from the Actions drop down menu. Here you can manage items such as BMC details, boot MAC address for the host, enable power management, and so on. You can also review the details of the network interfaces and drives for the host. You can move a bare metal host into maintenance mode. When you move a host into maintenance mode, the scheduler moves all managed workloads off the corresponding bare metal node. No new workloads are scheduled while in maintenance mode. You can deprovision a bare metal host in the web console. Deprovisioning a host does the following actions: Annotates the bare metal host CR with cluster.k8s.io/delete-machine: true Scales down the related compute machine set Note Powering off the host without first moving the daemon set and unmanaged static pods to another node can cause service disruption and loss of data. Additional resources Adding compute machines to bare metal 11.2.1. Adding a bare metal host to the cluster using the web console You can add bare metal hosts to the cluster in the web console. Prerequisites Install an RHCOS cluster on bare metal. Log in as a user with cluster-admin privileges. Procedure In the web console, navigate to Compute Bare Metal Hosts . Select Add Host New with Dialog . Specify a unique name for the new bare metal host. Set the Boot MAC address . Set the Baseboard Management Console (BMC) Address . Enter the user credentials for the host's baseboard management controller (BMC). Select to power on the host after creation, and select Create . Scale up the number of replicas to match the number of available bare metal hosts. Navigate to Compute MachineSets , and increase the number of machine replicas in the cluster by selecting Edit Machine count from the Actions drop-down menu. Note You can also manage the number of bare metal nodes using the oc scale command and the appropriate bare metal compute machine set. 11.2.2. Adding a bare metal host to the cluster using YAML in the web console You can add bare metal hosts to the cluster in the web console using a YAML file that describes the bare metal host. Prerequisites Install a RHCOS compute machine on bare metal infrastructure for use in the cluster. Log in as a user with cluster-admin privileges. Create a Secret CR for the bare metal host. Procedure In the web console, navigate to Compute Bare Metal Hosts . Select Add Host New from YAML . Copy and paste the below YAML, modifying the relevant fields with the details of your host: apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bare_metal_host_name> spec: online: true bmc: address: <bmc_address> credentialsName: <secret_credentials_name> 1 disableCertificateVerification: True 2 bootMACAddress: <host_boot_mac_address> 1 credentialsName must reference a valid Secret CR. The baremetal-operator cannot manage the bare metal host without a valid Secret referenced in the credentialsName . For more information about secrets and how to create them, see Understanding secrets . 2 Setting disableCertificateVerification to true disables TLS host validation between the cluster and the baseboard management controller (BMC). Select Create to save the YAML and create the new bare metal host. Scale up the number of replicas to match the number of available bare metal hosts. Navigate to Compute MachineSets , and increase the number of machines in the cluster by selecting Edit Machine count from the Actions drop-down menu. Note You can also manage the number of bare metal nodes using the oc scale command and the appropriate bare metal compute machine set. 11.2.3. Automatically scaling machines to the number of available bare metal hosts To automatically create the number of Machine objects that matches the number of available BareMetalHost objects, add a metal3.io/autoscale-to-hosts annotation to the MachineSet object. Prerequisites Install RHCOS bare metal compute machines for use in the cluster, and create corresponding BareMetalHost objects. Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Annotate the compute machine set that you want to configure for automatic scaling by adding the metal3.io/autoscale-to-hosts annotation. Replace <machineset> with the name of the compute machine set. USD oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>' Wait for the new scaled machines to start. Note When you use a BareMetalHost object to create a machine in the cluster and labels or selectors are subsequently changed on the BareMetalHost , the BareMetalHost object continues be counted against the MachineSet that the Machine object was created from. 11.2.4. Removing bare metal hosts from the provisioner node In certain circumstances, you might want to temporarily remove bare metal hosts from the provisioner node. For example, during provisioning when a bare metal host reboot is triggered by using the OpenShift Container Platform administration console or as a result of a Machine Config Pool update, OpenShift Container Platform logs into the integrated Dell Remote Access Controller (iDrac) and issues a delete of the job queue. To prevent the management of the number of Machine objects that matches the number of available BareMetalHost objects, add a baremetalhost.metal3.io/detached annotation to the MachineSet object. Note This annotation has an effect for only BareMetalHost objects that are in either Provisioned , ExternallyProvisioned or Ready/Available state. Prerequisites Install RHCOS bare metal compute machines for use in the cluster and create corresponding BareMetalHost objects. Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Annotate the compute machine set that you want to remove from the provisioner node by adding the baremetalhost.metal3.io/detached annotation. USD oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached' Wait for the new machines to start. Note When you use a BareMetalHost object to create a machine in the cluster and labels or selectors are subsequently changed on the BareMetalHost , the BareMetalHost object continues be counted against the MachineSet that the Machine object was created from. In the provisioning use case, remove the annotation after the reboot is complete by using the following command: USD oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached-' Additional resources Expanding the cluster MachineHealthChecks on bare metal 11.2.5. Powering off bare-metal hosts You can power off bare-metal cluster hosts in the web console or by applying a patch in the cluster by using the OpenShift CLI ( oc ). Before you power off a host, you should mark the node as unschedulable and drain all pods and workloads from the node. Prerequisites You have installed a RHCOS compute machine on bare-metal infrastructure for use in the cluster. You have logged in as a user with cluster-admin privileges. You have configured the host to be managed and have added BMC credentials for the cluster host. You can add BMC credentials by applying a Secret custom resource (CR) in the cluster or by logging in to the web console and configuring the bare-metal host to be managed. Procedure In the web console, mark the node that you want to power off as unschedulable. Perform the following steps: Navigate to Nodes and select the node that you want to power off. Expand the Actions menu and select Mark as unschedulable . Manually delete or relocate running pods on the node by adjusting the pod deployments or scaling down workloads on the node to zero. Wait for the drain process to complete. Navigate to Compute Bare Metal Hosts . Expand the Options menu for the bare-metal host that you want to power off, and select Power Off . Select Immediate power off . Alternatively, you can patch the BareMetalHost resource for the host that you want to power off by using oc . Get the name of the managed bare-metal host. Run the following command: USD oc get baremetalhosts -n openshift-machine-api -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.provisioning.state}{"\n"}{end}' Example output master-0.example.com managed master-1.example.com managed master-2.example.com managed worker-0.example.com managed worker-1.example.com managed worker-2.example.com managed Mark the node as unschedulable: USD oc adm cordon <bare_metal_host> 1 1 <bare_metal_host> is the host that you want to shut down, for example, worker-2.example.com . Drain all pods on the node: USD oc adm drain <bare_metal_host> --force=true Pods that are backed by replication controllers are rescheduled to other available nodes in the cluster. Safely power off the bare-metal host. Run the following command: USD oc patch <bare_metal_host> --type json -p '[{"op": "replace", "path": "/spec/online", "value": false}]' After you power on the host, make the node schedulable for workloads. Run the following command: USD oc adm uncordon <bare_metal_host> | [
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bare_metal_host_name> spec: online: true bmc: address: <bmc_address> credentialsName: <secret_credentials_name> 1 disableCertificateVerification: True 2 bootMACAddress: <host_boot_mac_address>",
"oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>'",
"oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached'",
"oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached-'",
"oc get baremetalhosts -n openshift-machine-api -o jsonpath='{range .items[*]}{.metadata.name}{\"\\t\"}{.status.provisioning.state}{\"\\n\"}{end}'",
"master-0.example.com managed master-1.example.com managed master-2.example.com managed worker-0.example.com managed worker-1.example.com managed worker-2.example.com managed",
"oc adm cordon <bare_metal_host> 1",
"oc adm drain <bare_metal_host> --force=true",
"oc patch <bare_metal_host> --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/online\", \"value\": false}]'",
"oc adm uncordon <bare_metal_host>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/scalability_and_performance/managing-bare-metal-hosts |
Chapter 1. Introduction | Chapter 1. Introduction Red Hat Ceph Storage is a massively scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services. The Red Hat Ceph Storage documentation is available at https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/6 . | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/6.1_release_notes/introduction |
Distributed tracing | Distributed tracing OpenShift Container Platform 4.13 Configuring and using distributed tracing in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-tempo-operator openshift.io/cluster-monitoring: \"true\" name: openshift-tempo-operator EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-tempo-operator namespace: openshift-tempo-operator spec: upgradeStrategy: Default EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tempo-product namespace: openshift-tempo-operator spec: channel: stable installPlanApproval: Automatic name: tempo-product source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get csv -n openshift-tempo-operator",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempostack_instance> EOF",
"oc apply -f - << EOF <object_storage_secret> EOF",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route",
"oc apply -f - << EOF <tempostack_cr> EOF",
"oc get tempostacks.tempo.grafana.com simplest -o yaml",
"oc get pods",
"oc get route",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempomonolithic_instance> EOF",
"oc apply -f - << EOF <object_storage_secret> EOF",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m",
"oc apply -f - << EOF <tempomonolithic_cr> EOF",
"oc get tempomonolithic.tempo.grafana.com <metadata_name_of_tempomonolithic_cr> -o yaml",
"oc get pods",
"oc get route",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{<aws_account_id>}:oidc-provider/USD{<oidc_provider>}\" 1 }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}\" 2 \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}-query-frontend\" ] } } } ] }",
"aws iam create-role --role-name \"tempo-s3-access\" --assume-role-policy-document \"file:///tmp/trust.json\" --query Role.Arn --output text",
"aws iam attach-role-policy --role-name \"tempo-s3-access\" --policy-arn \"arn:aws:iam::aws:policy/AmazonS3FullAccess\"",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: bucket: <s3_bucket_name> region: <s3_region> role_arn: <s3_role_arn> type: Opaque",
"ibmcloud resource service-key-create <tempo_bucket> Writer --instance-name <tempo_bucket> --parameters '{\"HMAC\":true}'",
"oc -n <namespace> create secret generic <ibm_cos_secret> --from-literal=bucket=\"<tempo_bucket>\" --from-literal=endpoint=\"<ibm_bucket_endpoint>\" --from-literal=access_key_id=\"<ibm_bucket_access_key>\" --from-literal=access_key_secret=\"<ibm_bucket_secret_key>\"",
"apiVersion: v1 kind: Secret metadata: name: <ibm_cos_secret> stringData: bucket: <tempo_bucket> endpoint: <ibm_bucket_endpoint> access_key_id: <ibm_bucket_access_key> access_key_secret: <ibm_bucket_secret_key> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: storage: secret: name: <ibm_cos_secret> 1 type: s3",
"apiVersion: tempo.grafana.com/v1alpha1 1 kind: TempoStack 2 metadata: 3 name: <name> 4 spec: 5 storage: {} 6 resources: {} 7 replicationFactor: 1 8 retention: {} 9 template: distributor: {} 10 ingester: {} 11 compactor: {} 12 querier: {} 13 queryFrontend: {} 14 gateway: {} 15 limits: 16 global: ingestion: {} 17 query: {} 18 observability: 19 grafana: {} metrics: {} tracing: {} search: {} 20 managementState: managed 21",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: storage: secret: name: minio type: s3 storageSize: 200M resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route",
"kind: OpenTelemetryCollector apiVersion: opentelemetry.io/v1alpha1 metadata: name: otel spec: mode: deployment observability: metrics: enableMetrics: true 1 config: | connectors: spanmetrics: 2 metrics_flush_interval: 15s receivers: otlp: 3 protocols: grpc: http: exporters: prometheus: 4 endpoint: 0.0.0.0:8889 add_metric_suffixes: false resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped otlp: endpoint: \"tempo-simplest-distributor:4317\" tls: insecure: true service: pipelines: traces: receivers: [otlp] exporters: [otlp, spanmetrics] 5 metrics: receivers: [spanmetrics] 6 exporters: [prometheus]",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: redmetrics spec: storage: secret: name: minio-test type: s3 storageSize: 1Gi template: gateway: enabled: false queryFrontend: jaegerQuery: enabled: true monitorTab: enabled: true 1 prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 2 redMetricsNamespace: \"\" 3 ingress: type: route",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: span-red spec: groups: - name: server-side-latency rules: - alert: SpanREDFrontendAPIRequestLatency expr: histogram_quantile(0.95, sum(rate(duration_bucket{service_name=\"frontend\", span_kind=\"SPAN_KIND_SERVER\"}[5m])) by (le, service_name, span_name)) > 2000 1 labels: severity: Warning annotations: summary: \"High request latency on {{USDlabels.service_name}} and {{USDlabels.span_name}}\" description: \"{{USDlabels.instance}} has 95th request latency above 2s (current value: {{USDvalue}}s)\"",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true http: tls: enabled: true 1",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: chainsaw-multitenancy spec: storage: secret: name: minio type: s3 storageSize: 1Gi resources: total: limits: memory: 2Gi cpu: 2000m tenants: mode: openshift 1 authentication: 2 - tenantName: dev 3 tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfa\" 4 - tenantName: prod tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfb\" template: gateway: enabled: true 5 queryFrontend: jaegerQuery: enabled: true",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-reader rules: - apiGroups: - 'tempo.grafana.com' resources: 1 - dev - prod resourceNames: - traces verbs: - 'get' 2 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-reader subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: system:authenticated 3",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector 1 namespace: otel --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-write rules: - apiGroups: - 'tempo.grafana.com' resources: 2 - dev resourceNames: - traces verbs: - 'create' 3 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-write subjects: - kind: ServiceAccount name: otel-collector namespace: otel",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment serviceAccount: otel-collector config: | extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" exporters: otlp/dev: 1 endpoint: tempo-simplest-gateway.tempo.svc.cluster.local:8090 tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" otlphttp/dev: 2 endpoint: https://tempo-simplest-gateway.chainsaw-multitenancy.svc.cluster.local:8080/api/traces/v1/dev tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" service: extensions: [bearertokenauth] pipelines: traces: exporters: [otlp/dev] 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createServiceMonitors: true",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createPrometheusRules: true",
"oc adm must-gather --image=ghcr.io/grafana/tempo-operator/must-gather -- /usr/bin/must-gather --operator-namespace <operator_namespace> 1",
"oc login --username=<your_username>",
"oc get deployments -n <project_of_tempostack_instance>",
"oc delete tempo <tempostack_instance_name> -n <project_of_tempostack_instance>",
"oc get deployments -n <project_of_tempostack_instance>",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: MyConfigFile spec: strategy: production 1",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"oc create -n tracing-system -f jaeger.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE jaeger-all-in-one-inmemory-cdff7897b-qhfdx 2/2 Running 0 24s",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-production namespace: spec: strategy: production ingress: security: oauth-proxy storage: type: elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: SingleRedundancy esIndexCleaner: enabled: true numberOfDays: 7 schedule: 55 23 * * * esRollover: schedule: '*/30 * * * *'",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"oc create -n tracing-system -f jaeger-production.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerproduction-1-6676cf568gwhlw 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-2-bcd4c8bf5l6g6w 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-3-844d6d9694hhst 2/2 Running 0 10m jaeger-production-collector-94cd847d-jwjlj 1/1 Running 3 8m32s jaeger-production-query-5cbfbd499d-tv8zf 3/3 Running 3 8m32s",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 1 storage: type: elasticsearch ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"oc create -n tracing-system -f jaeger-streaming.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerstreaming-1-697b66d6fcztcnn 2/2 Running 0 5m40s elasticsearch-cdm-jaegersystemjaegerstreaming-2-5f4b95c78b9gckz 2/2 Running 0 5m37s elasticsearch-cdm-jaegersystemjaegerstreaming-3-7b6d964576nnz97 2/2 Running 0 5m5s jaeger-streaming-collector-6f6db7f99f-rtcfm 1/1 Running 0 80s jaeger-streaming-entity-operator-6b6d67cc99-4lm9q 3/3 Running 2 2m18s jaeger-streaming-ingester-7d479847f8-5h8kc 1/1 Running 0 80s jaeger-streaming-kafka-0 2/2 Running 0 3m1s jaeger-streaming-query-65bf5bb854-ncnc7 3/3 Running 0 80s jaeger-streaming-zookeeper-0 2/2 Running 0 3m39s",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}')",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"collector: replicas:",
"spec: collector: options: {}",
"options: collector: num-workers:",
"options: collector: queue-size:",
"options: kafka: producer: topic: jaeger-spans",
"options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092",
"options: log-level:",
"options: otlp: enabled: true grpc: host-port: 4317 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"options: otlp: enabled: true http: cors: allowed-headers: [<header-name>[, <header-name>]*] allowed-origins: * host-port: 4318 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 read-timeout: 0s read-header-timeout: 2s idle-timeout: 0s tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"spec: sampling: options: {} default_strategy: service_strategy:",
"default_strategy: type: service_strategy: type:",
"default_strategy: param: service_strategy: param:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5",
"spec: sampling: options: default_strategy: type: probabilistic param: 1",
"spec: storage: type:",
"storage: secretname:",
"storage: options: {}",
"storage: esIndexCleaner: enabled:",
"storage: esIndexCleaner: numberOfDays:",
"storage: esIndexCleaner: schedule:",
"elasticsearch: properties: doNotProvision:",
"elasticsearch: properties: name:",
"elasticsearch: nodeCount:",
"elasticsearch: resources: requests: cpu:",
"elasticsearch: resources: requests: memory:",
"elasticsearch: resources: limits: cpu:",
"elasticsearch: resources: limits: memory:",
"elasticsearch: redundancyPolicy:",
"elasticsearch: useCertManagement:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy",
"es: server-urls:",
"es: max-doc-count:",
"es: max-num-spans:",
"es: max-span-age:",
"es: sniffer:",
"es: sniffer-tls-enabled:",
"es: timeout:",
"es: username:",
"es: password:",
"es: version:",
"es: num-replicas:",
"es: num-shards:",
"es: create-index-templates:",
"es: index-prefix:",
"es: bulk: actions:",
"es: bulk: flush-interval:",
"es: bulk: size:",
"es: bulk: workers:",
"es: tls: ca:",
"es: tls: cert:",
"es: tls: enabled:",
"es: tls: key:",
"es: tls: server-name:",
"es: token-file:",
"es-archive: bulk: actions:",
"es-archive: bulk: flush-interval:",
"es-archive: bulk: size:",
"es-archive: bulk: workers:",
"es-archive: create-index-templates:",
"es-archive: enabled:",
"es-archive: index-prefix:",
"es-archive: max-doc-count:",
"es-archive: max-num-spans:",
"es-archive: max-span-age:",
"es-archive: num-replicas:",
"es-archive: num-shards:",
"es-archive: password:",
"es-archive: server-urls:",
"es-archive: sniffer:",
"es-archive: sniffer-tls-enabled:",
"es-archive: timeout:",
"es-archive: tls: ca:",
"es-archive: tls: cert:",
"es-archive: tls: enabled:",
"es-archive: tls: key:",
"es-archive: tls: server-name:",
"es-archive: token-file:",
"es-archive: username:",
"es-archive: version:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true",
"spec: query: replicas:",
"spec: query: options: {}",
"options: log-level:",
"options: query: base-path:",
"apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger",
"spec: ingester: options: {}",
"options: deadlockInterval:",
"options: kafka: consumer: topic:",
"options: kafka: consumer: brokers:",
"options: log-level:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200",
"apiVersion: apps/v1 kind: Deployment metadata: name: myapp annotations: \"sidecar.jaegertracing.io/inject\": \"true\" 1 spec: selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: acme/myapp:myversion",
"apiVersion: apps/v1 kind: StatefulSet metadata: name: example-statefulset namespace: example-ns labels: app: example-app spec: spec: containers: - name: example-app image: acme/myapp:myversion ports: - containerPort: 8080 protocol: TCP - name: jaeger-agent image: registry.redhat.io/distributed-tracing/jaeger-agent-rhel7:<version> # The agent version must match the Operator version imagePullPolicy: IfNotPresent ports: - containerPort: 5775 name: zk-compact-trft protocol: UDP - containerPort: 5778 name: config-rest protocol: TCP - containerPort: 6831 name: jg-compact-trft protocol: UDP - containerPort: 6832 name: jg-binary-trft protocol: UDP - containerPort: 14271 name: admin-http protocol: TCP args: - --reporter.grpc.host-port=dns:///jaeger-collector-headless.example-ns:14250 - --reporter.type=grpc",
"oc login --username=<your_username>",
"oc login --username=<NAMEOFUSER>",
"oc get deployments -n <jaeger-project>",
"oc get deployments -n openshift-operators",
"oc get deployments -n openshift-operators",
"NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 93m jaeger-operator 1/1 1 1 49m jaeger-test 1/1 1 1 7m23s jaeger-test2 1/1 1 1 6m48s tracing1 1/1 1 1 7m8s tracing2 1/1 1 1 35m",
"oc delete jaeger <deployment-name> -n <jaeger-project>",
"oc delete jaeger tracing2 -n openshift-operators",
"oc get deployments -n <jaeger-project>",
"oc get deployments -n openshift-operators",
"NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 94m jaeger-operator 1/1 1 1 50m jaeger-test 1/1 1 1 8m14s jaeger-test2 1/1 1 1 7m39s tracing1 1/1 1 1 7m59s"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/distributed_tracing/index |
Chapter 1. Supported conversion paths | Chapter 1. Supported conversion paths Important Red Hat recommends that you seek the support of Red Hat Consulting services to ensure that the conversion process is smooth. Currently, it is possible to convert your systems from the following Linux distributions and versions to the corresponding minor version of RHEL listed in Table 1.1. Table 1.1. Supported conversion paths Source OS Source version Target OS and version Product Variant Available Conversion Methods Alma Linux 9.5 RHEL 9.5 N/A Command line, Satellite 8.10 RHEL 8.10 N/A Command line, Satellite 8.8 RHEL 8.8 EUS N/A Command line, Satellite CentOS Linux 8.5 RHEL 8.5 N/A Command line, Satellite 7.9 RHEL 7.9 Server Command line, Satellite, Red Hat Insights Oracle Linux 9.5 RHEL 9.5 N/A Command line, Satellite 8.10 RHEL 8.10 N/A Command line, Satellite 7.9 RHEL 7.9 Server Command line, Satellite Rocky Linux 9.5 RHEL 9.5 N/A Command line, Satellite 8.10 RHEL 8.10 N/A Command line, Satellite 8.8 RHEL 8.8 EUS N/A Command line, Satellite Because the last available minor version of CentOS Linux is CentOS Linux 8.5, it is not possible to convert from CentOS Linux 8 directly to the latest available minor version of RHEL 8. It is recommended to update your system to the latest version of RHEL after the conversion. RHEL 7 reaches the end of the Maintenance Support Phase on June 30, 2024. If you are converting to RHEL 7 and plan to stay on RHEL 7, it is strongly recommended to purchase the Extended Life Cycle Support (ELS) add-on subscription. If you plan to convert to RHEL 7 and then immediately upgrade to RHEL 8 or later, an ELS subscription is not needed. Note that without ELS, you have limited support for RHEL 7, including for the upgrade from RHEL 7 to RHEL 8. For more information, see the Red Hat Enterprise Linux Life Cycle and the Convert2RHEL Support Policy . In addition to the above supported conversion paths, it is also possible to perform an unsupported conversion from Scientific Linux 7, CentOS Stream 8, and CentOS Stream 9 to RHEL. For information about unsupported conversions, see How to perform an unsupported conversion from a RHEL-derived Linux distribution to RHEL . For information about Red Hat's support policy for Linux distribution conversions, see Convert2RHEL Support Policy . | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/converting_from_a_linux_distribution_to_rhel_using_the_convert2rhel_utility_in_red_hat_insights/con_supported-conversion-paths_converting-from-a-linux-distribution-to-rhel-in-insights |
Chapter 8. Network File System (NFS) | Chapter 8. Network File System (NFS) A Network File System ( NFS ) allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network. This chapter focuses on fundamental NFS concepts and supplemental information. 8.1. Introduction to NFS Currently, there are two major versions of NFS included in Red Hat Enterprise Linux: NFS version 3 (NFSv3) supports safe asynchronous writes and is more robust at error handling than the NFSv2. It also supports 64-bit file sizes and offsets, allowing clients to access more than 2 GB of file data. NFS version 4 (NFSv4) works through firewalls and on the Internet, no longer requires an rpcbind service, supports ACLs, and utilizes stateful operations. Red Hat Enterprise Linux fully supports NFS version 4.2 (NFSv4.2) since the Red Hat Enterprise Linux 7.4 release. Following are the features of NFSv4.2 in Red Hat Enterprise Linux : Sparse Files: It verifies space efficiency of a file and allows placeholder to improve storage efficiency. It is a file having one or more holes; holes are unallocated or uninitialized data blocks consisting only of zeroes. lseek() operation in NFSv4.2, supports seek_hole() and seek_data() , which allows application to map out the location of holes in the sparse file. Space Reservation: It permits storage servers to reserve free space, which prohibits servers to run out of space. NFSv4.2 supports allocate() operation to reserve space, deallocate() operation to unreserve space, and fallocate() operation to preallocate or deallocate space in a file. Labeled NFS: It enforces data access rights and enables SELinux labels between a client and a server for individual files on an NFS file system. Layout Enhancements: NFSv4.2 provides new operation, layoutstats() , which the client can use to notify the metadata server about its communication with the layout. Versions of Red Hat Enterprise Linux earlier than 7.4 support NFS up to version 4.1. Following are the features of NFSv4.1: Enhances performance and security of network, and also includes client-side support for Parallel NFS (pNFS). No longer requires a separate TCP connection for callbacks, which allows an NFS server to grant delegations even when it cannot contact the client. For example, when NAT or a firewall interferes. It provides exactly once semantics (except for reboot operations), preventing a issue whereby certain operations could return an inaccurate result if a reply was lost and the operation was sent twice. NFS clients attempt to mount using NFSv4.1 by default, and fall back to NFSv4.0 when the server does not support NFSv4.1. The mount later fall back to NFSv3 when server does not support NFSv4.0. Note NFS version 2 (NFSv2) is no longer supported by Red Hat. All versions of NFS can use Transmission Control Protocol ( TCP ) running over an IP network, with NFSv4 requiring it. NFSv3 can use the User Datagram Protocol (UDP) running over an IP network to provide a stateless network connection between the client and server. When using NFSv3 with UDP, the stateless UDP connection (under normal conditions) has less protocol overhead than TCP. This can translate into better performance on very clean, non-congested networks. However, because UDP is stateless, if the server goes down unexpectedly, UDP clients continue to saturate the network with requests for the server. In addition, when a frame is lost with UDP, the entire RPC request must be retransmitted; with TCP, only the lost frame needs to be resent. For these reasons, TCP is the preferred protocol when connecting to an NFS server. The mounting and locking protocols have been incorporated into the NFSv4 protocol. The server also listens on the well-known TCP port 2049. As such, NFSv4 does not need to interact with rpcbind [1] , lockd , and rpc.statd daemons. The rpc.mountd daemon is still required on the NFS server to set up the exports, but is not involved in any over-the-wire operations. Note TCP is the default transport protocol for NFS version 3 under Red Hat Enterprise Linux. UDP can be used for compatibility purposes as needed, but is not recommended for wide usage. NFSv4 requires TCP. All the RPC/NFS daemons have a '-p' command line option that can set the port, making firewall configuration easier. After TCP wrappers grant access to the client, the NFS server refers to the /etc/exports configuration file to determine whether the client is allowed to access any exported file systems. Once verified, all file and directory operations are available to the user. Important In order for NFS to work with a default installation of Red Hat Enterprise Linux with a firewall enabled, configure IPTables with the default TCP port 2049. Without proper IPTables configuration, NFS will not function properly. The NFS initialization script and rpc.nfsd process now allow binding to any specified port during system start up. However, this can be error-prone if the port is unavailable, or if it conflicts with another daemon. 8.1.1. Required Services Red Hat Enterprise Linux uses a combination of kernel-level support and daemon processes to provide NFS file sharing. All NFS versions rely on Remote Procedure Calls ( RPC ) between clients and servers. RPC services under Red Hat Enterprise Linux 7 are controlled by the rpcbind service. To share or mount NFS file systems, the following services work together depending on which version of NFS is implemented: Note The portmap service was used to map RPC program numbers to IP address port number combinations in earlier versions of Red Hat Enterprise Linux. This service is now replaced by rpcbind in Red Hat Enterprise Linux 7 to enable IPv6 support. nfs systemctl start nfs starts the NFS server and the appropriate RPC processes to service requests for shared NFS file systems. nfslock systemctl start nfs-lock activates a mandatory service that starts the appropriate RPC processes allowing NFS clients to lock files on the server. rpcbind rpcbind accepts port reservations from local RPC services. These ports are then made available (or advertised) so the corresponding remote RPC services can access them. rpcbind responds to requests for RPC services and sets up connections to the requested RPC service. This is not used with NFSv4. The following RPC processes facilitate NFS services: rpc.mountd This process is used by an NFS server to process MOUNT requests from NFSv3 clients. It checks that the requested NFS share is currently exported by the NFS server, and that the client is allowed to access it. If the mount request is allowed, the rpc.mountd server replies with a Success status and provides the File-Handle for this NFS share back to the NFS client. rpc.nfsd rpc.nfsd allows explicit NFS versions and protocols the server advertises to be defined. It works with the Linux kernel to meet the dynamic demands of NFS clients, such as providing server threads each time an NFS client connects. This process corresponds to the nfs service. lockd lockd is a kernel thread which runs on both clients and servers. It implements the Network Lock Manager (NLM) protocol, which allows NFSv3 clients to lock files on the server. It is started automatically whenever the NFS server is run and whenever an NFS file system is mounted. rpc.statd This process implements the Network Status Monitor (NSM) RPC protocol, which notifies NFS clients when an NFS server is restarted without being gracefully brought down. rpc.statd is started automatically by the nfslock service, and does not require user configuration. This is not used with NFSv4. rpc.rquotad This process provides user quota information for remote users. rpc.rquotad is started automatically by the nfs service and does not require user configuration. rpc.idmapd rpc.idmapd provides NFSv4 client and server upcalls, which map between on-the-wire NFSv4 names (strings in the form of user @ domain ) and local UIDs and GIDs. For idmapd to function with NFSv4, the /etc/idmapd.conf file must be configured. At a minimum, the "Domain" parameter should be specified, which defines the NFSv4 mapping domain. If the NFSv4 mapping domain is the same as the DNS domain name, this parameter can be skipped. The client and server must agree on the NFSv4 mapping domain for ID mapping to function properly. Note In Red Hat Enterprise Linux 7, only the NFSv4 server uses rpc.idmapd . The NFSv4 client uses the keyring-based idmapper nfsidmap . nfsidmap is a stand-alone program that is called by the kernel on-demand to perform ID mapping; it is not a daemon. If there is a problem with nfsidmap does the client fall back to using rpc.idmapd . More information regarding nfsidmap can be found on the nfsidmap man page. [1] The rpcbind service replaces portmap , which was used in versions of Red Hat Enterprise Linux to map RPC program numbers to IP address port number combinations. For more information, refer to Section 8.1.1, "Required Services" . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ch-nfs |
3.13. Considerations for ricci | 3.13. Considerations for ricci For Red Hat Enterprise Linux 6, ricci replaces ccsd . Therefore, it is necessary that ricci is running in each cluster node to be able to propagate updated cluster configuration whether it is by means of the cman_tool version -r command, the ccs command, or the luci user interface server. You can start ricci by using service ricci start or by enabling it to start at boot time by means of chkconfig . For information on enabling IP ports for ricci , see Section 3.3.1, "Enabling IP Ports on Cluster Nodes" . For the Red Hat Enterprise Linux 6.1 release and later, using ricci requires a password the first time you propagate updated cluster configuration from any particular node. You set the ricci password as root after you install ricci on your system. To set this password, execute the passwd ricci command, for user ricci . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-ricci-considerations-ca |
Chapter 8. Basic network isolation | Chapter 8. Basic network isolation This chapter shows you how to configure the overcloud with the standard network isolation configuration. This includes the following configurations: The rendered environment file to enable network isolation ( /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml ). A copied environment file to configure network defaults ( /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml ). A network_data file to define network settings such as IP ranges, subnets, and virtual IPs. This example shows you how to create a copy of the default and edit it to suit your own network. Templates to define your NIC layout for each node. The overcloud core template collection contains a set of defaults for different use cases. An environment file to enable NICs. This example uses a default file located in the environments directory. Any additional environment files to customize your networking parameters. The following content in this chapter shows how to define each of these aspects. 8.1. Network isolation The overcloud assigns services to the provisioning network by default. However, the director can divide overcloud network traffic into isolated networks. To use isolated networks, the overcloud contains an environment file that enables this feature. The environments/network-isolation.j2.yaml file in the director's core Heat templates is a Jinja2 file that defines all ports and VIPs for each network in your composable network file. When rendered, it results in a network-isolation.yaml file in the same location with the full resource registry. For example: The first section of this file has the resource registry declaration for the OS::TripleO::Network::* resources. By default, these resources use the OS::Heat::None resource type, which does not create any networks. By redirecting these resources to the YAML files for each network, you enable the creation of these networks. The several sections create the IP addresses for the nodes in each role. The controller nodes have IPs on each network. The compute and storage nodes each have IPs on a subset of the networks. Other functions of overcloud networking, such as Chapter 9, Custom composable networks and Chapter 10, Custom network interface templates rely on this network isolation environment file. As a result, you need to include the name of the rendered file with your deployment commands. For example: 8.2. Modifying isolated network configuration The network_data file provides a method to configure the default isolated networks. This procedure shows how to create a custom network_data file and configure it according to your network requirements. Procedure Copy the default network_data file: Edit the local copy of the network_data.yaml file and modify the parameters to suit your networking requirements. For example, the Internal API network contains the following default network details: Edit the following for each network: vlan defines the VLAN ID to use for this network. ip_subnet and ip_allocation_pools set the default subnet and IP range for the network.. gateway sets the gateway for the network. Used mostly to define the default route for the External network, but can be used for other networks if necessary. Include the custom network_data file with your deployment using the -n option. Without the -n option, the deployment command uses the default network details. 8.3. Network Interface Templates The overcloud network configuration requires a set of the network interface templates. These templates are standard Heat templates in YAML format. Each role requires a NIC template so the director can configure each node within that role correctly. All NIC templates contain the same sections as standard Heat templates: heat_template_version The syntax version to use. description A string description of the template. parameters Network parameters to include in the template. resources Takes parameters defined in parameters and applies them to a network configuration script. outputs Renders the final script used for configuration. The default NIC templates in /usr/share/openstack-tripleo-heat-templates/network/config take advantage of Jinja2 syntax to help render the template. For example, the following snippet from the single-nic-vlans configuration renders a set of VLANs for each network: For default Compute nodes, this only renders network information for the Storage, Internal API, and Tenant networks: Chapter 10, Custom network interface templates explores how to render the default Jinja2-based templates to standard YAML versions, which you can use as a basis for customization. 8.4. Default network interface templates The director contains templates in /usr/share/openstack-tripleo-heat-templates/network/config/ to suit most common network scenarios. The following table outlines each NIC template set and the respective environment file to use to enable the templates. Note Each environment file for enabling NIC templates uses the suffix .j2.yaml . This is the unrendered Jinja2 version. Ensure that you include the rendered file name, which only uses the .yaml suffix, in your deployment. NIC directory Description Environment file single-nic-vlans Single NIC ( nic1 ) with control plane and VLANs attached to default Open vSwitch bridge. environments/net-single-nic-with-vlans.j2.yaml single-nic-linux-bridge-vlans Single NIC ( nic1 ) with control plane and VLANs attached to default Linux bridge. environments/net-single-nic-linux-bridge-with-vlans bond-with-vlans Control plane attached to nic1 . Default Open vSwitch bridge with bonded NIC configuration ( nic2 and nic3 ) and VLANs attached. environments/net-bond-with-vlans.yaml multiple-nics Control plane attached to nic1 . Assigns each sequential NIC to each network defined in the network_data file. By default, this is Storage to nic2 , Storage Management to nic3 , Internal API to nic4 , Tenant to nic5 on the br-tenant bridge, and External to nic6 on the default Open vSwitch bridge. environments/net-multiple-nics.yaml Note Environment files exist for using no external network, for example, net-bond-with-vlans-no-external.yaml , and using IPv6, for example, net-bond-with-vlans-v6.yaml . These are provided for backwards compatibility and do not function with composable networks. Each default NIC template set contains a role.role.j2.yaml template. This file uses Jinja2 to render additional files for each composable role. For example, if your overcloud uses Compute, Controller, and Ceph Storage roles, the deployment renders new templates based on role.role.j2.yaml , such as the following templates: compute.yaml controller.yaml ceph-storage.yaml . 8.5. Enabling basic network isolation This procedure shows you how to enable basic network isolation using one of the default NIC templates. In this case, it is the single NIC with VLANs template ( single-nic-vlans ). Procedure When running the openstack overcloud deploy command, ensure that you include the rendered environment file names for the following files: The custom network_data file. The rendered file name of the default network isolation. The rendered file name of the default network environment file. The rendered file name of the default network interface configuration Any additional environment files relevant to your configuration. For example: | [
"resource_registry: # networks as defined in network_data.yaml OS::TripleO::Network::Storage: ../network/storage.yaml OS::TripleO::Network::StorageMgmt: ../network/storage_mgmt.yaml OS::TripleO::Network::InternalApi: ../network/internal_api.yaml OS::TripleO::Network::Tenant: ../network/tenant.yaml OS::TripleO::Network::External: ../network/external.yaml # Port assignments for the VIPs OS::TripleO::Network::Ports::StorageVipPort: ../network/ports/storage.yaml OS::TripleO::Network::Ports::StorageMgmtVipPort: ../network/ports/storage_mgmt.yaml OS::TripleO::Network::Ports::InternalApiVipPort: ../network/ports/internal_api.yaml OS::TripleO::Network::Ports::ExternalVipPort: ../network/ports/external.yaml OS::TripleO::Network::Ports::RedisVipPort: ../network/ports/vip.yaml # Port assignments by role, edit role definition to assign networks to roles. # Port assignments for the Controller OS::TripleO::Controller::Ports::StoragePort: ../network/ports/storage.yaml OS::TripleO::Controller::Ports::StorageMgmtPort: ../network/ports/storage_mgmt.yaml OS::TripleO::Controller::Ports::InternalApiPort: ../network/ports/internal_api.yaml OS::TripleO::Controller::Ports::TenantPort: ../network/ports/tenant.yaml OS::TripleO::Controller::Ports::ExternalPort: ../network/ports/external.yaml # Port assignments for the Compute OS::TripleO::Compute::Ports::StoragePort: ../network/ports/storage.yaml OS::TripleO::Compute::Ports::InternalApiPort: ../network/ports/internal_api.yaml OS::TripleO::Compute::Ports::TenantPort: ../network/ports/tenant.yaml # Port assignments for the CephStorage OS::TripleO::CephStorage::Ports::StoragePort: ../network/ports/storage.yaml OS::TripleO::CephStorage::Ports::StorageMgmtPort: ../network/ports/storage_mgmt.yaml",
"openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml",
"cp /usr/share/openstack-tripleo-heat-templates/network_data.yaml /home/stack/.",
"- name: InternalApi name_lower: internal_api vip: true vlan: 201 ip_subnet: '172.16.2.0/24' allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]",
"{%- for network in networks if network.enabled|default(true) and network.name in role.networks %} - type: vlan vlan_id: get_param: {{network.name}}NetworkVlanID addresses: - ip_netmask: get_param: {{network.name}}IpSubnet {%- if network.name in role.default_route_networks %}",
"- type: vlan vlan_id: get_param: StorageNetworkVlanID device: bridge_name addresses: - ip_netmask: get_param: StorageIpSubnet - type: vlan vlan_id: get_param: InternalApiNetworkVlanID device: bridge_name addresses: - ip_netmask: get_param: InternalApiIpSubnet - type: vlan vlan_id: get_param: TenantNetworkVlanID device: bridge_name addresses: - ip_netmask: get_param: TenantIpSubnet",
"openstack overcloud deploy --templates -n /home/stack/network_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/basic-network-isolation |
3.2.3. Tuning CPUfreq Policy and Speed | 3.2.3. Tuning CPUfreq Policy and Speed Once you have chosen an appropriate CPUfreq governor, you can view CPU speed and policy information with the cpupower frequency-info command and further tune the speed of each CPU with options for cpupower frequency-set . For cpupower frequency-info , the following options are available: --freq - Shows the current speed of the CPU according to the CPUfreq core, in KHz. --hwfreq - Shows the current speed of the CPU according to the hardware, in KHz (only available as root). --driver - Shows what CPUfreq driver is used to set the frequency on this CPU. --governors - Shows the CPUfreq governors available in this kernel. If you wish to use a CPUfreq governor that is not listed in this file, refer to Procedure 3.2, "Enabling a CPUfreq Governor" in Section 3.2.2, "CPUfreq Setup" for instructions on how to do so. --affected-cpus - Lists CPUs that require frequency coordination software. --policy - Shows the range of the current CPUfreq policy, in KHz, and the currently active governor. --hwlimits - Lists available frequencies for the CPU, in KHz. For cpupower frequency-set , the following options are available: --min <freq> and --max <freq> - Set the policy limits of the CPU, in KHz. Important When setting policy limits, you should set --max before --min . --freq <freq> - Set a specific clock speed for the CPU, in KHz. You can only set a speed within the policy limits of the CPU (as per --min and --max ). --governor <gov> - Set a new CPUfreq governor. Note If you do not have the cpupowerutils package installed, CPUfreq settings can be viewed in the tunables found in /sys/devices/system/cpu/ [cpuid] /cpufreq/ . Settings and values can be changed by writing to these tunables. For example, to set the minimum clock speed of cpu0 to 360 KHz, use: | [
"echo 360000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/power_management_guide/tuning_cpufreq_policy_and_speed |
19.7. Administering User Tasks From the Administration Portal | 19.7. Administering User Tasks From the Administration Portal 19.7.1. Adding Users and Assigning VM Portal Permissions Users must be created already before they can be added and assigned roles and permissions. The roles and permissions assigned in this procedure give the user the permission to log in to the VM Portal and to start creating virtual machines. The procedure also applies to group accounts. Adding Users and Assigning VM Portal Permissions On the header bar, click Administration Configure to open the Configure window. Click System Permissions . Click Add to open the Add System Permission to User window. Select a profile under Search . The profile is the domain you want to search. Enter a name or part of a name in the search text field, and click GO . Alternatively, click GO to view a list of all users and groups. Select the check boxes for the appropriate users or groups. Select an appropriate role to assign under Role to Assign . The UserRole role gives the user account the permission to log in to the VM Portal. Click OK . Log in to the VM Portal to verify that the user account has the permissions to log in. 19.7.2. Viewing User Information Viewing User Information Click Administration Users to display the list of authorized users. Click the user's name to open the details view, usually with the General tab displaying general information, such as the domain name, email and status of the user. The other tabs allow you to view groups, permissions, quotas, and events for the user. For example, to view the groups to which the user belongs, click the Directory Groups tab. 19.7.3. Viewing User Permissions on Resources Users can be assigned permissions on specific resources or a hierarchy of resources. You can view the assigned users and their permissions on each resource. Viewing User Permissions on Resources Find and click the resource's name to open the details view. Click the Permissions tab to list the assigned users, the user's role, and the inherited permissions for the selected resource. 19.7.4. Removing Users When a user account is no longer required, remove it from Red Hat Virtualization. Removing Users Click Administration Users to display the list of authorized users. Select the user to be removed. Ensure the user is not running any virtual machines. Click Remove , then click OK . The user is removed from Red Hat Virtualization, but not from the external directory. 19.7.5. Viewing Logged-In Users You can view the users who are currently logged in, along with session times and other details. Click Administration Active User Sessions to view the Session DB ID , User Name , Authorization provider , User id , Source IP , Session Start Time , and Session Last Active Time for each logged-in user. 19.7.6. Terminating a User Session You can terminate the session of a user who is currently logged in. Terminating a User Session Click Administration Active User Sessions . Select the user session to be terminated. Click Terminate Session . Click OK . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-Red_Hat_Enterprise_Virtualization_Manager_User_Tasks |
Chapter 5. Installing Red Hat Ansible Automation Platform Operator from the OpenShift Container Platform CLI | Chapter 5. Installing Red Hat Ansible Automation Platform Operator from the OpenShift Container Platform CLI Use these instructions to install the Ansible Automation Platform Operator on Red Hat OpenShift Container Platform from the OpenShift Container Platform command-line interface (CLI) using the oc command. 5.1. Prerequisites Access to Red Hat OpenShift Container Platform using an account with operator installation permissions. The OpenShift Container Platform CLI oc command is installed on your local system. Refer to Installing the OpenShift CLI in the Red Hat OpenShift Container Platform product documentation for further information. 5.2. Subscribing a namespace to an operator using the OpenShift Container Platform CLI Use this procedure to subscribe a namespace to an operator. Important You cannot deploy Ansible Automation Platform in the default namespace on your OpenShift Cluster. The aap namespace is recommended. You can use a custom namespace, but it should run only Ansible Automation Platform. You can only subscribe a single instance of the Ansible Automation Platform Operator into a single namespace. Subscribing multiple instances in the same namespace can lead to improper operation for both operator instances. Procedure Create a project for the operator oc new-project ansible-automation-platform Create a file called sub.yaml . Add the following YAML code to the sub.yaml file. --- apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" name: ansible-automation-platform --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ansible-automation-platform-operator namespace: ansible-automation-platform spec: targetNamespaces: - ansible-automation-platform --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ansible-automation-platform namespace: ansible-automation-platform spec: channel: 'stable-2.4' installPlanApproval: Automatic name: ansible-automation-platform-operator source: redhat-operators sourceNamespace: openshift-marketplace --- apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: name: example namespace: ansible-automation-platform spec: replicas: 1 This file creates a Subscription object called ansible-automation-platform that subscribes the ansible-automation-platform namespace to the ansible-automation-platform-operator operator. It then creates an AutomationController object called example in the ansible-automation-platform namespace. To change the automation controller name from example , edit the name field in the kind: AutomationController section of sub.yaml and replace <automation_controller_name> with the name you want to use: apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: name: <automation_controller_name> namespace: ansible-automation-platform Run the oc apply command to create the objects specified in the sub.yaml file: oc apply -f sub.yaml To verify that the namespace has been successfully subscribed to the ansible-automation-platform-operator operator, run the oc get subs command: USD oc get subs -n ansible-automation-platform For further information about subscribing namespaces to operators, see Installing from OperatorHub using the CLI in the Red Hat OpenShift Container Platform Operators guide. You can use the OpenShift Container Platform CLI to fetch the web address and the password of the Automation controller that you created. 5.3. Fetching Automation controller login details from the OpenShift Container Platform CLI To login to the Automation controller, you need the web address and the password. 5.3.1. Fetching the automation controller web address A Red Hat OpenShift Container Platform route exposes a service at a host name, so that external clients can reach it by name. When you created the automation controller instance, a route was created for it. The route inherits the name that you assigned to the automation controller object in the YAML file. Use the following command to fetch the routes: oc get routes -n <controller_namespace> In the following example, the example automation controller is running in the ansible-automation-platform namespace. USD oc get routes -n ansible-automation-platform NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD example example-ansible-automation-platform.apps-crc.testing example-service http edge/Redirect None The address for the automation controller instance is example-ansible-automation-platform.apps-crc.testing . 5.3.2. Fetching the automation controller password The YAML block for the automation controller instance in sub.yaml assigns values to the name and admin_user keys. Use these values in the following command to fetch the password for the automation controller instance. oc get secret/<controller_name>-<admin_user>-password -o yaml The default value for admin_user is admin . Modify the command if you changed the admin username in sub.yaml . The following example retrieves the password for an automation controller object called example : oc get secret/example-admin-password -o yaml The password for the automation controller instance is listed in the metadata field in the output: USD oc get secret/example-admin-password -o yaml apiVersion: v1 data: password: ODzLODzLODzLODzLODzLODzLODzLODzLODzLODzLODzL kind: Secret metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"v1","kind":"Secret","metadata":{"labels":{"app.kubernetes.io/component":"automationcontroller","app.kubernetes.io/managed-by":"automationcontroller-operator","app.kubernetes.io/name":"example","app.kubernetes.io/operator-version":"","app.kubernetes.io/part-of":"example"},"name":"example-admin-password","namespace":"ansible-automation-platform"},"stringData":{"password":"88TG88TG88TG88TG88TG88TG88TG88TG"}}' creationTimestamp: "2021-11-03T00:02:24Z" labels: app.kubernetes.io/component: automationcontroller app.kubernetes.io/managed-by: automationcontroller-operator app.kubernetes.io/name: example app.kubernetes.io/operator-version: "" app.kubernetes.io/part-of: example name: example-admin-password namespace: ansible-automation-platform resourceVersion: "185185" uid: 39393939-5252-4242-b929-665f665f665f For this example, the password is 88TG88TG88TG88TG88TG88TG88TG88TG . 5.4. Additional resources For more information on running operators on OpenShift Container Platform, navigate to the OpenShift Container Platform product documentation and click the Operators - Working with Operators in OpenShift Container Platform guide. | [
"new-project ansible-automation-platform",
"--- apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" name: ansible-automation-platform --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ansible-automation-platform-operator namespace: ansible-automation-platform spec: targetNamespaces: - ansible-automation-platform --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ansible-automation-platform namespace: ansible-automation-platform spec: channel: 'stable-2.4' installPlanApproval: Automatic name: ansible-automation-platform-operator source: redhat-operators sourceNamespace: openshift-marketplace --- apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: name: example namespace: ansible-automation-platform spec: replicas: 1",
"apiVersion: automationcontroller.ansible.com/v1beta1 kind: AutomationController metadata: name: <automation_controller_name> namespace: ansible-automation-platform",
"apply -f sub.yaml",
"oc get subs -n ansible-automation-platform",
"get routes -n <controller_namespace>",
"oc get routes -n ansible-automation-platform NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD example example-ansible-automation-platform.apps-crc.testing example-service http edge/Redirect None",
"get secret/<controller_name>-<admin_user>-password -o yaml",
"get secret/example-admin-password -o yaml",
"oc get secret/example-admin-password -o yaml apiVersion: v1 data: password: ODzLODzLODzLODzLODzLODzLODzLODzLODzLODzLODzL kind: Secret metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: '{\"apiVersion\":\"v1\",\"kind\":\"Secret\",\"metadata\":{\"labels\":{\"app.kubernetes.io/component\":\"automationcontroller\",\"app.kubernetes.io/managed-by\":\"automationcontroller-operator\",\"app.kubernetes.io/name\":\"example\",\"app.kubernetes.io/operator-version\":\"\",\"app.kubernetes.io/part-of\":\"example\"},\"name\":\"example-admin-password\",\"namespace\":\"ansible-automation-platform\"},\"stringData\":{\"password\":\"88TG88TG88TG88TG88TG88TG88TG88TG\"}}' creationTimestamp: \"2021-11-03T00:02:24Z\" labels: app.kubernetes.io/component: automationcontroller app.kubernetes.io/managed-by: automationcontroller-operator app.kubernetes.io/name: example app.kubernetes.io/operator-version: \"\" app.kubernetes.io/part-of: example name: example-admin-password namespace: ansible-automation-platform resourceVersion: \"185185\" uid: 39393939-5252-4242-b929-665f665f665f"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/installing-aap-operator-cli |
Install | Install builds for Red Hat OpenShift 1.1 Installing Builds Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.1/html/install/index |
Chapter 10. Reclaiming space on target volumes | Chapter 10. Reclaiming space on target volumes The deleted files or chunks of zero data Sometimes take up storage space on the Ceph cluster resulting in inaccurate reporting of the available storage space. The reclaim space operation removes such discrepancies by executing the following operations on the target volume: fstrim - This operation is executed on volumes that are in Filesystem mode and only if the volume is mounted to a pod at the time of execution of reclaim space operation. rbd sparsify - This operation is executed when the volume is not attached to any pods and reclaims the space occupied by chunks of 4M-sized zeroed data. Note The reclaim space operation is supported only by the Ceph RBD volumes. The reclaim space operation involves a performance penalty when it is being executed. You can use one of the following methods to reclaim the space: Enabling reclaim space operation using Annotating PersistentVolumeClaims (Recommended method to use for enabling reclaim space operation) Enabling reclaim space operation using ReclaimSpaceJob Enabling reclaim space operation using ReclaimSpaceCronJob 10.1. Enabling reclaim space operation using Annotating PersistentVolumeClaims Use this procedure to annotate PersistentVolumeClaims so that it can invoke the reclaim space operation automatically based on a given schedule. Note The schedule value is in the same format as the Kubernetes CronJobs which sets the and/or interval of the recurring operation request. Recommended schedule interval is @weekly . If the schedule interval value is empty or in an invalid format, then the default schedule value is set to @weekly . Minimum supported interval between each scheduled operation is at least 24 hours. For example, @daily (At 00:00 every day) or 0 3 * * * (At 3:00 every day). Schedule the ReclaimSpace operation during off-peak, maintenance window, or the interval when the workload input/output is expected to be low. ReclaimSpaceCronJob is recreated when the schedule is modified. It is automatically deleted when the annotation is removed. Procedure Get the persistent volume claim (PVC) details. Add annotation reclaimspace.csiaddons.openshift.io/schedule=@monthly to the PVC to create reclaimspacecronjob . Verify that reclaimspacecronjob is created in the format, "<pvc-name>-xxxxxxx" . Modify the schedule to run this job automatically. Verify that the schedule for reclaimspacecronjob has been modified. 10.2. Enabling reclaim space operation using ReclaimSpaceJob ReclaimSpaceJob is a namespaced custom resource (CR) designed to invoke reclaim space operation on the target volume. This is a one time method that immediately starts the reclaim space operation. You have to repeat the creation of ReclaimSpaceJob CR to repeat the reclaim space operation when required. Note Recommended interval between the reclaim space operations is weekly . Ensure that the minimum interval between each operation is at least 24 hours . Schedule the reclaim space operation during off-peak, maintenance window, or when the workload input/output is expected to be low. Procedure Create and apply the following custom resource for reclaim space operation: where, target Indicates the volume target on which the operation is performed. persistentVolumeClaim Name of the PersistentVolumeClaim . backOfflimit Specifies the maximum number of retries before marking the reclaim space operation as failed . The default value is 6 . The allowed maximum and minimum values are 60 and 0 respectively. retryDeadlineSeconds Specifies the duration in which the operation might retire in seconds and it is relative to the start time. The value must be a positive integer. The default value is 600 seconds and the allowed maximum value is 1800 seconds. timeout Specifies the timeout in seconds for the grpc request sent to the CSI driver. If the timeout value is not specified, it defaults to the value of global reclaimspace timeout. Minimum allowed value for timeout is 60. Delete the custom resource after completion of the operation. 10.3. Enabling reclaim space operation using ReclaimSpaceCronJob ReclaimSpaceCronJob invokes the reclaim space operation based on the given schedule such as daily, weekly, and so on. You have to create ReclaimSpaceCronJob only once for a persistent volume claim. The CSI-addons controller creates a ReclaimSpaceJob at the requested time and interval with the schedule attribute. Note Recommended schedule interval is @weekly . Minimum interval between each scheduled operation should be at least 24 hours. For example, @daily (At 00:00 every day) or "0 3 * * *" (At 3:00 every day). Schedule the ReclaimSpace operation during off-peak, maintenance window, or the interval when workload input/output is expected to be low. Procedure Create and apply the following custom resource for reclaim space operation where, concurrencyPolicy Describes the changes when a new ReclaimSpaceJob is scheduled by the ReclaimSpaceCronJob , while a ReclaimSpaceJob is still running. The default Forbid prevents starting a new job whereas Replace can be used to delete the running job potentially in a failure state and create a new one. failedJobsHistoryLimit Specifies the number of failed ReclaimSpaceJobs that are kept for troubleshooting. jobTemplate Specifies the ReclaimSpaceJob.spec structure that describes the details of the requested ReclaimSpaceJob operation. successfulJobsHistoryLimit Specifies the number of successful ReclaimSpaceJob operations. schedule Specifieds the and/or interval of the recurring operation request and it is in the same format as the Kubernetes CronJobs . Delete the ReclaimSpaceCronJob custom resource when execution of reclaim space operation is no longer needed or when the target PVC is deleted. 10.4. Customising timeouts required for Reclaim Space Operation Depending on the RBD volume size and its data pattern, Reclaim Space Operation might fail with the context deadline exceeded error. You can avoid this by increasing the timeout value. The following example shows the failed status by inspecting -o yaml of the corresponding ReclaimSpaceJob : Example You can also set custom timeouts at global level by creating the following configmap : Example Restart the csi-addons operator pod. All Reclaim Space Operations started after the above configmap creation use the customized timeout. | [
"oc get pvc data-pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO ocs-storagecluster-ceph-rbd 20h",
"oc annotate pvc data-pvc \"reclaimspace.csiaddons.openshift.io/schedule=@monthly\"",
"persistentvolumeclaim/data-pvc annotated",
"oc get reclaimspacecronjobs.csiaddons.openshift.io",
"NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @monthly 3s",
"oc annotate pvc data-pvc \"reclaimspace.csiaddons.openshift.io/schedule=@weekly\" --overwrite=true",
"persistentvolumeclaim/data-pvc annotated",
"oc get reclaimspacecronjobs.csiaddons.openshift.io",
"NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 @weekly 3s",
"apiVersion: csiaddons.openshift.io/v1alpha1 kind: ReclaimSpaceJob metadata: name: sample-1 spec: target: persistentVolumeClaim: pvc-1 timeout: 360",
"apiVersion: csiaddons.openshift.io/v1alpha1 kind: ReclaimSpaceCronJob metadata: name: reclaimspacecronjob-sample spec: jobTemplate: spec: target: persistentVolumeClaim: data-pvc timeout: 360 schedule: '@weekly' concurrencyPolicy: Forbid",
"Status: Completion Time: 2023-03-08T18:56:18Z Conditions: Last Transition Time: 2023-03-08T18:56:18Z Message: Failed to make controller request: context deadline exceeded Observed Generation: 1 Reason: failed Status: True Type: Failed Message: Maximum retry limit reached Result: Failed Retries: 6 Start Time: 2023-03-08T18:33:55Z",
"apiVersion: v1 kind: ConfigMap metadata: name: csi-addons-config namespace: openshift-storage data: \"reclaim-space-timeout\": \"6m\"",
"delete po -n openshift-storage -l \"app.kubernetes.io/name=csi-addons\""
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/managing_and_allocating_storage_resources/reclaiming-space-on-target-volumes_rhodf |
Chapter 2. About Red Hat OpenShift GitOps | Chapter 2. About Red Hat OpenShift GitOps Red Hat OpenShift GitOps is an Operator that uses Argo CD as the declarative GitOps engine. It enables GitOps workflows across multicluster OpenShift and Kubernetes infrastructure. Using Red Hat OpenShift GitOps, administrators can consistently configure and deploy Kubernetes-based infrastructure and applications across clusters and development lifecycles. Red Hat OpenShift GitOps is based on the open source project Argo CD and provides a similar set of features to what the upstream offers, with additional automation, integration into Red Hat OpenShift Container Platform and the benefits of Red Hat's enterprise support, quality assurance and focus on enterprise security. Note Because Red Hat OpenShift GitOps releases on a different cadence from OpenShift Container Platform, the Red Hat OpenShift GitOps documentation is now available as separate documentation sets for each minor version of the product. The Red Hat OpenShift GitOps documentation is available at https://docs.openshift.com/gitops/ . Documentation for specific versions is available using the version selector dropdown, or directly by adding the version to the URL, for example, https://docs.openshift.com/gitops/1.8 . In addition, the Red Hat OpenShift GitOps documentation is also available on the Red Hat Portal at https://access.redhat.com/documentation/en-us/red_hat_openshift_gitops/ . For additional information about the Red Hat OpenShift GitOps life cycle and supported platforms, refer to the Platform Life Cycle Policy . Red Hat OpenShift GitOps ensures consistency in applications when you deploy them to different clusters in different environments, such as: development, staging, and production. Red Hat OpenShift GitOps organizes the deployment process around the configuration repositories and makes them the central element. It always has at least two repositories: Application repository with the source code Environment configuration repository that defines the desired state of the application These repositories contain a declarative description of the infrastructure you need in your specified environment. They also contain an automated process to make your environment match the described state. Red Hat OpenShift GitOps uses Argo CD to maintain cluster resources. Argo CD is an open-source declarative tool for the continuous deployment (CD) of applications. Red Hat OpenShift GitOps implements Argo CD as a controller so that it continuously monitors application definitions and configurations defined in a Git repository. Then, Argo CD compares the specified state of these configurations with their live state on the cluster. Argo CD reports any configurations that deviate from their specified state. These reports allow administrators to automatically or manually resync configurations to the defined state. Therefore, Argo CD enables you to deliver global custom resources, like the resources that are used to configure OpenShift Container Platform clusters. 2.1. Key features Red Hat OpenShift GitOps helps you automate the following tasks: Ensure that the clusters have similar states for configuration, monitoring, and storage Apply or revert configuration changes to multiple OpenShift Container Platform clusters Associate templated configuration with different environments Promote applications across clusters, from staging to production 2.2. Glossary of common terms for OpenShift GitOps This glossary defines common OpenShift GitOps terms. Application Controller (Argo CD Application Controller) A controller that that performs the following actions: Continuously watches the Git repository for changes Monitors running applications Compares the live state against the desired target state Deploys new changes Examples include Argo CD Application Controller detecting an OutOfSync application state and optionally taking corrective action. Application custom resource (CR) A YAML manifest that describes how you intend to deploy the resources of your Argo CD application. Application custom resource definition (CRD) A resource object representing a deployed Argo CD application instance in an environment. ApplicationSet CRD (Argo CD application set) A resource object and a CRD that automatically generates Argo CD applications based on the contents of an ApplicationSet CR. Cluster administrators use this CRD to define a single ApplicationSet CR to generate and update multiple corresponding Argo CD Application CRs. ApplicationSet Controller (Argo CD ApplicationSet Controller) A custom Kubernetes controller that exists within Argo CD and processes ApplicationSet CRs. This controller automatically creates, updates, and deletes Argo CD applications based on the contents of an ApplicationSet CR. AppProject CRD A CRD representing a logical grouping of applications within a project that governs where and how an application is allowed to manage resources. You can use the AppProject CRD to restrict where and how Argo CD users are allowed to access those applications. Managing the AppProject instances is an action typically restricted to Argo CD administrators. Argo CD API server A gRPC/REST server that exposes the API consumed by the web UI, CLI, continuous integration (CI), and continuous deployment (CD) systems. Argo CD An open-source declarative tool that automates the continuous deployment of Kubernetes-based infrastructure and applications across clusters and development lifecycles. Argo CD application An application that tracks the continuous deployment of individual Kubernetes resources from the GitOps repository, where the resources are defined as manifests, to a target Kubernetes cluster. ArgoCD CRD A Kubernetes CRD that describes the wanted state for a given Argo CD cluster that allows you to configure the components that make up an Argo CD cluster. Argo CD instance A single installation of Argo CD within a namespace that encapsulates all of the stateful aspects of a running Argo CD. Each Argo CD instance usually has a one-to-one mapping with an ArgoCD CR. Argo CD project An entity within Argo CD that refers to the Argo CD open source project's specific concept of projects , and the corresponding AppProject CR . An Argo CD project lets you define multiple namespaces and even clusters as allowed destinations. In contrast, an OpenShift project is restricted to a single namespace and is equivalent in concept to a namespace. Argo CD project controls the behavior of Argo CD by restricting access to Git repositories and remote clusters. Examples include using the Argo CD project to control users by restricting who can access certain Argo CD applications or cluster resources through the Argo CD UI or Argo CD CLI. Argo CD repository server (Argo CD-repo-server) An Argo CD component that performs the following actions: Reads from source repositories such as Git, Helm, or Open Container Initiative (OCI) Generates corresponding application manifests Runs custom configuration management tooling Returns the result to the Argo CD Application Controller Argo CD resource ( ArgoCD CR) A CR that describes the wanted state for a given Argo CD instance. It allows you to configure the components and settings that make up an Argo CD instance. At any given time, you can have only one ArgoCD CR within a namespace. Argo CD server (Argo CD-server) A server that provides the API and UI for Argo CD. Argo Rollouts A controller that you can use for managing the progressive deployment of applications hosted on Kubernetes and OpenShift Container Platform clusters. This controller has a set of CRDs that provides advanced deployment capabilities such as blue-green, canary, canary analysis, and experimentation. Cluster-scoped instance A mode in which Argo CD is configured to manage all resources on the cluster including certain cluster-specific resources such as cluster configuration, cluster RBAC, Operator resources, platform Operators, or secrets. Control plane (GitOps control plane) In the GitOps context, you can have a control plane for every Argo CD you install. A GitOps control plane is any namespace where you can install an Argo CD. This control plane allows you to provision, manage, and operate Argo CD across networks, instances, and clusters. Within a control plane namespace, Argo CD maintains a set of following Kubernetes resources, which define the continuous deployment between the source Git repository and destination clusters: Argo CD Application CRs ConfigMap API objects Secret objects representing the GitOps repository credentials and cluster credentials for deployment targets openshift-gitops is the control plane namespace for the default Argo CD instance. Declarative setup A declarative description of the infrastructure required in your specified environment, for system and application setup or configuration. You can specify this description in a YAML configuration file in the Git repository. The declarative setup contains an automated process to make your environment and infrastucture match the described state. Examples include defining Argo CD applications, projects, and settings declaratively by using YAML manifests. Default Argo CD instance (Default cluster-scoped instance) A default instance that a Red Hat OpenShift GitOps Operator instantiates immediately after its installation, in the openshift-gitops namespace, with additional permissions for managing certain cluster-scoped resources. GitOps A declarative way to implement continuous deployment for cloud native applications. In GitOps, a Git repository contains deployment resources, which Argo CD keeps synchronizing with its cluster state. GitOps CLI (GitOps argocd CLI) A tool to configure and manage Red Hat OpenShift GitOps and Argo CD resources from the command line. Instance scopes Modes that determine how you want to operate an Argo CD instance. The available modes are cluster-scoped instance and namespace-scoped instance . Live state The live state of application resources on a target cluster. Local cluster A cluster where you install Argo CD. Manifest In the GitOps context, a manifest is a YAML representation of Kubernetes resources defined within a GitOps repository, with the intent to deploy those resources to a target Kubernetes cluster. Examples include the YAML representation of resources such as Deployment , ConfigMap , or Secret . Multitenancy A software architecture where a single software instance serves multiple distinct user groups. Namespace-scoped instance (Application delivery instance) A mode in which Argo CD is configured to manage resources in only certain namespaces on a cluster and use the resources for application delivery. Notifications Controller (Argo CD Notifications Controller) A controller that continuously monitors Argo CD applications and provides a flexible way to notify users about important changes in the application state. Progressive delivery In the GitOps context, progressive delivery is a process of releasing application updates in a controlled and gradual manner. Red Hat OpenShift GitOps An Operator that uses Argo CD as the declarative GitOps engine to enable GitOps workflows across multicluster OpenShift and Kubernetes infrastructures. Refresh The process of comparing the latest code in the Git repository with the live state and determining the difference. For example, in the Argo CD UI, when you click Refresh , Argo CD connects to an application's target Git repository, retrieves the content, and then generates manifests from that content. Argo CD then compares that target state against the live cluster state. Remote cluster A cluster that you can add to Argo CD either declaratively or by using the GitOps CLI. Remote cluster is distinct from the local cluster where Argo CD is installed. Resource Exclusion A configuration you use to exclude resources from discovery and sync so that Argo CD is unaware of them. Resource Inclusion A configuration you use to include resources to discover, sync, and restrict the list of managed resources globally. Single tenancy A software architecture where a single software instance serves a single user or group of users. Sync The process of synchronizing the live state of an application's cluster resources with the target state defined within the Git repository to ensure consistency. Examples include syncing an application by applying changes to a cluster by using the Argo CD UI. Sync status The status of an application that indicates whether the live state matches the target state. Target state The wanted state of application resources, as represented by files in a Git repository. User-defined Argo CD instance A custom Argo CD instance that you install and deploy to manage cluster configurations or deploy applications. By default, any new user-defined instance has permissions to manage resources only in the namespace where it is deployed. You can create a user-defined Argo CD instance in any namespace, other than the openshift-gitops namespace. Workload Any process, usually defined within resources such as Deployment , StatefulSet , ReplicaSet , Job , or Pod , running within a container. Examples include a Spring Boot application, a NodeJS Express application, or a Ruby on Rails application. 2.3. Additional resources Extending the Kubernetes API with custom resource definitions Managing resources from custom resource definitions What is GitOps? What is an OpenShift project? Specification of an AppProject CRD | null | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/understanding_openshift_gitops/about-redhat-openshift-gitops |
3.3. Unmounting a GFS2 File System | 3.3. Unmounting a GFS2 File System GFS2 file systems that have been mounted manually rather than automatically through Pacemaker will not be known to the system when file systems are unmounted at system shutdown. As a result, the GFS2 script will not unmount the GFS2 file system. After the GFS2 shutdown script is run, the standard shutdown process kills off all remaining user processes, including the cluster infrastructure, and tries to unmount the file system. This unmount will fail without the cluster infrastructure and the system will hang. To prevent the system from hanging when the GFS2 file systems are unmounted, you should do one of the following: Always use Pacemaker to manage the GFS2 file system. For information on configuring a GFS2 file system in a Pacemaker cluster, see Chapter 5, Configuring a GFS2 File System in a Cluster . If a GFS2 file system has been mounted manually with the mount command, be sure to unmount the file system manually with the umount command before rebooting or shutting down the system. If your file system hangs while it is being unmounted during system shutdown under these circumstances, perform a hardware reboot. It is unlikely that any data will be lost since the file system is synced earlier in the shutdown process. The GFS2 file system can be unmounted the same way as any Linux file system, by using the umount command. Note The umount command is a Linux system command. Information about this command can be found in the Linux umount command man pages. Usage MountPoint Specifies the directory where the GFS2 file system is currently mounted. | [
"umount MountPoint"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/s1-manage-unmountfs |
4.164. lohit-tamil-fonts | 4.164. lohit-tamil-fonts 4.164.1. RHEA-2011:1139 - lohit-tamil-fonts enhancement update An updated lohit-tamil-fonts package which adds one enhancement is now available for Red Hat Enterprise Linux 6. The lohit-tamil-fonts package provides a free Tamil TrueType/OpenType font. Enhancement BZ# 691295 Unicode 6.0, the most recent major version of the Unicode standard, introduces the Indian Rupee Sign (U+20B9), the new official Indian currency symbol. With this update, the lohit-tamil-fonts package now includes a glyph for this new character. All users requiring the Indian rupee sign should install this updated package, which adds this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/lohit-tamil-fonts |
Chapter 3. alt-java and java uses | Chapter 3. alt-java and java uses Depending on your needs, you can use either the alt-java binary or the java binary to run your application's code. 3.1. alt-java usage Use alt-java for any applications that run untrusted code. Be aware that using alt-java is not a solution to all speculative execution vulnerabilities. 3.2. java usage Use the java binary for performance-critical tasks in a secure environment. Most RPMs in a Red Hat Enterprise Linux system use the java binary, except for IcedTea-Web. IcedTea-Web uses alt-java as its launcher, so you can use IcedTea-Web to run untrusted code. Additional resources See Java and Speculative Execution Vulnerabilities . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_alt-java_with_red_hat_build_of_openjdk/using-java-and-altjava |
Chapter 1. What is GitOps? | Chapter 1. What is GitOps? GitOps is a declarative way to implement continuous deployment for cloud native applications. You can use GitOps to create repeatable processes for managing OpenShift Container Platform clusters and applications across multi-cluster Kubernetes environments. GitOps handles and automates complex deployments at a fast pace, saving time during deployment and release cycles. The GitOps workflow pushes an application through development, testing, staging, and production. GitOps either deploys a new application or updates an existing one, so you only need to update the repository; GitOps automates everything else. GitOps is a set of practices that use Git pull requests to manage infrastructure and application configurations. In GitOps, the Git repository is the only source of truth for system and application configuration. This Git repository contains a declarative description of the infrastructure you need in your specified environment and contains an automated process to make your environment match the described state. Also, it contains the entire state of the system so that the trail of changes to the system state are visible and auditable. By using GitOps, you resolve the issues of infrastructure and application configuration sprawl. GitOps defines infrastructure and application definitions as code. Then, it uses this code to manage multiple workspaces and clusters to simplify the creation of infrastructure and application configurations. By following the principles of the code, you can store the configuration of clusters and applications in Git repositories, and then follow the Git workflow to apply these repositories to your chosen clusters. You can apply the core principles of developing and maintaining software in a Git repository to the creation and management of your cluster and application configuration files. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.11/html/understanding_openshift_gitops/what-is-gitops |
Chapter 3. Updating GitOps ZTP | Chapter 3. Updating GitOps ZTP You can update the GitOps Zero Touch Provisioning (ZTP) infrastructure independently from the hub cluster, Red Hat Advanced Cluster Management (RHACM), and the managed OpenShift Container Platform clusters. Note You can update the Red Hat OpenShift GitOps Operator when new versions become available. When updating the GitOps ZTP plugin, review the updated files in the reference configuration and ensure that the changes meet your requirements. 3.1. Overview of the GitOps ZTP update process You can update GitOps Zero Touch Provisioning (ZTP) for a fully operational hub cluster running an earlier version of the GitOps ZTP infrastructure. The update process avoids impact on managed clusters. Note Any changes to policy settings, including adding recommended content, results in updated polices that must be rolled out to the managed clusters and reconciled. At a high level, the strategy for updating the GitOps ZTP infrastructure is as follows: Label all existing clusters with the ztp-done label. Stop the ArgoCD applications. Install the new GitOps ZTP tools. Update required content and optional changes in the Git repository. Update and restart the application configuration. 3.2. Preparing for the upgrade Use the following procedure to prepare your site for the GitOps Zero Touch Provisioning (ZTP) upgrade. Procedure Get the latest version of the GitOps ZTP container that has the custom resources (CRs) used to configure Red Hat OpenShift GitOps for use with GitOps ZTP. Extract the argocd/deployment directory by using the following commands: USD mkdir -p ./update USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.15 extract /home/ztp --tar | tar x -C ./update The /update directory contains the following subdirectories: update/extra-manifest : contains the source CR files that the SiteConfig CR uses to generate the extra manifest configMap . update/source-crs : contains the source CR files that the PolicyGenTemplate CR uses to generate the Red Hat Advanced Cluster Management (RHACM) policies. update/argocd/deployment : contains patches and YAML files to apply on the hub cluster for use in the step of this procedure. update/argocd/example : contains example SiteConfig and PolicyGenTemplate files that represent the recommended configuration. Update the clusters-app.yaml and policies-app.yaml files to reflect the name of your applications and the URL, branch, and path for your Git repository. If the upgrade includes changes that results in obsolete policies, the obsolete policies should be removed prior to performing the upgrade. Diff the changes between the configuration and deployment source CRs in the /update folder and Git repo where you manage your fleet site CRs. Apply and push the required changes to your site repository. Important When you update GitOps ZTP to the latest version, you must apply the changes from the update/argocd/deployment directory to your site repository. Do not use older versions of the argocd/deployment/ files. 3.3. Labeling the existing clusters To ensure that existing clusters remain untouched by the tool updates, label all existing managed clusters with the ztp-done label. Note This procedure only applies when updating clusters that were not provisioned with Topology Aware Lifecycle Manager (TALM). Clusters that you provision with TALM are automatically labeled with ztp-done . Procedure Find a label selector that lists the managed clusters that were deployed with GitOps Zero Touch Provisioning (ZTP), such as local-cluster!=true : USD oc get managedcluster -l 'local-cluster!=true' Ensure that the resulting list contains all the managed clusters that were deployed with GitOps ZTP, and then use that selector to add the ztp-done label: USD oc label managedcluster -l 'local-cluster!=true' ztp-done= 3.4. Stopping the existing GitOps ZTP applications Removing the existing applications ensures that any changes to existing content in the Git repository are not rolled out until the new version of the tools is available. Use the application files from the deployment directory. If you used custom names for the applications, update the names in these files first. Procedure Perform a non-cascaded delete on the clusters application to leave all generated resources in place: USD oc delete -f update/argocd/deployment/clusters-app.yaml Perform a cascaded delete on the policies application to remove all policies: USD oc patch -f policies-app.yaml -p '{"metadata": {"finalizers": ["resources-finalizer.argocd.argoproj.io"]}}' --type merge USD oc delete -f update/argocd/deployment/policies-app.yaml 3.5. Required changes to the Git repository When upgrading the ztp-site-generate container from an earlier release of GitOps Zero Touch Provisioning (ZTP) to 4.10 or later, there are additional requirements for the contents of the Git repository. Existing content in the repository must be updated to reflect these changes. Make required changes to PolicyGenTemplate files: All PolicyGenTemplate files must be created in a Namespace prefixed with ztp . This ensures that the GitOps ZTP application is able to manage the policy CRs generated by GitOps ZTP without conflicting with the way Red Hat Advanced Cluster Management (RHACM) manages the policies internally. Add the kustomization.yaml file to the repository: All SiteConfig and PolicyGenTemplate CRs must be included in a kustomization.yaml file under their respective directory trees. For example: ├── policygentemplates │ ├── site1-ns.yaml │ ├── site1.yaml │ ├── site2-ns.yaml │ ├── site2.yaml │ ├── common-ns.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen-ns.yaml │ ├── group-du-sno-ranGen.yaml │ └── kustomization.yaml └── siteconfig ├── site1.yaml ├── site2.yaml └── kustomization.yaml Note The files listed in the generator sections must contain either SiteConfig or PolicyGenTemplate CRs only. If your existing YAML files contain other CRs, for example, Namespace , these other CRs must be pulled out into separate files and listed in the resources section. The PolicyGenTemplate kustomization file must contain all PolicyGenTemplate YAML files in the generator section and Namespace CRs in the resources section. For example: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - common-ranGen.yaml - group-du-sno-ranGen.yaml - site1.yaml - site2.yaml resources: - common-ns.yaml - group-du-sno-ranGen-ns.yaml - site1-ns.yaml - site2-ns.yaml The SiteConfig kustomization file must contain all SiteConfig YAML files in the generator section and any other CRs in the resources: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - site1.yaml - site2.yaml Remove the pre-sync.yaml and post-sync.yaml files. In OpenShift Container Platform 4.10 and later, the pre-sync.yaml and post-sync.yaml files are no longer required. The update/deployment/kustomization.yaml CR manages the policies deployment on the hub cluster. Note There is a set of pre-sync.yaml and post-sync.yaml files under both the SiteConfig and PolicyGenTemplate trees. Review and incorporate recommended changes Each release may include additional recommended changes to the configuration applied to deployed clusters. Typically these changes result in lower CPU use by the OpenShift platform, additional features, or improved tuning of the platform. Review the reference SiteConfig and PolicyGenTemplate CRs applicable to the types of cluster in your network. These examples can be found in the argocd/example directory extracted from the GitOps ZTP container. 3.6. Installing the new GitOps ZTP applications Using the extracted argocd/deployment directory, and after ensuring that the applications point to your site Git repository, apply the full contents of the deployment directory. Applying the full contents of the directory ensures that all necessary resources for the applications are correctly configured. Procedure To install the GitOps ZTP plugin, patch the ArgoCD instance in the hub cluster with the relevant multicluster engine (MCE) subscription image. Customize the patch file that you previously extracted into the out/argocd/deployment/ directory for your environment. Select the multicluster-operators-subscription image that matches your RHACM version. For RHACM 2.8 and 2.9, use the registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel8:v<rhacm_version> image. For RHACM 2.10 and later, use the registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v<rhacm_version> image. Important The version of the multicluster-operators-subscription image must match the RHACM version. Beginning with the MCE 2.10 release, RHEL 9 is the base image for multicluster-operators-subscription images. Click [Expand for Operator list] in the "Platform Aligned Operators" table in OpenShift Operator Life Cycles to view the complete supported Operators matrix for OpenShift Container Platform. Add the following configuration to the out/argocd/deployment/argocd-openshift-gitops-patch.json file: { "args": [ "-c", "mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator" 1 ], "command": [ "/bin/bash" ], "image": "registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10", 2 3 "name": "policy-generator-install", "imagePullPolicy": "Always", "volumeMounts": [ { "mountPath": "/.config", "name": "kustomize" } ] } 1 Optional: For RHEL 9 images, copy the required universal executable in the /policy-generator/PolicyGenerator-not-fips-compliant folder for the ArgoCD version. 2 Match the multicluster-operators-subscription image to the RHACM version. 3 In disconnected environments, replace the URL for the multicluster-operators-subscription image with the disconnected registry equivalent for your environment. Patch the ArgoCD instance. Run the following command: USD oc patch argocd openshift-gitops \ -n openshift-gitops --type=merge \ --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json In RHACM 2.7 and later, the multicluster engine enables the cluster-proxy-addon feature by default. Apply the following patch to disable the cluster-proxy-addon feature and remove the relevant hub cluster and managed pods that are responsible for this add-on. Run the following command: USD oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json Apply the pipeline configuration to your hub cluster by running the following command: USD oc apply -k out/argocd/deployment 3.7. Rolling out the GitOps ZTP configuration changes If any configuration changes were included in the upgrade due to implementing recommended changes, the upgrade process results in a set of policy CRs on the hub cluster in the Non-Compliant state. With the GitOps Zero Touch Provisioning (ZTP) version 4.10 and later ztp-site-generate container, these policies are set to inform mode and are not pushed to the managed clusters without an additional step by the user. This ensures that potentially disruptive changes to the clusters can be managed in terms of when the changes are made, for example, during a maintenance window, and how many clusters are updated concurrently. To roll out the changes, create one or more ClusterGroupUpgrade CRs as detailed in the TALM documentation. The CR must contain the list of Non-Compliant policies that you want to push out to the managed clusters as well as a list or selector of which clusters should be included in the update. Additional resources For information about the Topology Aware Lifecycle Manager (TALM), see About the Topology Aware Lifecycle Manager configuration . For information about creating ClusterGroupUpgrade CRs, see About the auto-created ClusterGroupUpgrade CR for ZTP . | [
"mkdir -p ./update",
"podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.15 extract /home/ztp --tar | tar x -C ./update",
"oc get managedcluster -l 'local-cluster!=true'",
"oc label managedcluster -l 'local-cluster!=true' ztp-done=",
"oc delete -f update/argocd/deployment/clusters-app.yaml",
"oc patch -f policies-app.yaml -p '{\"metadata\": {\"finalizers\": [\"resources-finalizer.argocd.argoproj.io\"]}}' --type merge",
"oc delete -f update/argocd/deployment/policies-app.yaml",
"├── policygentemplates │ ├── site1-ns.yaml │ ├── site1.yaml │ ├── site2-ns.yaml │ ├── site2.yaml │ ├── common-ns.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen-ns.yaml │ ├── group-du-sno-ranGen.yaml │ └── kustomization.yaml └── siteconfig ├── site1.yaml ├── site2.yaml └── kustomization.yaml",
"apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - common-ranGen.yaml - group-du-sno-ranGen.yaml - site1.yaml - site2.yaml resources: - common-ns.yaml - group-du-sno-ranGen-ns.yaml - site1-ns.yaml - site2-ns.yaml",
"apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - site1.yaml - site2.yaml",
"{ \"args\": [ \"-c\", \"mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator\" 1 ], \"command\": [ \"/bin/bash\" ], \"image\": \"registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10\", 2 3 \"name\": \"policy-generator-install\", \"imagePullPolicy\": \"Always\", \"volumeMounts\": [ { \"mountPath\": \"/.config\", \"name\": \"kustomize\" } ] }",
"oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json",
"oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json",
"oc apply -k out/argocd/deployment"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/edge_computing/ztp-updating-gitops |
Chapter 5. API requests in various languages | Chapter 5. API requests in various languages You can review the following examples of sending API requests to Red Hat Satellite from curl, Ruby, or Python. 5.1. Calling the API in curl You can use curl with the Satellite API to perform various tasks. Red Hat Satellite requires the use of HTTPS, and by default a certificate for host identification. If you have not added the Satellite Server certificate as described in Section 4.1, "SSL authentication overview" , then you can use the --insecure option to bypass certificate checks. For user authentication, you can use the --user option to provide Satellite user credentials in the form --user username:password or, if you do not include the password, the command prompts you to enter it. To reduce security risks, do not include the password as part of the command, because it then becomes part of your shell history. Examples in this section include the password only for the sake of simplicity. Be aware that if you use the --silent option, curl does not display a progress meter or any error messages. Examples in this chapter use the Python json.tool module to format the output. 5.1.1. Passing JSON data to the API request You can pass data to Satellite Server with the API request. The data must be in JSON format. When specifying JSON data with the --data option, you must set the following HTTP headers with the --header option: Use one of the following options to include data with the --data option. JSON-formatted string Enclose the quoted JSON-formatted data in curly braces {} . When passing a value for a JSON type parameter, you must escape quotation marks " with backslashes \ . For example, within curly braces, you must format "Example JSON Variable" as \"Example JSON Variable\" : JSON-formatted file The unquoted JSON-formatted data enclosed in a file and specified by the @ sign and the filename. For example: Using external files for JSON formatted data has the following advantages: You can use your favorite text editor. You can use syntax checker to find and avoid mistakes. You can use tools to check the validity of JSON data or to reformat it. Use the json_verify tool to check the validity of the JSON file: 5.1.2. Retrieving a list of resources This section outlines how to use curl with the Satellite 6 API to request information from Satellite. These examples include both requests and responses. Expect different results for each deployment. Listing users This example is a basic request that returns a list of Satellite resources, Satellite users in this case. Such requests return a list of data wrapped in metadata, while other request types only return the actual object. Example request: Example response: 5.1.3. Creating and modifying resources You can use curl to manipulate resources on your Satellite Server. API calls to Satellite require data in json format. For more information, see Section 5.1.1, "Passing JSON data to the API request" . Creating a user This example creates a user by providing required information in the --data option. Example request: Modifying a user This example modifies given name and login of the test_user that was created in Creating a user . Example request: 5.2. Calling the API in Ruby You can use Ruby with the Satellite API to perform various tasks. Important These are example scripts and commands. Ensure you review these scripts carefully before use, and replace any variables, user names, passwords, and other information to suit your own deployment. 5.2.1. Creating objects by using Ruby This script connects to the Red Hat Satellite 6 API and creates an organization, and then creates three lifecycle environments in the organization. If the organization already exists, the script uses that organization. If any of the lifecycle environments already exist in the organization, the script raises an error and quits. #!/usr/bin/ruby require 'rest-client' require 'json' url = 'https://satellite.example.com/api/v2/' katello_url = "#{url}/katello/api/v2/" USDusername = 'admin' USDpassword = 'changeme' org_name = "MyOrg" environments = [ "Development", "Testing", "Production" ] # Performs a GET by using the passed URL location def get_json(location) response = RestClient::Request.new( :method => :get, :url => location, :user => USDusername, :password => USDpassword, :headers => { :accept => :json, :content_type => :json } ).execute JSON.parse(response.to_str) end # Performs a POST and passes the data to the URL location def post_json(location, json_data) response = RestClient::Request.new( :method => :post, :url => location, :user => USDusername, :password => USDpassword, :headers => { :accept => :json, :content_type => :json}, :payload => json_data ).execute JSON.parse(response.to_str) end # Creates a hash with ids mapping to names for an array of records def id_name_map(records) records.inject({}) do |map, record| map.update(record['id'] => record['name']) end end # Get list of existing organizations orgs = get_json("#{katello_url}/organizations") org_list = id_name_map(orgs['results']) if !org_list.has_value?(org_name) # If our organization is not found, create it puts "Creating organization: \t#{org_name}" org_id = post_json("#{katello_url}/organizations", JSON.generate({"name"=> org_name}))["id"] else # Our organization exists, so let's grab it org_id = org_list.key(org_name) puts "Organization \"#{org_name}\" exists" end # Get list of organization's lifecycle environments envs = get_json("#{katello_url}/organizations/#{org_id}/environments") env_list = id_name_map(envs['results']) prior_env_id = env_list.key("Library") # Exit the script if at least one life cycle environment already exists environments.each do |e| if env_list.has_value?(e) puts "ERROR: One of the Environments is not unique to organization" exit end end # Create life cycle environments environments.each do |environment| puts "Creating environment: \t#{environment}" prior_env_id = post_json("#{katello_url}/organizations/#{org_id}/environments", JSON.generate({"name" => environment, "organization_id" => org_id, "prior_id" => prior_env_id}))["id"] end 5.2.2. Using apipie bindings with Ruby Apipie bindings are the Ruby bindings for apipie documented API calls. They fetch and cache the API definition from Satellite and then generate API calls as needed. #!/usr/bin/ruby require 'apipie-bindings' org_name = "MyOrg" environments = [ "Development", "Testing", "Production" ] # Create an instance of apipie bindings @api = ApipieBindings::API.new({ :uri => 'https://satellite.example.com/', :username => 'admin', :password => 'changeme', :api_version => 2 }) # Performs an API call with default options def call_api(resource_name, action_name, params = {}) http_headers = {} apipie_options = { :skip_validation => true } @api.resource(resource_name).call(action_name, params, http_headers, apipie_options) end # Creates a hash with IDs mapping to names for an array of records def id_name_map(records) records.inject({}) do |map, record| map.update(record['id'] => record['name']) end end # Get list of existing organizations orgs = call_api(:organizations, :index) org_list = id_name_map(orgs['results']) if !org_list.has_value?(org_name) # If our organization is not found, create it puts "Creating organization: \t#{org_name}" org_id = call_api(:organizations, :create, {'organization' => { :name => org_name }})['id'] else # Our organization exists, so let's grab it org_id = org_list.key(org_name) puts "Organization \"#{org_name}\" exists" end # Get list of organization's life cycle environments envs = call_api(:lifecycle_environments, :index, {'organization_id' => org_id}) env_list = id_name_map(envs['results']) prior_env_id = env_list.key("Library") # Exit the script if at least one life cycle environment already exists environments.each do |e| if env_list.has_value?(e) puts "ERROR: One of the Environments is not unique to organization" exit end end # Create life cycle environments environments.each do |environment| puts "Creating environment: \t#{environment}" prior_env_id = call_api(:lifecycle_environments, :create, {"name" => environment, "organization_id" => org_id, "prior_id" => prior_env_id })['id'] end 5.3. Calling the API in Python You can use Python with the Satellite API to perform various tasks. Important These are example scripts and commands. Ensure you review these scripts carefully before use, and replace any variables, user names, passwords, and other information to suit your own deployment. Example scripts in this section do not use SSL verification for interacting with the REST API. 5.3.1. Creating objects by using Python This script connects to the Red Hat Satellite 6 API and creates an organization, and then creates three environments in the organization. If the organization already exists, the script uses that organization. If any of the environments already exist in the organization, the script raises an error and quits. Python 3 example #!/usr/bin/python3 import json import sys try: import requests except ImportError: print("Please install the python-requests module.") sys.exit(-1) # URL to your Satellite Server URL = "https://satellite.example.com" FOREMAN_API = f"{URL}/api/" KATELLO_API = f"{URL}/katello/api/" POST_HEADERS = {'content-type': 'application/json'} # Default credentials to login to Satellite 6 USERNAME = "admin" PASSWORD = "changeme" # Ignore SSL for now SSL_VERIFY = False # Name of the organization to be either created or used ORG_NAME = "MyOrg" # Name for life cycle environments to be either created or used ENVIRONMENTS = ["Development", "Testing", "Production"] def get_json(location): """ Performs a GET by using the passed URL location """ r = requests.get(location, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def post_json(location, json_data): """ Performs a POST and passes the data to the URL location """ result = requests.post( location, data=json_data, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY, headers=POST_HEADERS ) return result.json() def main(): """ Main routine that creates or re-uses an organization and life cycle environments. If life cycle environments already exist, exit out. """ # Check if our organization already exists org = get_json(f"{FOREMAN_API}/organizations/{ORG_NAME}") # If our organization is not found, create it if org.get('error', None): org_id = post_json( f"{FOREMAN_API}/organizations/", json.dumps({"name": ORG_NAME}) )["id"] print("Creating organization:\t" + ORG_NAME) else: # Our organization exists, so let's grab it org_id = org['id'] print(f"Organization '{ORG_NAME}' exists.") # Now, let's fetch all available life cycle environments for this org... envs = get_json( f"{KATELLO_API}/organizations/{org_id}/environments/" ) # ...and add them to a dictionary, with respective 'Prior' environment prior_env_id = 0 env_list = {} for env in envs['results']: env_list[env['id']] = env['name'] prior_env_id = env['id'] if env['name'] == "Library" else prior_env_id # Exit the script if at least one life cycle environment already exists if all(environment in env_list.values() for environment in ENVIRONMENTS): print("ERROR: One of the Environments is not unique to organization") sys.exit(-1) # Create life cycle environments for environment in ENVIRONMENTS: new_env_id = post_json( f"{KATELLO_API}/organizations/{org_id}/environments/", json.dumps({ "name": environment, "organization_id": org_id, "prior": prior_env_id }) )["id"] print("Creating environment:\t" + environment) prior_env_id = new_env_id if __name__ == "__main__": main() 5.3.2. Requesting information from the API by using Python This is an example script that uses Python for various API requests. Python 3 example #!/usr/bin/env python3 import json import sys try: import requests except ImportError: print("Please install the python-requests module.") sys.exit(-1) HOSTNAME = "satellite.example.com" # URL for the API to your Satellite Server FOREMAN_API = f"https://{HOSTNAME}/api/" KATELLO_API = f"https://{HOSTNAME}/katello/api/v2/" POST_HEADERS = {'content-type': 'application/json'} # Default credentials to login to Satellite 6 USERNAME = "admin" PASSWORD = "password" # Ignore SSL for now SSL_VERIFY = False #SSL_VERIFY = "./path/to/CA-certificate.crt" # Put the path to your CA certificate here to allow SSL_VERIFY def get_json(url): # Performs a GET by using the passed URL location r = requests.get(url, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def get_results(url): jsn = get_json(url) if jsn.get('error'): print("Error: " + jsn['error']['message']) else: if jsn.get('results'): return jsn['results'] elif 'results' not in jsn: return jsn else: print("No results found") return None def display_all_results(url): results = get_results(url) if results: print(json.dumps(results, indent=4, sort_keys=True)) def display_info_for_hosts(url): hosts = get_results(url) if hosts: print(f"{'ID':10}{'Name':40}{'IP':30}{'Operating System':30}") for host in hosts: print(f"{str(host['id']):10}{host['name']:40}{str(host['ip']):30}{str(host['operatingsystem_name']):30}") def display_info_for_subs(url): subs = get_results(url) if subs: print(f"{'ID':10}{'Name':90}{'Start Date':30}") for sub in subs: print(f"{str(sub['id']):10}{sub['name']:90}{str(sub['start_date']):30}") def main(): host = HOSTNAME print(f"Displaying all info for host {host} ...") display_all_results(FOREMAN_API + 'hosts/' + host) print(f"Displaying all facts for host {host} ...") display_all_results(FOREMAN_API + f'hosts/{host}/facts') host_pattern = 'example' print(f"Displaying basic info for hosts matching pattern '{host_pattern}'...") display_info_for_hosts(FOREMAN_API + 'hosts?per_page=1&search=name~' + host_pattern) print(f"Displaying basic info for subscriptions") display_info_for_subs(KATELLO_API + 'subscriptions') environment = 'production' print(f"Displaying basic info for hosts in environment {environment}...") display_info_for_hosts(FOREMAN_API + 'hosts?search=environment=' + environment) if __name__ == "__main__": main() | [
"--header \"Accept:application/json\" --header \"Content-Type:application/json\"",
"--data {\"id\":44, \"smart_class_parameter\":{\"override\":\"true\", \"parameter_type\":\"json\", \"default_value\":\"{\\\"GRUB_CMDLINE_LINUX\\\": {\\\"audit\\\":\\\"1\\\",\\\"crashkernel\\\":\\\"true\\\"}}\"}}",
"--data @ file .json",
"json_verify < file .json",
"curl --request GET --user My_User_Name : My_Password https:// satellite.example.com /api/users | python3 -m json.tool",
"{ \"page\": 1, \"per_page\": 20, \"results\": [ { \"admin\": false, \"auth_source_id\": 1, \"auth_source_name\": \"Internal\", \"created_at\": \"2018-09-21 08:59:22 UTC\", \"default_location\": null, \"default_organization\": null, \"description\": \"\", \"effective_admin\": false, \"firstname\": \"\", \"id\": 5, \"last_login_on\": \"2018-09-21 09:03:25 UTC\", \"lastname\": \"\", \"locale\": null, \"locations\": [], \"login\": \"test\", \"mail\": \"[email protected]\", \"organizations\": [ { \"id\": 1, \"name\": \"Default Organization\" } ], \"ssh_keys\": [], \"timezone\": null, \"updated_at\": \"2018-09-21 09:04:45 UTC\" }, { \"admin\": true, \"auth_source_id\": 1, \"auth_source_name\": \"Internal\", \"created_at\": \"2018-09-20 07:09:41 UTC\", \"default_location\": null, \"default_organization\": { \"description\": null, \"id\": 1, \"name\": \"Default Organization\", \"title\": \"Default Organization\" }, \"description\": \"\", \"effective_admin\": true, \"firstname\": \"Admin\", \"id\": 4, \"last_login_on\": \"2018-12-07 07:31:09 UTC\", \"lastname\": \"User\", \"locale\": null, \"locations\": [ { \"id\": 2, \"name\": \"Default Location\" } ], \"login\": \"admin\", \"mail\": \"[email protected]\", \"organizations\": [ { \"id\": 1, \"name\": \"Default Organization\" } ], \"ssh_keys\": [], \"timezone\": null, \"updated_at\": \"2018-11-14 08:19:46 UTC\" } ], \"search\": null, \"sort\": { \"by\": null, \"order\": null }, \"subtotal\": 2, \"total\": 2 }",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request POST --user My_User_Name : My_Password --data \"{\\\"firstname\\\":\\\" Test Name \\\",\\\"mail\\\":\\\" [email protected] \\\",\\\"login\\\":\\\" test_user \\\",\\\"password\\\":\\\" password123 \\\",\\\"auth_source_id\\\": 1 }\" https:// satellite.example.com /api/users | python3 -m json.tool",
"curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --user My_User_Name : My_Password --data \"{\\\"firstname\\\":\\\" New Test Name \\\",\\\"mail\\\":\\\" [email protected] \\\",\\\"login\\\":\\\" new_test_user \\\",\\\"password\\\":\\\" password123 \\\",\\\"auth_source_id\\\": 1 }\" https:// satellite.example.com /api/users/ 8 | python3 -m json.tool",
"#!/usr/bin/ruby require 'rest-client' require 'json' url = 'https://satellite.example.com/api/v2/' katello_url = \"#{url}/katello/api/v2/\" USDusername = 'admin' USDpassword = 'changeme' org_name = \"MyOrg\" environments = [ \"Development\", \"Testing\", \"Production\" ] Performs a GET by using the passed URL location def get_json(location) response = RestClient::Request.new( :method => :get, :url => location, :user => USDusername, :password => USDpassword, :headers => { :accept => :json, :content_type => :json } ).execute JSON.parse(response.to_str) end Performs a POST and passes the data to the URL location def post_json(location, json_data) response = RestClient::Request.new( :method => :post, :url => location, :user => USDusername, :password => USDpassword, :headers => { :accept => :json, :content_type => :json}, :payload => json_data ).execute JSON.parse(response.to_str) end Creates a hash with ids mapping to names for an array of records def id_name_map(records) records.inject({}) do |map, record| map.update(record['id'] => record['name']) end end Get list of existing organizations orgs = get_json(\"#{katello_url}/organizations\") org_list = id_name_map(orgs['results']) if !org_list.has_value?(org_name) # If our organization is not found, create it puts \"Creating organization: \\t#{org_name}\" org_id = post_json(\"#{katello_url}/organizations\", JSON.generate({\"name\"=> org_name}))[\"id\"] else # Our organization exists, so let's grab it org_id = org_list.key(org_name) puts \"Organization \\\"#{org_name}\\\" exists\" end Get list of organization's lifecycle environments envs = get_json(\"#{katello_url}/organizations/#{org_id}/environments\") env_list = id_name_map(envs['results']) prior_env_id = env_list.key(\"Library\") Exit the script if at least one life cycle environment already exists environments.each do |e| if env_list.has_value?(e) puts \"ERROR: One of the Environments is not unique to organization\" exit end end # Create life cycle environments environments.each do |environment| puts \"Creating environment: \\t#{environment}\" prior_env_id = post_json(\"#{katello_url}/organizations/#{org_id}/environments\", JSON.generate({\"name\" => environment, \"organization_id\" => org_id, \"prior_id\" => prior_env_id}))[\"id\"] end",
"#!/usr/bin/ruby require 'apipie-bindings' org_name = \"MyOrg\" environments = [ \"Development\", \"Testing\", \"Production\" ] Create an instance of apipie bindings @api = ApipieBindings::API.new({ :uri => 'https://satellite.example.com/', :username => 'admin', :password => 'changeme', :api_version => 2 }) Performs an API call with default options def call_api(resource_name, action_name, params = {}) http_headers = {} apipie_options = { :skip_validation => true } @api.resource(resource_name).call(action_name, params, http_headers, apipie_options) end Creates a hash with IDs mapping to names for an array of records def id_name_map(records) records.inject({}) do |map, record| map.update(record['id'] => record['name']) end end Get list of existing organizations orgs = call_api(:organizations, :index) org_list = id_name_map(orgs['results']) if !org_list.has_value?(org_name) # If our organization is not found, create it puts \"Creating organization: \\t#{org_name}\" org_id = call_api(:organizations, :create, {'organization' => { :name => org_name }})['id'] else # Our organization exists, so let's grab it org_id = org_list.key(org_name) puts \"Organization \\\"#{org_name}\\\" exists\" end Get list of organization's life cycle environments envs = call_api(:lifecycle_environments, :index, {'organization_id' => org_id}) env_list = id_name_map(envs['results']) prior_env_id = env_list.key(\"Library\") Exit the script if at least one life cycle environment already exists environments.each do |e| if env_list.has_value?(e) puts \"ERROR: One of the Environments is not unique to organization\" exit end end # Create life cycle environments environments.each do |environment| puts \"Creating environment: \\t#{environment}\" prior_env_id = call_api(:lifecycle_environments, :create, {\"name\" => environment, \"organization_id\" => org_id, \"prior_id\" => prior_env_id })['id'] end",
"#!/usr/bin/python3 import json import sys try: import requests except ImportError: print(\"Please install the python-requests module.\") sys.exit(-1) URL to your Satellite Server URL = \"https://satellite.example.com\" FOREMAN_API = f\"{URL}/api/\" KATELLO_API = f\"{URL}/katello/api/\" POST_HEADERS = {'content-type': 'application/json'} Default credentials to login to Satellite 6 USERNAME = \"admin\" PASSWORD = \"changeme\" Ignore SSL for now SSL_VERIFY = False Name of the organization to be either created or used ORG_NAME = \"MyOrg\" Name for life cycle environments to be either created or used ENVIRONMENTS = [\"Development\", \"Testing\", \"Production\"] def get_json(location): \"\"\" Performs a GET by using the passed URL location \"\"\" r = requests.get(location, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def post_json(location, json_data): \"\"\" Performs a POST and passes the data to the URL location \"\"\" result = requests.post( location, data=json_data, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY, headers=POST_HEADERS ) return result.json() def main(): \"\"\" Main routine that creates or re-uses an organization and life cycle environments. If life cycle environments already exist, exit out. \"\"\" # Check if our organization already exists org = get_json(f\"{FOREMAN_API}/organizations/{ORG_NAME}\") # If our organization is not found, create it if org.get('error', None): org_id = post_json( f\"{FOREMAN_API}/organizations/\", json.dumps({\"name\": ORG_NAME}) )[\"id\"] print(\"Creating organization:\\t\" + ORG_NAME) else: # Our organization exists, so let's grab it org_id = org['id'] print(f\"Organization '{ORG_NAME}' exists.\") # Now, let's fetch all available life cycle environments for this org envs = get_json( f\"{KATELLO_API}/organizations/{org_id}/environments/\" ) # ...and add them to a dictionary, with respective 'Prior' environment prior_env_id = 0 env_list = {} for env in envs['results']: env_list[env['id']] = env['name'] prior_env_id = env['id'] if env['name'] == \"Library\" else prior_env_id # Exit the script if at least one life cycle environment already exists if all(environment in env_list.values() for environment in ENVIRONMENTS): print(\"ERROR: One of the Environments is not unique to organization\") sys.exit(-1) # Create life cycle environments for environment in ENVIRONMENTS: new_env_id = post_json( f\"{KATELLO_API}/organizations/{org_id}/environments/\", json.dumps({ \"name\": environment, \"organization_id\": org_id, \"prior\": prior_env_id }) )[\"id\"] print(\"Creating environment:\\t\" + environment) prior_env_id = new_env_id if __name__ == \"__main__\": main()",
"#!/usr/bin/env python3 import json import sys try: import requests except ImportError: print(\"Please install the python-requests module.\") sys.exit(-1) HOSTNAME = \"satellite.example.com\" URL for the API to your Satellite Server FOREMAN_API = f\"https://{HOSTNAME}/api/\" KATELLO_API = f\"https://{HOSTNAME}/katello/api/v2/\" POST_HEADERS = {'content-type': 'application/json'} Default credentials to login to Satellite 6 USERNAME = \"admin\" PASSWORD = \"password\" Ignore SSL for now SSL_VERIFY = False #SSL_VERIFY = \"./path/to/CA-certificate.crt\" # Put the path to your CA certificate here to allow SSL_VERIFY def get_json(url): # Performs a GET by using the passed URL location r = requests.get(url, auth=(USERNAME, PASSWORD), verify=SSL_VERIFY) return r.json() def get_results(url): jsn = get_json(url) if jsn.get('error'): print(\"Error: \" + jsn['error']['message']) else: if jsn.get('results'): return jsn['results'] elif 'results' not in jsn: return jsn else: print(\"No results found\") return None def display_all_results(url): results = get_results(url) if results: print(json.dumps(results, indent=4, sort_keys=True)) def display_info_for_hosts(url): hosts = get_results(url) if hosts: print(f\"{'ID':10}{'Name':40}{'IP':30}{'Operating System':30}\") for host in hosts: print(f\"{str(host['id']):10}{host['name']:40}{str(host['ip']):30}{str(host['operatingsystem_name']):30}\") def display_info_for_subs(url): subs = get_results(url) if subs: print(f\"{'ID':10}{'Name':90}{'Start Date':30}\") for sub in subs: print(f\"{str(sub['id']):10}{sub['name']:90}{str(sub['start_date']):30}\") def main(): host = HOSTNAME print(f\"Displaying all info for host {host} ...\") display_all_results(FOREMAN_API + 'hosts/' + host) print(f\"Displaying all facts for host {host} ...\") display_all_results(FOREMAN_API + f'hosts/{host}/facts') host_pattern = 'example' print(f\"Displaying basic info for hosts matching pattern '{host_pattern}'...\") display_info_for_hosts(FOREMAN_API + 'hosts?per_page=1&search=name~' + host_pattern) print(f\"Displaying basic info for subscriptions\") display_info_for_subs(KATELLO_API + 'subscriptions') environment = 'production' print(f\"Displaying basic info for hosts in environment {environment}...\") display_info_for_hosts(FOREMAN_API + 'hosts?search=environment=' + environment) if __name__ == \"__main__\": main()"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/using_the_satellite_rest_api/api-requests-in-various-languages |
14.7.10. Retrieving a Device's Configuration Settings | 14.7.10. Retrieving a Device's Configuration Settings The virsh nodedev-dumpxml [device] command dumps the XML configuration file for the given node <device> . The XML configuration includes information such as: the device name, which bus owns for example the device, the vendor, and product ID. The argument device can either be a device name or a WWN pair in WWNN | WWPN format (HBA only). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-numa_node_management-dump_a_device |
Chapter 8. Conditional policies in Red Hat Developer Hub | Chapter 8. Conditional policies in Red Hat Developer Hub The permission framework in Red Hat Developer Hub provides conditions, supported by the RBAC backend plugin ( backstage-plugin-rbac-backend ). The conditions work as content filters for the Developer Hub resources that are provided by the RBAC backend plugin. The RBAC backend API stores conditions assigned to roles in the database. When you request to access the frontend resources, the RBAC backend API searches for the corresponding conditions and delegates them to the appropriate plugin using its plugin ID. If you are assigned to multiple roles with different conditions, then the RBAC backend merges the conditions using the anyOf criteria. Conditional criteria A condition in Developer Hub is a simple condition with a rule and parameters. However, a condition can also contain a parameter or an array of parameters combined by conditional criteria. The supported conditional criteria includes: allOf : Ensures that all conditions within the array must be true for the combined condition to be satisfied. anyOf : Ensures that at least one of the conditions within the array must be true for the combined condition to be satisfied. not : Ensures that the condition within it must not be true for the combined condition to be satisfied. Conditional object The plugin specifies the parameters supported for conditions. You can access the conditional object schema from the RBAC API endpoint to understand how to construct a conditional JSON object, which is then used by the RBAC backend plugin API. A conditional object contains the following parameters: Table 8.1. Conditional object parameters Parameter Type Description result String Always has the value CONDITIONAL roleEntityRef String String entity reference to the RBAC role, such as role:default/dev pluginId String Corresponding plugin ID, such as catalog permissionMapping String array Array permission actions, such as ['read', 'update', 'delete'] resourceType String Resource type provided by the plugin, such as catalog-entity conditions JSON Condition JSON with parameters or array parameters joined by criteria Conditional policy aliases The RBAC backend plugin ( backstage-plugin-rbac-backend ) supports the use of aliases in conditional policy rule parameters. The conditional policy aliases are dynamically replaced with the corresponding values during policy evaluation. Each alias in conditional policy is prefixed with a USD sign indicating its special function. The supported conditional aliases include: USDcurrentUser : This alias is replaced with the user entity reference for the user who requests access to the resource. For example, if user Tom from the default namespace requests access, USDcurrentUser becomes user:default/tom . Example conditional policy object with USDcurrentUser alias { "result": "CONDITIONAL", "roleEntityRef": "role:default/developer", "pluginId": "catalog", "resourceType": "catalog-entity", "permissionMapping": ["delete"], "conditions": { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["USDcurrentUser"] } } } USDownerRefs : This alias is replaced with ownership references, usually as an array that includes the user entity reference and the user's parent group entity reference. For example, for user Tom from team-a, USDownerRefs becomes ['user:default/tom', 'group:default/team-a'] . Example conditional policy object with USDownerRefs alias { "result": "CONDITIONAL", "roleEntityRef": "role:default/developer", "pluginId": "catalog", "resourceType": "catalog-entity", "permissionMapping": ["delete"], "conditions": { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["USDownerRefs"] } } } 8.1. Conditional policies reference You can access API endpoints for conditional policies in Red Hat Developer Hub. For example, to retrieve the available conditional rules, which can help you define these policies, you can access the GET [api/plugins/condition-rules] endpoint. The api/plugins/condition-rules returns the condition parameters schemas, for example: [ { "pluginId": "catalog", "rules": [ { "name": "HAS_ANNOTATION", "description": "Allow entities with the specified annotation", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "annotation": { "type": "string", "description": "Name of the annotation to match on" }, "value": { "type": "string", "description": "Value of the annotation to match on" } }, "required": [ "annotation" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } }, { "name": "HAS_LABEL", "description": "Allow entities with the specified label", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "label": { "type": "string", "description": "Name of the label to match on" } }, "required": [ "label" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } }, { "name": "HAS_METADATA", "description": "Allow entities with the specified metadata subfield", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "key": { "type": "string", "description": "Property within the entities metadata to match on" }, "value": { "type": "string", "description": "Value of the given property to match on" } }, "required": [ "key" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } }, { "name": "HAS_SPEC", "description": "Allow entities with the specified spec subfield", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "key": { "type": "string", "description": "Property within the entities spec to match on" }, "value": { "type": "string", "description": "Value of the given property to match on" } }, "required": [ "key" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } }, { "name": "IS_ENTITY_KIND", "description": "Allow entities matching a specified kind", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "kinds": { "type": "array", "items": { "type": "string" }, "description": "List of kinds to match at least one of" } }, "required": [ "kinds" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } }, { "name": "IS_ENTITY_OWNER", "description": "Allow entities owned by a specified claim", "resourceType": "catalog-entity", "paramsSchema": { "type": "object", "properties": { "claims": { "type": "array", "items": { "type": "string" }, "description": "List of claims to match at least one on within ownedBy" } }, "required": [ "claims" ], "additionalProperties": false, "USDschema": "http://json-schema.org/draft-07/schema#" } } ] } ... <another plugin condition parameter schemas> ] The RBAC backend API constructs a condition JSON object based on the condition schema. 8.1.1. Examples of conditional policies In Red Hat Developer Hub, you can define conditional policies with or without criteria. You can use the following examples to define the conditions based on your use case: A condition without criteria Consider a condition without criteria displaying catalogs only if user is a member of the owner group. To add this condition, you can use the catalog plugin schema IS_ENTITY_OWNER as follows: Example condition without criteria { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["group:default/team-a"] } } In the example, the only conditional parameter used is claims , which contains a list of user or group entity references. You can apply the example condition to the RBAC REST API by adding additional parameters as follows: { "result": "CONDITIONAL", "roleEntityRef": "role:default/test", "pluginId": "catalog", "resourceType": "catalog-entity", "permissionMapping": ["read"], "conditions": { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["group:default/team-a"] } } } A condition with criteria Consider a condition with criteria, which displays catalogs only if user is a member of owner group OR displays list of all catalog user groups. To add the criteria, you can add another rule as IS_ENTITY_KIND in the condition as follows: Example condition with criteria { "anyOf": [ { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["group:default/team-a"] } }, { "rule": "IS_ENTITY_KIND", "resourceType": "catalog-entity", "params": { "kinds": ["Group"] } } ] } Note Running conditions in parallel during creation is not supported. Therefore, consider defining nested conditional policies based on the available criteria. Example of nested conditions { "anyOf": [ { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["group:default/team-a"] } }, { "rule": "IS_ENTITY_KIND", "resourceType": "catalog-entity", "params": { "kinds": ["Group"] } } ], "not": { "rule": "IS_ENTITY_KIND", "resourceType": "catalog-entity", "params": { "kinds": ["Api"] } } } You can apply the example condition to the RBAC REST API by adding additional parameters as follows: { "result": "CONDITIONAL", "roleEntityRef": "role:default/test", "pluginId": "catalog", "resourceType": "catalog-entity", "permissionMapping": ["read"], "conditions": { "anyOf": [ { "rule": "IS_ENTITY_OWNER", "resourceType": "catalog-entity", "params": { "claims": ["group:default/team-a"] } }, { "rule": "IS_ENTITY_KIND", "resourceType": "catalog-entity", "params": { "kinds": ["Group"] } } ] } } The following examples can be used with Developer Hub plugins. These examples can help you determine how to define conditional policies: Conditional policy defined for Keycloak plugin { "result": "CONDITIONAL", "roleEntityRef": "role:default/developer", "pluginId": "catalog", "resourceType": "catalog-entity", "permissionMapping": ["update", "delete"], "conditions": { "not": { "rule": "HAS_ANNOTATION", "resourceType": "catalog-entity", "params": { "annotation": "keycloak.org/realm", "value": "<YOUR_REALM>" } } } } The example of Keycloak plugin prevents users in the role:default/developer from updating or deleting users that are ingested into the catalog from the Keycloak plugin. Note In the example, the annotation keycloak.org/realm requires the value of <YOUR_REALM> . Conditional policy defined for Quay plugin { "result": "CONDITIONAL", "roleEntityRef": "role:default/developer", "pluginId": "scaffolder", "resourceType": "scaffolder-action", "permissionMapping": ["use"], "conditions": { "not": { "rule": "HAS_ACTION_ID", "resourceType": "scaffolder-action", "params": { "actionId": "quay:create-repository" } } } } The example of Quay plugin prevents the role role:default/developer from using the Quay scaffolder action. Note that permissionMapping contains use , signifying that scaffolder-action resource type permission does not have a permission policy. For more information about permissions in Red Hat Developer Hub, see Chapter 7, Permission policies reference . | [
"{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/developer\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"delete\"], \"conditions\": { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"USDcurrentUser\"] } } }",
"{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/developer\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"delete\"], \"conditions\": { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"USDownerRefs\"] } } }",
"[ { \"pluginId\": \"catalog\", \"rules\": [ { \"name\": \"HAS_ANNOTATION\", \"description\": \"Allow entities with the specified annotation\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"annotation\": { \"type\": \"string\", \"description\": \"Name of the annotation to match on\" }, \"value\": { \"type\": \"string\", \"description\": \"Value of the annotation to match on\" } }, \"required\": [ \"annotation\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"HAS_LABEL\", \"description\": \"Allow entities with the specified label\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"label\": { \"type\": \"string\", \"description\": \"Name of the label to match on\" } }, \"required\": [ \"label\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"HAS_METADATA\", \"description\": \"Allow entities with the specified metadata subfield\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"key\": { \"type\": \"string\", \"description\": \"Property within the entities metadata to match on\" }, \"value\": { \"type\": \"string\", \"description\": \"Value of the given property to match on\" } }, \"required\": [ \"key\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"HAS_SPEC\", \"description\": \"Allow entities with the specified spec subfield\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"key\": { \"type\": \"string\", \"description\": \"Property within the entities spec to match on\" }, \"value\": { \"type\": \"string\", \"description\": \"Value of the given property to match on\" } }, \"required\": [ \"key\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"IS_ENTITY_KIND\", \"description\": \"Allow entities matching a specified kind\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"kinds\": { \"type\": \"array\", \"items\": { \"type\": \"string\" }, \"description\": \"List of kinds to match at least one of\" } }, \"required\": [ \"kinds\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } }, { \"name\": \"IS_ENTITY_OWNER\", \"description\": \"Allow entities owned by a specified claim\", \"resourceType\": \"catalog-entity\", \"paramsSchema\": { \"type\": \"object\", \"properties\": { \"claims\": { \"type\": \"array\", \"items\": { \"type\": \"string\" }, \"description\": \"List of claims to match at least one on within ownedBy\" } }, \"required\": [ \"claims\" ], \"additionalProperties\": false, \"USDschema\": \"http://json-schema.org/draft-07/schema#\" } } ] } ... <another plugin condition parameter schemas> ]",
"{ \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }",
"{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/test\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"read\"], \"conditions\": { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } } }",
"{ \"anyOf\": [ { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }, { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Group\"] } } ] }",
"{ \"anyOf\": [ { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }, { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Group\"] } } ], \"not\": { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Api\"] } } }",
"{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/test\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"read\"], \"conditions\": { \"anyOf\": [ { \"rule\": \"IS_ENTITY_OWNER\", \"resourceType\": \"catalog-entity\", \"params\": { \"claims\": [\"group:default/team-a\"] } }, { \"rule\": \"IS_ENTITY_KIND\", \"resourceType\": \"catalog-entity\", \"params\": { \"kinds\": [\"Group\"] } } ] } }",
"{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/developer\", \"pluginId\": \"catalog\", \"resourceType\": \"catalog-entity\", \"permissionMapping\": [\"update\", \"delete\"], \"conditions\": { \"not\": { \"rule\": \"HAS_ANNOTATION\", \"resourceType\": \"catalog-entity\", \"params\": { \"annotation\": \"keycloak.org/realm\", \"value\": \"<YOUR_REALM>\" } } } }",
"{ \"result\": \"CONDITIONAL\", \"roleEntityRef\": \"role:default/developer\", \"pluginId\": \"scaffolder\", \"resourceType\": \"scaffolder-action\", \"permissionMapping\": [\"use\"], \"conditions\": { \"not\": { \"rule\": \"HAS_ACTION_ID\", \"resourceType\": \"scaffolder-action\", \"params\": { \"actionId\": \"quay:create-repository\" } } } }"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/authorization/con-rbac-conditional-policies-rhdh_title-authorization |
Chapter 3. Updating the Undercloud | Chapter 3. Updating the Undercloud This process updates the undercloud and its overcloud images to the latest Red Hat OpenStack Platform 16.0 version. 3.1. Performing a minor update of a containerized undercloud The director provides commands to update the packages on the undercloud node. This allows you to perform a minor update within the current version of your OpenStack Platform environment. Procedure Log into the director as the stack user. Run dnf to upgrade the director's main packages: The director uses the openstack undercloud upgrade command to update the undercloud environment. Run the command: Wait until the undercloud upgrade process completes. Reboot the undercloud to update the operating system's kernel and other system packages: Wait until the node boots. 3.2. Updating the overcloud images You need to replace your current overcloud images with new versions. The new images ensure the director can introspect and provision your nodes using the latest version of OpenStack Platform software. Prerequisites You have updated the undercloud to the latest version. Procedure Source the stackrc file: Remove any existing images from the images directory on the stack user's home ( /home/stack/images ): Extract the archives: Import the latest images into the director: Configure your nodes to use the new images: Verify the existence of the new images: Important When deploying overcloud nodes, ensure the overcloud image version corresponds to the respective heat template version. For example, only use the OpenStack Platform 16 images with the OpenStack Platform 16 heat templates. Important The new overcloud-full image replaces the old overcloud-full image. If you made changes to the old image, you must repeat the changes in the new image, especially if you want to deploy new nodes in the future. 3.3. Undercloud Post-Upgrade Notes If using a local set of core templates in your stack users home directory, ensure you update the templates using the recommended workflow in Using Customized Core Heat Templates in the Advanced Overcloud Customization guide. You must update the local copy before upgrading the overcloud. 3.4. Steps The undercloud upgrade is complete. You can now update the overcloud. | [
"sudo dnf update -y python3-tripleoclient* openstack-tripleo-common openstack-tripleo-heat-templates tripleo-ansible",
"openstack undercloud upgrade",
"sudo reboot",
"source ~/stackrc",
"rm -rf ~/images/*",
"cd ~/images for i in /usr/share/rhosp-director-images/overcloud-full-latest-16.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-16.0.tar; do tar -xvf USDi; done cd ~",
"openstack overcloud image upload --update-existing --image-path /home/stack/images/",
"openstack overcloud node configure USD(openstack baremetal node list -c UUID -f value)",
"openstack image list ls -l /var/lib/ironic/httpboot"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/keeping_red_hat_openstack_platform_updated/assembly-updating_the_undercloud |
Chapter 4. Networking Operators overview | Chapter 4. Networking Operators overview OpenShift Container Platform supports multiple types of networking Operators. You can manage the cluster networking using these networking Operators. 4.1. Cluster Network Operator The Cluster Network Operator (CNO) deploys and manages the cluster network components in an OpenShift Container Platform cluster. This includes deployment of the Container Network Interface (CNI) network plugin selected for the cluster during installation. For more information, see Cluster Network Operator in OpenShift Container Platform . 4.2. DNS Operator The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods. This enables DNS-based Kubernetes Service discovery in OpenShift Container Platform. For more information, see DNS Operator in OpenShift Container Platform . 4.3. Ingress Operator When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to external clients. The Ingress Operator implements the Ingress Controller API and is responsible for enabling external access to OpenShift Container Platform cluster services. For more information, see Ingress Operator in OpenShift Container Platform . 4.4. External DNS Operator The External DNS Operator deploys and manages ExternalDNS to provide the name resolution for services and routes from the external DNS provider to OpenShift Container Platform. For more information, see Understanding the External DNS Operator . 4.5. Ingress Node Firewall Operator The Ingress Node Firewall Operator uses an extended Berkley Packet Filter (eBPF) and eXpress Data Path (XDP) plugin to process node firewall rules, update statistics and generate events for dropped traffic. The operator manages ingress node firewall resources, verifies firewall configuration, does not allow incorrectly configured rules that can prevent cluster access, and loads ingress node firewall XDP programs to the selected interfaces in the rule's object(s). For more information, see Understanding the Ingress Node Firewall Operator 4.6. Network Observability Operator The Network Observability Operator is an optional Operator that allows cluster administrators to observe the network traffic for OpenShift Container Platform clusters. The Network Observability Operator uses the eBPF technology to create network flows. The network flows are then enriched with OpenShift Container Platform information and stored in Loki. You can view and analyze the stored network flows information in the OpenShift Container Platform console for further insight and troubleshooting. For more information, see About Network Observability Operator . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/networking-operators-overview |
Deploying and managing OpenShift Data Foundation using Red Hat OpenStack Platform | Deploying and managing OpenShift Data Foundation using Red Hat OpenStack Platform Red Hat OpenShift Data Foundation 4.14 Instructions on deploying and managing OpenShift Data Foundation on Red Hat OpenStack Platform Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install and manage Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP). Important Deploying and managing OpenShift Data Foundation on Red Hat OpenStack Platform is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/index |
Appendix B. Revision History | Appendix B. Revision History Revision History Revision 4.2-0 Fri Jun 09 2023 Lenka Spackova Improved Section 2.10.1, "Example of the Converted Spec File" and Section 2.10.2, "Converting Tags and Macro Definitions" . Revision 4.1-9 Tue May 23 2023 Lenka Spackova Updated references with the release of Red Hat Developer Toolset 12.1. Revision 4.1-8 Mon Nov 15 2021 Lenka Spackova Red Hat Software Collections 3.8 release of the Packaging Guide. Revision 4.1-7 Mon Oct 11 2021 Lenka Spackova Red Hat Software Collections 3.8 Beta release of the Packaging Guide. Revision 4.1-6 Thu Jun 03 2021 Lenka Spackova Red Hat Software Collections 3.7 release of the Packaging Guide. Revision 4.1-5 Mon May 03 2021 Lenka Spackova Red Hat Software Collections 3.7 Beta release of the Packaging Guide. Revision 4.1-4 Tue Dec 01 2020 Lenka Spackova Red Hat Software Collections 3.6 release of the Packaging Guide. Revision 4.1-3 Tue Oct 29 2020 Lenka Spackova Red Hat Software Collections 3.6 Beta release of the Packaging Guide. Revision 4.1-2 Tue May 26 2020 Lenka Spackova Red Hat Software Collections 3.5 release of the Packaging Guide. Revision 4.1-1 Tue Apr 21 2020 Lenka Spackova Red Hat Software Collections 3.5 Beta release of the Packaging Guide. Revision 4.1-0 Tue Dec 10 2019 Lenka Spackova Red Hat Software Collections 3.4 release of the Packaging Guide. Revision 4.0-9 Mon Oct 28 2019 Lenka Spackova Red Hat Software Collections 3.4 Beta release of the Packaging Guide. Revision 4.0-8 Tue Jun 04 2019 Petr Kovar Red Hat Software Collections 3.3 release of the Packaging Guide. Revision 4.0-7 Wed Apr 10 2019 Petr Kovar Red Hat Software Collections 3.3 Beta release of the Packaging Guide. Revision 4.0-6 Thu Nov 01 2018 Petr Kovar Red Hat Software Collections 3.2 release of the Packaging Guide. Revision 4.0-5 Wed Oct 17 2018 Petr Kovar Red Hat Software Collections 3.2 Beta release of the Packaging Guide. Revision 4.0-4 Thu Apr 12 2018 Petr Kovar Red Hat Software Collections 3.1 release of the Packaging Guide. Revision 4.0-3 Fri Mar 16 2018 Petr Kovar Red Hat Software Collections 3.1 Beta release of the Packaging Guide. Revision 4.0-2 Tue Oct 17 2017 Petr Kovar Red Hat Software Collections 3.0 release of the Packaging Guide. Revision 4.0-1 Thu Aug 31 2017 Petr Kovar Red Hat Software Collections 3.0 Beta release of the Packaging Guide. Revision 3.10-0 Mon Jun 5 2017 Petr Kovar Republished to fix BZ#1458821. Revision 3.9-0 Thu Apr 20 2017 Petr Kovar Red Hat Software Collections 2.4 release of the Packaging Guide. Revision 3.8-0 Wed Apr 05 2017 Petr Kovar Red Hat Software Collections 2.4 Beta release of the Packaging Guide. Revision 3.7-0 Wed Jan 25 2017 Petr Kovar Republished to fix BZ#1263733. Revision 3.6-0 Wed Nov 02 2016 Petr Kovar Red Hat Software Collections 2.3 release of the Packaging Guide. Revision 3.5-0 Wed Oct 12 2016 Petr Kovar Red Hat Software Collections 2.3 Beta release of the Packaging Guide. Revision 3.4-0 Mon May 23 2016 Petr Kovar Red Hat Software Collections 2.2 release of the Packaging Guide. Revision 3.3-0 Tue Apr 26 2016 Petr Kovar Red Hat Software Collections 2.2 Beta release of the Packaging Guide. Revision 3.2-0 Wed Nov 04 2015 Petr Kovar Red Hat Software Collections 2.1 release of the Packaging Guide. Revision 3.1-0 Tue Oct 06 2015 Petr Kovar Red Hat Software Collections 2.1 Beta release of the Packaging Guide. Revision 3.0-2 Tue May 19 2015 Petr Kovar Red Hat Software Collections 2.0 release of the Packaging Guide. Revision 3.0-1 Wed Apr 22 2015 Petr Kovar Red Hat Software Collections 2.0 Beta release of the Packaging Guide. Revision 2.2-4 Fri Nov 21 2014 Petr Kovar Republished to fix BZ#1150573, BZ#1022023, and BZ#1149650. Revision 2.2-2 Thu Oct 30 2014 Petr Kovar Red Hat Software Collections 1.2 release of the Packaging Guide. Revision 2.2-1 Tue Oct 07 2014 Petr Kovar Red Hat Software Collections 1.2 Beta refresh release of the Packaging Guide. Revision 2.2-0 Tue Sep 09 2014 Petr Kovar The Software Collections Guide renamed to Packaging Guide. Red Hat Software Collections 1.2 Beta release of the Packaging Guide. Revision 2.1-29 Wed Jun 04 2014 Petr Kovar Red Hat Software Collections 1.1 release of the Software Collections Guide. Revision 2.1-21 Thu Mar 20 2014 Petr Kovar Red Hat Software Collections 1.1 Beta release of the Software Collections Guide. Revision 2.1-18 Tue Mar 11 2014 Petr Kovar Red Hat Developer Toolset 2.1 release of the Software Collections Guide. Revision 2.1-8 Tue Feb 11 2014 Petr Kovar Red Hat Developer Toolset 2.1 Beta release of the Software Collections Guide. Revision 2.0-12 Tue Sep 10 2013 Petr Kovar Red Hat Developer Toolset 2.0 release of the Software Collections Guide. Revision 2.0-8 Tue Aug 06 2013 Petr Kovar Red Hat Developer Toolset 2.0 Beta-2 release of the Software Collections Guide. Revision 2.0-3 Tue May 28 2013 Petr Kovar Red Hat Developer Toolset 2.0 Beta-1 release of the Software Collections Guide. Revision 1.0-2 Tue Apr 23 2013 Petr Kovar Republished to fix BZ#949000. Revision 1.0-1 Tue Jan 22 2013 Petr Kovar Red Hat Developer Toolset 1.1 release of the Software Collections Guide. Revision 1.0-2 Thu Nov 08 2012 Petr Kovar Red Hat Developer Toolset 1.1 Beta-2 release of the Software Collections Guide. Revision 1.0-1 Wed Oct 10 2012 Petr Kovar Red Hat Developer Toolset 1.1 Beta-1 release of the Software Collections Guide. Revision 1.0-0 Tue Jun 26 2012 Petr Kovar Red Hat Developer Toolset 1.0 release of the Software Collections Guide. Revision 0.0-2 Tue Apr 10 2012 Petr Kovar Red Hat Developer Toolset 1.0 Alpha-2 release of the Software Collections Guide. Revision 0.0-1 Tue Mar 06 2012 Petr Kovar Red Hat Developer Toolset 1.0 Alpha-1 release of the Software Collections Guide. B.1. Acknowledgments The author of this book would like to thank the following people for their valuable contributions: Jindrich Novy, Marcela Maslanova, Bohuslav Kabrda, Honza Horak, Jan Zeleny, Martin Cermak, Jitka Plesnikova, Langdon White, Florian Nadge, Stephen Wadeley, Douglas Silas, Tomas Capek, and Vit Ondruch, among many others. | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/appe-software_collections_guide-revision_history |
Chapter 22. OpenLMI | Chapter 22. OpenLMI The Open Linux Management Infrastructure , commonly abbreviated as OpenLMI , is a common infrastructure for the management of Linux systems. It builds on top of existing tools and serves as an abstraction layer in order to hide much of the complexity of the underlying system from system administrators. OpenLMI is distributed with a set of services that can be accessed locally or remotely and provides multiple language bindings, standard APIs, and standard scripting interfaces that can be used to manage and monitor hardware, operating systems, and system services. 22.1. About OpenLMI OpenLMI is designed to provide a common management interface to production servers running the Red Hat Enterprise Linux system on both physical and virtual machines. It consists of the following three components: System management agents - these agents are installed on a managed system and implement an object model that is presented to a standard object broker. The initial agents implemented in OpenLMI include storage configuration and network configuration, but later work will address additional elements of system management. The system management agents are commonly referred to as Common Information Model providers or CIM providers . A standard object broker - the object broker manages system management agents and provides an interface to them. The standard object broker is also known as a CIM Object Monitor or CIMOM . Client applications and scripts - the client applications and scripts call the system management agents through the standard object broker. The OpenLMI project complements existing management initiatives by providing a low-level interface that can be used by scripts or system management consoles. Interfaces distributed with OpenLMI include C, C++, Python, Java, and an interactive command line client, and all of them offer the same full access to the capabilities implemented in each agent. This ensures that you always have access to exactly the same capabilities no matter which programming interface you decide to use. 22.1.1. Main Features The following are key benefits of installing and using OpenLMI on your system: OpenLMI provides a standard interface for configuration, management, and monitoring of your local and remote systems. It allows you to configure, manage, and monitor production servers running on both physical and virtual machines. It is distributed with a collection of CIM providers that allow you to configure, manage, and monitor storage devices and complex networks. It allows you to call system management functions from C, C++, Python, and Java programs, and includes LMIShell, which provides a command line interface. It is free software based on open industry standards. 22.1.2. Management Capabilities Key capabilities of OpenLMI include the management of storage devices, networks, system services, user accounts, hardware and software configuration, power management, and interaction with Active Directory. For a complete list of CIM providers that are distributed with Red Hat Enterprise Linux 7, see Table 22.1, "Available CIM Providers" . Table 22.1. Available CIM Providers Package Name Description openlmi-account A CIM provider for managing user accounts. openlmi-logicalfile A CIM provider for reading files and directories. openlmi-networking A CIM provider for network management. openlmi-powermanagement A CIM provider for power management. openlmi-service A CIM provider for managing system services. openlmi-storage A CIM provider for storage management. openlmi-fan A CIM provider for controlling computer fans. openlmi-hardware A CIM provider for retrieving hardware information. openlmi-realmd A CIM provider for configuring realmd. openlmi-software [a] A CIM provider for software management. [a] In Red Hat Enterprise Linux 7, the OpenLMI Software provider is included as a Technology Preview . This provider is fully functional, but has a known performance scaling issue where listing large numbers of software packages may consume excessive amount of memory and time. To work around this issue, adjust package searches to return as few packages as possible. 22.2. Installing OpenLMI OpenLMI is distributed as a collection of RPM packages that include the CIMOM, individual CIM providers, and client applications. This allows you distinguish between a managed and client system and install only those components you need. 22.2.1. Installing OpenLMI on a Managed System A managed system is the system you intend to monitor and manage by using the OpenLMI client tools. To install OpenLMI on a managed system, complete the following steps: Install the tog-pegasus package by typing the following at a shell prompt as root : This command installs the OpenPegasus CIMOM and all its dependencies to the system and creates a user account for the pegasus user. Install required CIM providers by running the following command as root : This command installs the CIM providers for storage, network, service, account, and power management. For a complete list of CIM providers distributed with Red Hat Enterprise Linux 7, see Table 22.1, "Available CIM Providers" . Edit the /etc/Pegasus/access.conf configuration file to customize the list of users that are allowed to connect to the OpenPegasus CIMOM. By default, only the pegasus user is allowed to access the CIMOM both remotely and locally. To activate this user account, run the following command as root to set the user's password: Start the OpenPegasus CIMOM by activating the tog-pegasus.service unit. To activate the tog-pegasus.service unit in the current session, type the following at a shell prompt as root : To configure the tog-pegasus.service unit to start automatically at boot time, type as root : If you intend to interact with the managed system from a remote machine, enable TCP communication on port 5989 ( wbem-https ). To open this port in the current session, run the following command as root : To open port 5989 for TCP communication permanently, type as root : You can now connect to the managed system and interact with it by using the OpenLMI client tools as described in Section 22.4, "Using LMIShell" . If you intend to perform OpenLMI operations directly on the managed system, also complete the steps described in Section 22.2.2, "Installing OpenLMI on a Client System" . 22.2.2. Installing OpenLMI on a Client System A client system is the system from which you intend to interact with the managed system. In a typical scenario, the client system and the managed system are installed on two separate machines, but you can also install the client tools on the managed system and interact with it directly. To install OpenLMI on a client system, complete the following steps: Install the openlmi-tools package by typing the following at a shell prompt as root : This command installs LMIShell, an interactive client and interpreter for accessing CIM objects provided by OpenPegasus, and all its dependencies to the system. Configure SSL certificates for OpenPegasus as described in Section 22.3, "Configuring SSL Certificates for OpenPegasus" . You can now use the LMIShell client to interact with the managed system as described in Section 22.4, "Using LMIShell" . 22.3. Configuring SSL Certificates for OpenPegasus OpenLMI uses the Web-Based Enterprise Management (WBEM) protocol that functions over an HTTP transport layer. Standard HTTP Basic authentication is performed in this protocol, which means that the user name and password are transmitted alongside the requests. Configuring the OpenPegasus CIMOM to use HTTPS for communication is necessary to ensure secure authentication. A Secure Sockets Layer (SSL) or Transport Layer Security (TLS) certificate is required on the managed system to establish an encrypted channel. There are two ways of managing SSL/TLS certificates on a system: Self-signed certificates require less infrastructure to use, but are more difficult to deploy to clients and manage securely. Authority-signed certificates are easier to deploy to clients once they are set up, but may require a greater initial investment. When using an authority-signed certificate, it is necessary to configure a trusted certificate authority on the client systems. The authority can then be used for signing all of the managed systems' CIMOM certificates. Certificates can also be part of a certificate chain, so the certificate used for signing the managed systems' certificates may in turn be signed by another, higher authority (such as Verisign, CAcert, RSA and many others). The default certificate and trust store locations on the file system are listed in Table 22.2, "Certificate and Trust Store Locations" . Table 22.2. Certificate and Trust Store Locations Configuration Option Location Description sslCertificateFilePath /etc/Pegasus/server.pem Public certificate of the CIMOM. sslKeyFilePath /etc/Pegasus/file.pem Private key known only to the CIMOM. sslTrustStore /etc/Pegasus/client.pem The file or directory providing the list of trusted certificate authorities. Important If you modify any of the files mentioned in Table 22.2, "Certificate and Trust Store Locations" , restart the tog-pegasus service to make sure it recognizes the new certificates. To restart the service, type the following at a shell prompt as root : For more information on how to manage system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd . 22.3.1. Managing Self-signed Certificates A self-signed certificate uses its own private key to sign itself and it is not connected to any chain of trust. On a managed system, if certificates have not been provided by the administrator prior to the first time that the tog-pegasus service is started, a set of self-signed certificates will be automatically generated using the system's primary host name as the certificate subject. Important The automatically generated self-signed certificates are valid by default for 10 years, but they have no automatic-renewal capability. Any modification to these certificates will require manually creating new certificates following guidelines provided by the OpenSSL or Mozilla NSS documentation on the subject. To configure client systems to trust the self-signed certificate, complete the following steps: Copy the /etc/Pegasus/server.pem certificate from the managed system to the /etc/pki/ca-trust/source/anchors/ directory on the client system. To do so, type the following at a shell prompt as root : Replace hostname with the host name of the managed system. Note that this command only works if the sshd service is running on the managed system and is configured to allow the root user to log in to the system over the SSH protocol. For more information on how to install and configure the sshd service and use the scp command to transfer files over the SSH protocol, see Chapter 12, OpenSSH . Verify the integrity of the certificate on the client system by comparing its check sum with the check sum of the original file. To calculate the check sum of the /etc/Pegasus/server.pem file on the managed system, run the following command as root on that system: To calculate the check sum of the /etc/pki/ca-trust/source/anchors/pegasus- hostname .pem file on the client system, run the following command on this system: Replace hostname with the host name of the managed system. Update the trust store on the client system by running the following command as root : 22.3.2. Managing Authority-signed Certificates with Identity Management (Recommended) The Identity Management feature of Red Hat Enterprise Linux provides a domain controller which simplifies the management of SSL certificates within systems joined to the domain. Among others, the Identity Management server provides an embedded Certificate Authority. See the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide or the FreeIPA documentation for information on how to join the client and managed systems to the domain. It is necessary to register the managed system to Identity Management; for client systems the registration is optional. The following steps are required on the managed system: Install the ipa-client package and register the system to Identity Management as described in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . Copy the Identity Management signing certificate to the trusted store by typing the following command as root : Update the trust store by running the following command as root : Register Pegasus as a service in the Identity Management domain by running the following command as a privileged domain user: Replace hostname with the host name of the managed system. This command can be run from any system in the Identity Management domain that has the ipa-admintools package installed. It creates a service entry in Identity Management that can be used to generate signed SSL certificates. Back up the PEM files located in the /etc/Pegasus/ directory (recommended). Retrieve the signed certificate by running the following command as root : Replace hostname with the host name of the managed system. The certificate and key files are now kept in proper locations. The certmonger daemon installed on the managed system by the ipa-client-install script ensures that the certificate is kept up-to-date and renewed as necessary. For more information, see the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . To register the client system and update the trust store, follow the steps below. Install the ipa-client package and register the system to Identity Management as described in the Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide . Copy the Identity Management signing certificate to the trusted store by typing the following command as root : Update the trust store by running the following command as root : If the client system is not meant to be registered in Identity Management, complete the following steps to update the trust store. Copy the /etc/ipa/ca.crt file securely from any other system joined to the same Identity Management domain to the trusted store /etc/pki/ca-trust/source/anchors/ directory as root . Update the trust store by running the following command as root : 22.3.3. Managing Authority-signed Certificates Manually Managing authority-signed certificates with other mechanisms than Identity Management requires more manual configuration. It is necessary to ensure that all of the clients trust the certificate of the authority that will be signing the managed system certificates: If a certificate authority is trusted by default, it is not necessary to perform any particular steps to accomplish this. If the certificate authority is not trusted by default, the certificate has to be imported on the client and managed systems. Copy the certificate to the trusted store by typing the following command as root : Update the trust store by running the following command as root : On the managed system, complete the following steps: Create a new SSL configuration file /etc/Pegasus/ssl.cnf to store information about the certificate. The contents of this file must be similar to the following example: Replace hostname with the fully qualified domain name of the managed system. Generate a private key on the managed system by using the following command as root : Generate a certificate signing request (CSR) by running this command as root : Send the /etc/Pegasus/server.csr file to the certificate authority for signing. The detailed procedure of submitting the file depends on the particular certificate authority. When the signed certificate is received from the certificate authority, save it as /etc/Pegasus/server.pem . Copy the certificate of the trusted authority to the Pegasus trust store to make sure that Pegasus is capable of trusting its own certificate by running as root : After accomplishing all the described steps, the clients that trust the signing authority are able to successfully communicate with the managed server's CIMOM. Important Unlike the Identity Management solution, if the certificate expires and needs to be renewed, all of the described manual steps have to be carried out again. It is recommended to renew the certificates before they expire. 22.4. Using LMIShell LMIShell is an interactive client and non-interactive interpreter that can be used to access CIM objects provided by the OpenPegasus CIMOM. It is based on the Python interpreter, but also implements additional functions and classes for interacting with CIM objects. 22.4.1. Starting, Using, and Exiting LMIShell Similarly to the Python interpreter, you can use LMIShell either as an interactive client, or as a non-interactive interpreter for LMIShell scripts. Starting LMIShell in Interactive Mode To start the LMIShell interpreter in interactive mode, run the lmishell command with no additional arguments: By default, when LMIShell attempts to establish a connection with a CIMOM, it validates the server-side certificate against the Certification Authorities trust store. To disable this validation, run the lmishell command with the --noverify or -n command line option: Using Tab Completion When running in interactive mode, the LMIShell interpreter allows you press the Tab key to complete basic programming structures and CIM objects, including namespaces, classes, methods, and object properties. Browsing History By default, LMIShell stores all commands you type at the interactive prompt in the ~/.lmishell_history file. This allows you to browse the command history and re-use already entered lines in interactive mode without the need to type them at the prompt again. To move backward in the command history, press the Up Arrow key or the Ctrl + p key combination. To move forward in the command history, press the Down Arrow key or the Ctrl + n key combination. LMIShell also supports an incremental reverse search. To look for a particular line in the command history, press Ctrl + r and start typing any part of the command. For example: To clear the command history, use the clear_history() function as follows: You can configure the number of lines that are stored in the command history by changing the value of the history_length option in the ~/.lmishellrc configuration file. In addition, you can change the location of the history file by changing the value of the history_file option in this configuration file. For example, to set the location of the history file to ~/.lmishell_history and configure LMIShell to store the maximum of 1000 lines in it, add the following lines to the ~/.lmishellrc file: Handling Exceptions By default, the LMIShell interpreter handles all exceptions and uses return values. To disable this behavior in order to handle all exceptions in the code, use the use_exceptions() function as follows: To re-enable the automatic exception handling, use: You can permanently disable the exception handling by changing the value of the use_exceptions option in the ~/.lmishellrc configuration file to True : Configuring a Temporary Cache With the default configuration, LMIShell connection objects use a temporary cache for storing CIM class names and CIM classes in order to reduce network communication. To clear this temporary cache, use the clear_cache() method as follows: Replace object_name with the name of a connection object. To disable the temporary cache for a particular connection object, use the use_cache() method as follows: To enable it again, use: You can permanently disable the temporary cache for connection objects by changing the value of the use_cache option in the ~/.lmishellrc configuration file to False : Exiting LMIShell To terminate the LMIShell interpreter and return to the shell prompt, press the Ctrl + d key combination or issue the quit() function as follows: Running an LMIShell Script To run an LMIShell script, run the lmishell command as follows: Replace file_name with the name of the script. To inspect an LMIShell script after its execution, also specify the --interact or -i command line option: The preferred file extension of LMIShell scripts is .lmi . 22.4.2. Connecting to a CIMOM LMIShell allows you to connect to a CIMOM that is running either locally on the same system, or on a remote machine accessible over the network. Connecting to a Remote CIMOM To access CIM objects provided by a remote CIMOM, create a connection object by using the connect() function as follows: Replace host_name with the host name of the managed system, user_name with the name of a user that is allowed to connect to the OpenPegasus CIMOM running on that system, and password with the user's password. If the password is omitted, LMIShell prompts the user to enter it. The function returns an LMIConnection object. Example 22.1. Connecting to a Remote CIMOM To connect to the OpenPegasus CIMOM running on server.example.com as user pegasus , type the following at the interactive prompt: Connecting to a Local CIMOM LMIShell allows you to connect to a local CIMOM by using a Unix socket. For this type of connection, you must run the LMIShell interpreter as the root user and the /var/run/tog-pegasus/cimxml.socket socket must exist. To access CIM objects provided by a local CIMOM, create a connection object by using the connect() function as follows: Replace host_name with localhost , 127.0.0.1 , or ::1 . The function returns an LMIConnection object or None . Example 22.2. Connecting to a Local CIMOM To connect to the OpenPegasus CIMOM running on localhost as the root user, type the following at the interactive prompt: Verifying a Connection to a CIMOM The connect() function returns either an LMIConnection object, or None if the connection could not be established. In addition, when the connect() function fails to establish a connection, it prints an error message to standard error output. To verify that a connection to a CIMOM has been established successfully, use the isinstance() function as follows: Replace object_name with the name of the connection object. This function returns True if object_name is an LMIConnection object, or False otherwise. Example 22.3. Verifying a Connection to a CIMOM To verify that the c variable created in Example 22.1, "Connecting to a Remote CIMOM" contains an LMIConnection object, type the following at the interactive prompt: Alternatively, you can verify that c is not None : 22.4.3. Working with Namespaces LMIShell namespaces provide a natural means of organizing available classes and serve as a hierarchic access point to other namespaces and classes. The root namespace is the first entry point of a connection object. Listing Available Namespaces To list all available namespaces, use the print_namespaces() method as follows: Replace object_name with the name of the object to inspect. This method prints available namespaces to standard output. To get a list of available namespaces, access the object attribute namespaces : This returns a list of strings. Example 22.4. Listing Available Namespaces To inspect the root namespace object of the c connection object created in Example 22.1, "Connecting to a Remote CIMOM" and list all available namespaces, type the following at the interactive prompt: To assign a list of these namespaces to a variable named root_namespaces , type: Accessing Namespace Objects To access a particular namespace object, use the following syntax: Replace object_name with the name of the object to inspect and namespace_name with the name of the namespace to access. This returns an LMINamespace object. Example 22.5. Accessing Namespace Objects To access the cimv2 namespace of the c connection object created in Example 22.1, "Connecting to a Remote CIMOM" and assign it to a variable named ns , type the following at the interactive prompt: 22.4.4. Working with Classes LMIShell classes represent classes provided by a CIMOM. You can access and list their properties, methods, instances, instance names, and ValueMap properties, print their documentation strings, and create new instances and instance names. Listing Available Classes To list all available classes in a particular namespace, use the print_classes() method as follows: Replace namespace_object with the namespace object to inspect. This method prints available classes to standard output. To get a list of available classes, use the classes() method: This method returns a list of strings. Example 22.6. Listing Available Classes To inspect the ns namespace object created in Example 22.5, "Accessing Namespace Objects" and list all available classes, type the following at the interactive prompt: To assign a list of these classes to a variable named cimv2_classes , type: Accessing Class Objects To access a particular class object that is provided by the CIMOM, use the following syntax: Replace namespace_object with the name of the namespace object to inspect and class_name with the name of the class to access. Example 22.7. Accessing Class Objects To access the LMI_IPNetworkConnection class of the ns namespace object created in Example 22.5, "Accessing Namespace Objects" and assign it to a variable named cls , type the following at the interactive prompt: Examining Class Objects All class objects store information about their name and the namespace they belong to, as well as detailed class documentation. To get the name of a particular class object, use the following syntax: Replace class_object with the name of the class object to inspect. This returns a string representation of the object name. To get information about the namespace a class object belongs to, use: This returns a string representation of the namespace. To display detailed class documentation, use the doc() method as follows: Example 22.8. Examining Class Objects To inspect the cls class object created in Example 22.7, "Accessing Class Objects" and display its name and corresponding namespace, type the following at the interactive prompt: To access class documentation, type: Listing Available Methods To list all available methods of a particular class object, use the print_methods() method as follows: Replace class_object with the name of the class object to inspect. This method prints available methods to standard output. To get a list of available methods, use the methods() method: This method returns a list of strings. Example 22.9. Listing Available Methods To inspect the cls class object created in Example 22.7, "Accessing Class Objects" and list all available methods, type the following at the interactive prompt: To assign a list of these methods to a variable named service_methods , type: Listing Available Properties To list all available properties of a particular class object, use the print_properties() method as follows: Replace class_object with the name of the class object to inspect. This method prints available properties to standard output. To get a list of available properties, use the properties() method: This method returns a list of strings. Example 22.10. Listing Available Properties To inspect the cls class object created in Example 22.7, "Accessing Class Objects" and list all available properties, type the following at the interactive prompt: To assign a list of these classes to a variable named service_properties , type: Listing and Viewing ValueMap Properties CIM classes may contain ValueMap properties in their Managed Object Format ( MOF ) definition. ValueMap properties contain constant values, which may be useful when calling methods or checking returned values. To list all available ValueMap properties of a particular class object, use the print_valuemap_properties() method as follows: Replace class_object with the name of the class object to inspect. This method prints available ValueMap properties to standard output: To get a list of available ValueMap properties, use the valuemap_properties() method: This method returns a list of strings. Example 22.11. Listing ValueMap Properties To inspect the cls class object created in Example 22.7, "Accessing Class Objects" and list all available ValueMap properties, type the following at the interactive prompt: To assign a list of these ValueMap properties to a variable named service_valuemap_properties , type: To access a particular ValueMap property, use the following syntax: Replace valuemap_property with the name of the ValueMap property to access. To list all available constant values, use the print_values() method as follows: This method prints available named constant values to standard output. You can also get a list of available constant values by using the values() method: This method returns a list of strings. Example 22.12. Accessing ValueMap Properties Example 22.11, "Listing ValueMap Properties" mentions a ValueMap property named RequestedState . To inspect this property and list available constant values, type the following at the interactive prompt: To assign a list of these constant values to a variable named requested_state_values , type: To access a particular constant value, use the following syntax: Replace constant_value_name with the name of the constant value. Alternatively, you can use the value() method as follows: To determine the name of a particular constant value, use the value_name() method: This method returns a string. Example 22.13. Accessing Constant Values Example 22.12, "Accessing ValueMap Properties" shows that the RequestedState property provides a constant value named Reset . To access this named constant value, type the following at the interactive prompt: To determine the name of this constant value, type: Fetching a CIMClass Object Many class methods do not require access to a CIMClass object, which is why LMIShell only fetches this object from the CIMOM when a called method actually needs it. To fetch the CIMClass object manually, use the fetch() method as follows: Replace class_object with the name of the class object. Note that methods that require access to a CIMClass object fetch it automatically. 22.4.5. Working with Instances LMIShell instances represent instances provided by a CIMOM. You can get and set their properties, list and call their methods, print their documentation strings, get a list of associated or association objects, push modified objects to the CIMOM, and delete individual instances from the CIMOM. Accessing Instances To get a list of all available instances of a particular class object, use the instances() method as follows: Replace class_object with the name of the class object to inspect. This method returns a list of LMIInstance objects. To access the first instance of a class object, use the first_instance() method: This method returns an LMIInstance object. In addition to listing all instances or returning the first one, both instances() and first_instance() support an optional argument to allow you to filter the results: Replace criteria with a dictionary consisting of key-value pairs, where keys represent instance properties and values represent required values of these properties. Example 22.14. Accessing Instances To find the first instance of the cls class object created in Example 22.7, "Accessing Class Objects" that has the ElementName property equal to eth0 and assign it to a variable named device , type the following at the interactive prompt: Examining Instances All instance objects store information about their class name and the namespace they belong to, as well as detailed documentation about their properties and values. In addition, instance objects allow you to retrieve a unique identification object. To get the class name of a particular instance object, use the following syntax: Replace instance_object with the name of the instance object to inspect. This returns a string representation of the class name. To get information about the namespace an instance object belongs to, use: This returns a string representation of the namespace. To retrieve a unique identification object for an instance object, use: This returns an LMIInstanceName object. Finally, to display detailed documentation, use the doc() method as follows: Example 22.15. Examining Instances To inspect the device instance object created in Example 22.14, "Accessing Instances" and display its class name and the corresponding namespace, type the following at the interactive prompt: To access instance object documentation, type: Creating New Instances Certain CIM providers allow you to create new instances of specific classes objects. To create a new instance of a class object, use the create_instance() method as follows: Replace class_object with the name of the class object and properties with a dictionary that consists of key-value pairs, where keys represent instance properties and values represent property values. This method returns an LMIInstance object. Example 22.16. Creating New Instances The LMI_Group class represents system groups and the LMI_Account class represents user accounts on the managed system. To use the ns namespace object created in Example 22.5, "Accessing Namespace Objects" , create instances of these two classes for the system group named pegasus and the user named lmishell-user , and assign them to variables named group and user , type the following at the interactive prompt: To get an instance of the LMI_Identity class for the lmishell-user user, type: The LMI_MemberOfGroup class represents system group membership. To use the LMI_MemberOfGroup class to add the lmishell-user to the pegasus group, create a new instance of this class as follows: Deleting Individual Instances To delete a particular instance from the CIMOM, use the delete() method as follows: Replace instance_object with the name of the instance object to delete. This method returns a boolean. Note that after deleting an instance, its properties and methods become inaccessible. Example 22.17. Deleting Individual Instances The LMI_Account class represents user accounts on the managed system. To use the ns namespace object created in Example 22.5, "Accessing Namespace Objects" , create an instance of the LMI_Account class for the user named lmishell-user , and assign it to a variable named user , type the following at the interactive prompt: To delete this instance and remove the lmishell-user from the system, type: Listing and Accessing Available Properties To list all available properties of a particular instance object, use the print_properties() method as follows: Replace instance_object with the name of the instance object to inspect. This method prints available properties to standard output. To get a list of available properties, use the properties() method: This method returns a list of strings. Example 22.18. Listing Available Properties To inspect the device instance object created in Example 22.14, "Accessing Instances" and list all available properties, type the following at the interactive prompt: To assign a list of these properties to a variable named device_properties , type: To get the current value of a particular property, use the following syntax: Replace property_name with the name of the property to access. To modify the value of a particular property, assign a value to it as follows: Replace value with the new value of the property. Note that in order to propagate the change to the CIMOM, you must also execute the push() method: This method returns a three-item tuple consisting of a return value, return value parameters, and an error string. Example 22.19. Accessing Individual Properties To inspect the device instance object created in Example 22.14, "Accessing Instances" and display the value of the property named SystemName , type the following at the interactive prompt: Listing and Using Available Methods To list all available methods of a particular instance object, use the print_methods() method as follows: Replace instance_object with the name of the instance object to inspect. This method prints available methods to standard output. To get a list of available methods, use the method() method: This method returns a list of strings. Example 22.20. Listing Available Methods To inspect the device instance object created in Example 22.14, "Accessing Instances" and list all available methods, type the following at the interactive prompt: To assign a list of these methods to a variable named network_device_methods , type: To call a particular method, use the following syntax: Replace instance_object with the name of the instance object to use, method_name with the name of the method to call, parameter with the name of the parameter to set, and value with the value of this parameter. Methods return a three-item tuple consisting of a return value, return value parameters, and an error string. Important LMIInstance objects do not automatically refresh their contents (properties, methods, qualifiers, and so on). To do so, use the refresh() method as described below. Example 22.21. Using Methods The PG_ComputerSystem class represents the system. To create an instance of this class by using the ns namespace object created in Example 22.5, "Accessing Namespace Objects" and assign it to a variable named sys , type the following at the interactive prompt: The LMI_AccountManagementService class implements methods that allow you to manage users and groups in the system. To create an instance of this class and assign it to a variable named acc , type: To create a new user named lmishell-user in the system, use the CreateAccount() method as follows: LMIShell support synchronous method calls: when you use a synchronous method, LMIShell waits for the corresponding Job object to change its state to "finished" and then returns the return parameters of this job. LMIShell is able to perform a synchronous method call if the given method returns an object of one of the following classes: LMI_StorageJob LMI_SoftwareInstallationJob LMI_NetworkJob LMIShell first tries to use indications as the waiting method. If it fails, it uses a polling method instead. To perform a synchronous method call, use the following syntax: Replace instance_object with the name of the instance object to use, method_name with the name of the method to call, parameter with the name of the parameter to set, and value with the value of this parameter. All synchronous methods have the Sync prefix in their name and return a three-item tuple consisting of the job's return value, job's return value parameters, and job's error string. You can also force LMIShell to use only polling method. To do so, specify the PreferPolling parameter as follows: Listing and Viewing ValueMap Parameters CIM methods may contain ValueMap parameters in their Managed Object Format ( MOF ) definition. ValueMap parameters contain constant values. To list all available ValueMap parameters of a particular method, use the print_valuemap_parameters() method as follows: Replace instance_object with the name of the instance object and method_name with the name of the method to inspect. This method prints available ValueMap parameters to standard output. To get a list of available ValueMap parameters, use the valuemap_parameters() method: This method returns a list of strings. Example 22.22. Listing ValueMap Parameters To inspect the acc instance object created in Example 22.21, "Using Methods" and list all available ValueMap parameters of the CreateAccount() method, type the following at the interactive prompt: To assign a list of these ValueMap parameters to a variable named create_account_parameters , type: To access a particular ValueMap parameter, use the following syntax: Replace valuemap_parameter with the name of the ValueMap parameter to access. To list all available constant values, use the print_values() method as follows: This method prints available named constant values to standard output. You can also get a list of available constant values by using the values() method: This method returns a list of strings. Example 22.23. Accessing ValueMap Parameters Example 22.22, "Listing ValueMap Parameters" mentions a ValueMap parameter named CreateAccount . To inspect this parameter and list available constant values, type the following at the interactive prompt: To assign a list of these constant values to a variable named create_account_values , type: To access a particular constant value, use the following syntax: Replace constant_value_name with the name of the constant value. Alternatively, you can use the value() method as follows: To determine the name of a particular constant value, use the value_name() method: This method returns a string. Example 22.24. Accessing Constant Values Example 22.23, "Accessing ValueMap Parameters" shows that the CreateAccount ValueMap parameter provides a constant value named Failed . To access this named constant value, type the following at the interactive prompt: To determine the name of this constant value, type: Refreshing Instance Objects Local objects used by LMIShell, which represent CIM objects at CIMOM side, can get outdated, if such objects change while working with LMIShell's ones. To update the properties and methods of a particular instance object, use the refresh() method as follows: Replace instance_object with the name of the object to refresh. This method returns a three-item tuple consisting of a return value, return value parameter, and an error string. Example 22.25. Refreshing Instance Objects To update the properties and methods of the device instance object created in Example 22.14, "Accessing Instances" , type the following at the interactive prompt: Displaying MOF Representation To display the Managed Object Format ( MOF ) representation of an instance object, use the tomof() method as follows: Replace instance_object with the name of the instance object to inspect. This method prints the MOF representation of the object to standard output. Example 22.26. Displaying MOF Representation To display the MOF representation of the device instance object created in Example 22.14, "Accessing Instances" , type the following at the interactive prompt: 22.4.6. Working with Instance Names LMIShell instance names are objects that hold a set of primary keys and their values. This type of an object exactly identifies an instance. Accessing Instance Names CIMInstance objects are identified by CIMInstanceName objects. To get a list of all available instance name objects, use the instance_names() method as follows: Replace class_object with the name of the class object to inspect. This method returns a list of LMIInstanceName objects. To access the first instance name object of a class object, use the first_instance_name() method: This method returns an LMIInstanceName object. In addition to listing all instance name objects or returning the first one, both instance_names() and first_instance_name() support an optional argument to allow you to filter the results: Replace criteria with a dictionary consisting of key-value pairs, where keys represent key properties and values represent required values of these key properties. Example 22.27. Accessing Instance Names To find the first instance name of the cls class object created in Example 22.7, "Accessing Class Objects" that has the Name key property equal to eth0 and assign it to a variable named device_name , type the following at the interactive prompt: Examining Instance Names All instance name objects store information about their class name and the namespace they belong to. To get the class name of a particular instance name object, use the following syntax: Replace instance_name_object with the name of the instance name object to inspect. This returns a string representation of the class name. To get information about the namespace an instance name object belongs to, use: This returns a string representation of the namespace. Example 22.28. Examining Instance Names To inspect the device_name instance name object created in Example 22.27, "Accessing Instance Names" and display its class name and the corresponding namespace, type the following at the interactive prompt: Creating New Instance Names LMIShell allows you to create a new wrapped CIMInstanceName object if you know all primary keys of a remote object. This instance name object can then be used to retrieve the whole instance object. To create a new instance name of a class object, use the new_instance_name() method as follows: Replace class_object with the name of the class object and key_properties with a dictionary that consists of key-value pairs, where keys represent key properties and values represent key property values. This method returns an LMIInstanceName object. Example 22.29. Creating New Instance Names The LMI_Account class represents user accounts on the managed system. To use the ns namespace object created in Example 22.5, "Accessing Namespace Objects" and create a new instance name of the LMI_Account class representing the lmishell-user user on the managed system, type the following at the interactive prompt: Listing and Accessing Key Properties To list all available key properties of a particular instance name object, use the print_key_properties() method as follows: Replace instance_name_object with the name of the instance name object to inspect. This method prints available key properties to standard output. To get a list of available key properties, use the key_properties() method: This method returns a list of strings. Example 22.30. Listing Available Key Properties To inspect the device_name instance name object created in Example 22.27, "Accessing Instance Names" and list all available key properties, type the following at the interactive prompt: To assign a list of these key properties to a variable named device_name_properties , type: To get the current value of a particular key property, use the following syntax: Replace key_property_name with the name of the key property to access. Example 22.31. Accessing Individual Key Properties To inspect the device_name instance name object created in Example 22.27, "Accessing Instance Names" and display the value of the key property named SystemName , type the following at the interactive prompt: Converting Instance Names to Instances Each instance name can be converted to an instance. To do so, use the to_instance() method as follows: Replace instance_name_object with the name of the instance name object to convert. This method returns an LMIInstance object. Example 22.32. Converting Instance Names to Instances To convert the device_name instance name object created in Example 22.27, "Accessing Instance Names" to an instance object and assign it to a variable named device , type the following at the interactive prompt: 22.4.7. Working with Associated Objects The Common Information Model defines an association relationship between managed objects. Accessing Associated Instances To get a list of all objects associated with a particular instance object, use the associators() method as follows: To access the first object associated with a particular instance object, use the first_associator() method: Replace instance_object with the name of the instance object to inspect. You can filter the results by specifying the following parameters: AssocClass - Each returned object must be associated with the source object through an instance of this class or one of its subclasses. The default value is None . ResultClass - Each returned object must be either an instance of this class or one of its subclasses, or it must be this class or one of its subclasses. The default value is None . Role - Each returned object must be associated with the source object through an association in which the source object plays the specified role. The name of the property in the association class that refers to the source object must match the value of this parameter. The default value is None . ResultRole - Each returned object must be associated with the source object through an association in which the returned object plays the specified role. The name of the property in the association class that refers to the returned object must match the value of this parameter. The default value is None . The remaining parameters refer to: IncludeQualifiers - A boolean indicating whether all qualifiers of each object (including qualifiers on the object and on any returned properties) should be included as QUALIFIER elements in the response. The default value is False . IncludeClassOrigin - A boolean indicating whether the CLASSORIGIN attribute should be present on all appropriate elements in each returned object. The default value is False . PropertyList - The members of this list define one or more property names. Returned objects will not include elements for any properties missing from this list. If PropertyList is an empty list, no properties are included in returned objects. If it is None , no additional filtering is defined. The default value is None . Example 22.33. Accessing Associated Instances The LMI_StorageExtent class represents block devices available in the system. To use the ns namespace object created in Example 22.5, "Accessing Namespace Objects" , create an instance of the LMI_StorageExtent class for the block device named /dev/vda , and assign it to a variable named vda , type the following at the interactive prompt: To get a list of all disk partitions on this block device and assign it to a variable named vda_partitions , use the associators() method as follows: Accessing Associated Instance Names To get a list of all associated instance names of a particular instance object, use the associator_names() method as follows: To access the first associated instance name of a particular instance object, use the first_associator_name() method: Replace instance_object with the name of the instance object to inspect. You can filter the results by specifying the following parameters: AssocClass - Each returned name identifies an object that must be associated with the source object through an instance of this class or one of its subclasses. The default value is None . ResultClass - Each returned name identifies an object that must be either an instance of this class or one of its subclasses, or it must be this class or one of its subclasses. The default value is None . Role - Each returned name identifies an object that must be associated with the source object through an association in which the source object plays the specified role. The name of the property in the association class that refers to the source object must match the value of this parameter. The default value is None . ResultRole - Each returned name identifies an object that must be associated with the source object through an association in which the returned named object plays the specified role. The name of the property in the association class that refers to the returned object must match the value of this parameter. The default value is None . Example 22.34. Accessing Associated Instance Names To use the vda instance object created in Example 22.33, "Accessing Associated Instances" , get a list of its associated instance names, and assign it to a variable named vda_partitions , type: 22.4.8. Working with Association Objects The Common Information Model defines an association relationship between managed objects. Association objects define the relationship between two other objects. Accessing Association Instances To get a list of association objects that refer to a particular target object, use the references() method as follows: To access the first association object that refers to a particular target object, use the first_reference() method: Replace instance_object with the name of the instance object to inspect. You can filter the results by specifying the following parameters: ResultClass - Each returned object must be either an instance of this class or one of its subclasses, or it must be this class or one of its subclasses. The default value is None . Role - Each returned object must refer to the target object through a property with a name that matches the value of this parameter. The default value is None . The remaining parameters refer to: IncludeQualifiers - A boolean indicating whether each object (including qualifiers on the object and on any returned properties) should be included as a QUALIFIER element in the response. The default value is False . IncludeClassOrigin - A boolean indicating whether the CLASSORIGIN attribute should be present on all appropriate elements in each returned object. The default value is False . PropertyList - The members of this list define one or more property names. Returned objects will not include elements for any properties missing from this list. If PropertyList is an empty list, no properties are included in returned objects. If it is None , no additional filtering is defined. The default value is None . Example 22.35. Accessing Association Instances The LMI_LANEndpoint class represents a communication endpoint associated with a certain network interface device. To use the ns namespace object created in Example 22.5, "Accessing Namespace Objects" , create an instance of the LMI_LANEndpoint class for the network interface device named eth0, and assign it to a variable named lan_endpoint , type the following at the interactive prompt: To access the first association object that refers to an LMI_BindsToLANEndpoint object and assign it to a variable named bind , type: You can now use the Dependent property to access the dependent LMI_IPProtocolEndpoint class that represents the IP address of the corresponding network interface device: Accessing Association Instance Names To get a list of association instance names of a particular instance object, use the reference_names() method as follows: To access the first association instance name of a particular instance object, use the first_reference_name() method: Replace instance_object with the name of the instance object to inspect. You can filter the results by specifying the following parameters: ResultClass - Each returned object name identifies either an instance of this class or one of its subclasses, or this class or one of its subclasses. The default value is None . Role - Each returned object identifies an object that refers to the target instance through a property with a name that matches the value of this parameter. The default value is None . Example 22.36. Accessing Association Instance Names To use the lan_endpoint instance object created in Example 22.35, "Accessing Association Instances" , access the first association instance name that refers to an LMI_BindsToLANEndpoint object, and assign it to a variable named bind , type: You can now use the Dependent property to access the dependent LMI_IPProtocolEndpoint class that represents the IP address of the corresponding network interface device: 22.4.9. Working with Indications Indication is a reaction to a specific event that occurs in response to a particular change in data. LMIShell can subscribe to an indication in order to receive such event responses. Subscribing to Indications To subscribe to an indication, use the subscribe_indication() method as follows: Alternatively, you can use a shorter version of the method call as follows: Replace connection_object with a connection object and host_name with the host name of the system you want to deliver the indications to. By default, all subscriptions created by the LMIShell interpreter are automatically deleted when the interpreter terminates. To change this behavior, pass the Permanent=True keyword parameter to the subscribe_indication() method call. This will prevent LMIShell from deleting the subscription. Example 22.37. Subscribing to Indications To use the c connection object created in Example 22.1, "Connecting to a Remote CIMOM" and subscribe to an indication named cpu , type the following at the interactive prompt: Listing Subscribed Indications To list all the subscribed indications, use the print_subscribed_indications() method as follows: Replace connection_object with the name of the connection object to inspect. This method prints subscribed indications to standard output. To get a list of subscribed indications, use the subscribed_indications() method: This method returns a list of strings. Example 22.38. Listing Subscribed Indications To inspect the c connection object created in Example 22.1, "Connecting to a Remote CIMOM" and list all subscribed indications, type the following at the interactive prompt: To assign a list of these indications to a variable named indications , type: Unsubscribing from Indications By default, all subscriptions created by the LMIShell interpreter are automatically deleted when the interpreter terminates. To delete an individual subscription sooner, use the unsubscribe_indication() method as follows: Replace connection_object with the name of the connection object and indication_name with the name of the indication to delete. To delete all subscriptions, use the unsubscribe_all_indications() method: Example 22.39. Unsubscribing from Indications To use the c connection object created in Example 22.1, "Connecting to a Remote CIMOM" and unsubscribe from the indication created in Example 22.37, "Subscribing to Indications" , type the following at the interactive prompt: Implementing an Indication Handler The subscribe_indication() method allows you to specify the host name of the system you want to deliver the indications to. The following example shows how to implement an indication handler: The first argument of the handler is an LmiIndication object, which contains a list of methods and objects exported by the indication. Other parameters are user specific: those arguments need to be specified when adding a handler to the listener. In the example above, the add_handler() method call uses a special string with eight "X" characters. These characters are replaced with a random string that is generated by listeners in order to avoid a possible handler name collision. To use the random string, start the indication listener first and then subscribe to an indication so that the Destination property of the handler object contains the following value: schema :// host_name / random_string . Example 22.40. Implementing an Indication Handler The following script illustrates how to write a handler that monitors a managed system located at 192.168.122.1 and calls the indication_callback() function whenever a new user account is created: 22.4.10. Example Usage This section provides a number of examples for various CIM providers distributed with the OpenLMI packages. All examples in this section use the following two variable definitions: Replace host_name with the host name of the managed system, user_name with the name of user that is allowed to connect to OpenPegasus CIMOM running on that system, and password with the user's password. Using the OpenLMI Service Provider The openlmi-service package installs a CIM provider for managing system services. The examples below illustrate how to use this CIM provider to list available system services and how to start, stop, enable, and disable them. Example 22.41. Listing Available Services To list all available services on the managed machine along with information regarding whether the service has been started ( TRUE ) or stopped ( FALSE ) and the status string, use the following code snippet: To list only the services that are enabled by default, use this code snippet: Note that the value of the EnabledDefault property is equal to 2 for enabled services and 3 for disabled services. To display information about the cups service, use the following: Example 22.42. Starting and Stopping Services To start and stop the cups service and to see its current status, use the following code snippet: Example 22.43. Enabling and Disabling Services To enable and disable the cups service and to display its EnabledDefault property, use the following code snippet: Using the OpenLMI Networking Provider The openlmi-networking package installs a CIM provider for networking. The examples below illustrate how to use this CIM provider to list IP addresses associated with a certain port number, create a new connection, configure a static IP address, and activate a connection. Example 22.44. Listing IP Addresses Associated with a Given Port Number To list all IP addresses associated with the eth0 network interface, use the following code snippet: This code snippet uses the LMI_IPProtocolEndpoint class associated with a given LMI_IPNetworkConnection class. To display the default gateway, use this code snippet: The default gateway is represented by an LMI_NetworkRemoteServiceAccessPoint instance with the AccessContext property equal to DefaultGateway . To get a list of DNS servers, the object model needs to be traversed as follows: Get the LMI_IPProtocolEndpoint instances associated with a given LMI_IPNetworkConnection using LMI_NetworkSAPSAPDependency . Use the same association for the LMI_DNSProtocolEndpoint instances. The LMI_NetworkRemoteServiceAccessPoint instances with the AccessContext property equal to the DNS Server associated through LMI_NetworkRemoteAccessAvailableToElement have the DNS server address in the AccessInfo property. There can be more possible paths to get to the RemoteServiceAccessPath and entries can be duplicated. The following code snippet uses the set() function to remove duplicate entries from the list of DNS servers: Example 22.45. Creating a New Connection and Configuring a Static IP Address To create a new setting with a static IPv4 and stateless IPv6 configuration for network interface eth0, use the following code snippet: This code snippet creates a new setting by calling the LMI_CreateIPSetting() method on the instance of LMI_IPNetworkConnectionCapabilities , which is associated with LMI_IPNetworkConnection through LMI_IPNetworkConnectionElementCapabilities . It also uses the push() method to modify the setting. Example 22.46. Activating a Connection To apply a setting to the network interface, call the ApplySettingToIPNetworkConnection() method of the LMI_IPConfigurationService class. This method is asynchronous and returns a job. The following code snippets illustrates how to call this method synchronously: The Mode parameter affects how the setting is applied. The most commonly used values of this parameter are as follows: 1 - apply the setting now and make it auto-activated. 2 - make the setting auto-activated and do not apply it now. 4 - disconnect and disable auto-activation. 5 - do not change the setting state, only disable auto-activation. 32768 - apply the setting. 32769 - disconnect. Using the OpenLMI Storage Provider The openlmi-storage package installs a CIM provider for storage management. The examples below illustrate how to use this CIM provider to create a volume group, create a logical volume, build a file system, mount a file system, and list block devices known to the system. In addition to the c and ns variables, these examples use the following variable definitions: Example 22.47. Creating a Volume Group To create a new volume group located in /dev/myGroup/ that has three members and the default extent size of 4 MB, use the following code snippet: Example 22.48. Creating a Logical Volume To create two logical volumes with the size of 100 MB, use this code snippet: Example 22.49. Creating a File System To create an ext3 file system on logical volume lv from Example 22.48, "Creating a Logical Volume" , use the following code snippet: Example 22.50. Mounting a File System To mount the file system created in Example 22.49, "Creating a File System" , use the following code snippet: Example 22.51. Listing Block Devices To list all block devices known to the system, use the following code snippet: Using the OpenLMI Hardware Provider The openlmi-hardware package installs a CIM provider for monitoring hardware. The examples below illustrate how to use this CIM provider to retrieve information about CPU, memory modules, PCI devices, and the manufacturer and model of the machine. Example 22.52. Viewing CPU Information To display basic CPU information such as the CPU name, the number of processor cores, and the number of hardware threads, use the following code snippet: Example 22.53. Viewing Memory Information To display basic information about memory modules such as their individual sizes, use the following code snippet: Example 22.54. Viewing Chassis Information To display basic information about the machine such as its manufacturer or its model, use the following code snippet: Example 22.55. Listing PCI Devices To list all PCI devices known to the system, use the following code snippet: 22.5. Using OpenLMI Scripts The LMIShell interpreter is built on top of Python modules that can be used to develop custom management tools. The OpenLMI Scripts project provides a number of Python libraries for interfacing with OpenLMI providers. In addition, it is distributed with lmi , an extensible utility that can be used to interact with these libraries from the command line. To install OpenLMI Scripts on your system, type the following at a shell prompt: This command installs the Python modules and the lmi utility in the ~/.local/ directory. To extend the functionality of the lmi utility, install additional OpenLMI modules by using the following command: For a complete list of available modules, see the Python website . For more information about OpenLMI Scripts, see the official OpenLMI Scripts documentation . 22.6. Additional Resources For more information about OpenLMI and system management in general, see the resources listed below. Installed Documentation lmishell (1) - The manual page for the lmishell client and interpreter provides detailed information about its execution and usage. Online Documentation Red Hat Enterprise Linux 7 Networking Guide - The Networking Guide for Red Hat Enterprise Linux 7 documents relevant information regarding the configuration and administration of network interfaces and network services on the system. Red Hat Enterprise Linux 7 Storage Administration Guide - The Storage Administration Guide for Red Hat Enterprise Linux 7 provides instructions on how to manage storage devices and file systems on the system. Red Hat Enterprise Linux 7 Power Management Guide - The Power Management Guide for Red Hat Enterprise Linux 7 explains how to manage power consumption of the system effectively. It discusses different techniques that lower power consumption for both servers and laptops, and explains how each technique affects the overall performance of the system. Red Hat Enterprise Linux 7 Linux Domain Identity, Authentication, and Policy Guide - The Linux Domain Identity, Authentication, and Policy Guide for Red Hat Enterprise Linux 7 covers all aspects of installing, configuring, and managing IPA domains, including both servers and clients. The guide is intended for IT and systems administrators. FreeIPA Documentation - The FreeIPA Documentation serves as the primary user documentation for using the FreeIPA Identity Management project. OpenSSL Home Page - The OpenSSL home page provides an overview of the OpenSSL project. Mozilla NSS Documentation - The Mozilla NSS Documentation serves as the primary user documentation for using the Mozilla NSS project. See Also Chapter 4, Managing Users and Groups documents how to manage system users and groups in the graphical user interface and on the command line. Chapter 9, Yum describes how to use the Yum package manager to search, install, update, and uninstall packages on the command line. Chapter 10, Managing Services with systemd provides an introduction to systemd and documents how to use the systemctl command to manage system services, configure systemd targets, and execute power management commands. Chapter 12, OpenSSH describes how to configure an SSH server and how to use the ssh , scp , and sftp client utilities to access it. | [
"install tog-pegasus",
"install openlmi-{storage,networking,service,account,powermanagement}",
"passwd pegasus",
"systemctl start tog-pegasus.service",
"systemctl enable tog-pegasus.service",
"firewall-cmd --add-port 5989/tcp",
"firewall-cmd --permanent --add-port 5989/tcp",
"install openlmi-tools",
"systemctl restart tog-pegasus.service",
"scp root@ hostname :/etc/Pegasus/server.pem /etc/pki/ca-trust/source/anchors/pegasus- hostname .pem",
"sha1sum /etc/Pegasus/server.pem",
"sha1sum /etc/pki/ca-trust/source/anchors/pegasus- hostname .pem",
"update-ca-trust extract",
"cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt",
"update-ca-trust extract",
"ipa service-add CIMOM/ hostname",
"ipa-getcert request -f /etc/Pegasus/server.pem -k /etc/Pegasus/file.pem -N CN= hostname -K CIMOM/ hostname",
"cp /etc/ipa/ca.crt /etc/pki/ca-trust/source/anchors/ipa.crt",
"update-ca-trust extract",
"update-ca-trust extract",
"cp /path/to/ca.crt /etc/pki/ca-trust/source/anchors/ca.crt",
"update-ca-trust extract",
"[ req ] distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] C = US ST = Massachusetts L = Westford O = Fedora OU = Fedora OpenLMI CN = hostname",
"openssl genrsa -out /etc/Pegasus/file.pem 1024",
"openssl req -config /etc/Pegasus/ssl.cnf -new -key /etc/Pegasus/file.pem -out /etc/Pegasus/server.csr",
"cp /path/to/ca.crt /etc/Pegasus/client.pem",
"lmishell",
"lmishell --noverify",
"> (reverse-i-search)` connect ': c = connect(\"server.example.com\", \"pegasus\")",
"clear_history ()",
"history_file = \"~/.lmishell_history\" history_length = 1000",
"use_exceptions ()",
"use_exception ( False )",
"use_exceptions = True",
"object_name . clear_cache ()",
"object_name . use_cache ( False )",
"object_name . use_cache ( True )",
"use_cache = False",
"> quit() ~]USD",
"lmishell file_name",
"lmishell --interact file_name",
"connect ( host_name , user_name , password )",
"> c = connect(\"server.example.com\", \"pegasus\") password: >",
"connect ( host_name )",
"> c = connect(\"localhost\") >",
"isinstance ( object_name , LMIConnection )",
"> isinstance(c, LMIConnection) True >",
"> c is None False >",
"object_name . print_namespaces ()",
"object_name . namespaces",
"> c.root.print_namespaces() cimv2 interop PG_InterOp PG_Internal >",
"> root_namespaces = c.root.namespaces >",
"object_name . namespace_name",
"> ns = c.root.cimv2 >",
"namespace_object . print_classes()",
"namespace_object . classes ()",
"> ns.print_classes() CIM_CollectionInSystem CIM_ConcreteIdentity CIM_ControlledBy CIM_DeviceSAPImplementation CIM_MemberOfStatusCollection >",
"> cimv2_classes = ns.classes() >",
"namespace_object . class_name",
"> cls = ns.LMI_IPNetworkConnection >",
"class_object . classname",
"class_object . namespace",
"class_object . doc ()",
"> cls.classname 'LMI_IPNetworkConnection' > cls.namespace 'root/cimv2' >",
"> cls.doc() Class: LMI_IPNetworkConnection SuperClass: CIM_IPNetworkConnection [qualifier] string UMLPackagePath: 'CIM::Network::IP' [qualifier] string Version: '0.1.0'",
"class_object . print_methods ()",
"class_object . methods()",
"> cls.print_methods() RequestStateChange >",
"> service_methods = cls.methods() >",
"class_object . print_properties ()",
"class_object . properties ()",
"> cls.print_properties() RequestedState HealthState StatusDescriptions TransitioningToState Generation >",
"> service_properties = cls.properties() >",
"class_object . print_valuemap_properties ()",
"class_object . valuemap_properties ()",
"> cls.print_valuemap_properties() RequestedState HealthState TransitioningToState DetailedStatus OperationalStatus >",
"> service_valuemap_properties = cls.valuemap_properties() >",
"class_object . valuemap_property Values",
"class_object . valuemap_property Values . print_values ()",
"class_object . valuemap_property Values . values ()",
"> cls.RequestedStateValues.print_values() Reset NoChange NotApplicable Quiesce Unknown >",
"> requested_state_values = cls.RequestedStateValues.values() >",
"class_object . valuemap_property Values . constant_value_name",
"class_object . valuemap_property Values . value (\" constant_value_name \")",
"class_object . valuemap_property Values . value_name (\" constant_value \")",
"> cls.RequestedStateValues.Reset 11 > cls.RequestedStateValues.value(\"Reset\") 11 >",
"> cls.RequestedStateValues.value_name(11) u'Reset' >",
"class_object . fetch ()",
"class_object . instances ()",
"class_object . first_instance ()",
"class_object . instances ( criteria )",
"class_object . first_instance ( criteria )",
"> device = cls.first_instance({\"ElementName\": \"eth0\"}) >",
"instance_object . classname",
"instance_object . namespace",
"instance_object . path",
"instance_object . doc ()",
"> device.classname u'LMI_IPNetworkConnection' > device.namespace 'root/cimv2' >",
"> device.doc() Instance of LMI_IPNetworkConnection [property] uint16 RequestedState = '12' [property] uint16 HealthState [property array] string [] StatusDescriptions",
"class_object . create_instance ( properties )",
"> group = ns.LMI_Group.first_instance({\"Name\" : \"pegasus\"}) > user = ns.LMI_Account.first_instance({\"Name\" : \"lmishell-user\"}) >",
"> identity = user.first_associator(ResultClass=\"LMI_Identity\") >",
"> ns.LMI_MemberOfGroup.create_instance({ ... \"Member\" : identity.path, ... \"Collection\" : group.path}) LMIInstance(classname=\"LMI_MemberOfGroup\", ...) >",
"instance_object . delete ()",
"> user = ns.LMI_Account.first_instance({\"Name\" : \"lmishell-user\"}) >",
"> user.delete() True >",
"instance_object . print_properties ()",
"instance_object . properties ()",
"> device.print_properties() RequestedState HealthState StatusDescriptions TransitioningToState Generation >",
"> device_properties = device.properties() >",
"instance_object . property_name",
"instance_object . property_name = value",
"instance_object . push ()",
"> device.SystemName u'server.example.com' >",
"instance_object . print_methods ()",
"instance_object . methods ()",
"> device.print_methods() RequestStateChange >",
"> network_device_methods = device.methods() >",
"instance_object . method_name ( parameter = value , ...)",
"> sys = ns.PG_ComputerSystem.first_instance() >",
"> acc = ns.LMI_AccountManagementService.first_instance() >",
"> acc.CreateAccount(Name=\"lmishell-user\", System=sys) LMIReturnValue(rval=0, rparams=NocaseDict({u'Account': LMIInstanceName(classname=\"LMI_Account\"...), u'Identities': [LMIInstanceName(classname=\"LMI_Identity\"...), LMIInstanceName(classname=\"LMI_Identity\"...)]}), errorstr='')",
"instance_object . Sync method_name ( parameter = value , ...)",
"instance_object . Sync method_name ( PreferPolling = True parameter = value , ...)",
"instance_object . method_name . print_valuemap_parameters ()",
"instance_object . method_name . valuemap_parameters ()",
"> acc.CreateAccount.print_valuemap_parameters() CreateAccount >",
"> create_account_parameters = acc.CreateAccount.valuemap_parameters() >",
"instance_object . method_name . valuemap_parameter Values",
"instance_object . method_name . valuemap_parameter Values . print_values ()",
"instance_object . method_name . valuemap_parameter Values . values ()",
"> acc.CreateAccount.CreateAccountValues.print_values() Operationunsupported Failed Unabletosetpasswordusercreated Unabletocreatehomedirectoryusercreatedandpasswordset Operationcompletedsuccessfully >",
"> create_account_values = acc.CreateAccount.CreateAccountValues.values() >",
"instance_object . method_name . valuemap_parameter Values . constant_value_name",
"instance_object . method_name . valuemap_parameter Values . value (\" constant_value_name \")",
"instance_object . method_name . valuemap_parameter Values . value_name (\" constant_value \")",
"> acc.CreateAccount.CreateAccountValues.Failed 2 > acc.CreateAccount.CreateAccountValues.value(\"Failed\") 2 >",
"> acc.CreateAccount.CreateAccountValues.value_name(2) u'Failed' >",
"instance_object . refresh ()",
"> device.refresh() LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='') >",
"instance_object . tomof ()",
"> device.tomof() instance of LMI_IPNetworkConnection { RequestedState = 12; HealthState = NULL; StatusDescriptions = NULL; TransitioningToState = 12;",
"class_object . instance_names ()",
"class_object . first_instance_name ()",
"class_object . instance_names ( criteria )",
"class_object . first_instance_name ( criteria )",
"> device_name = cls.first_instance_name({\"Name\": \"eth0\"}) >",
"instance_name_object . classname",
"instance_name_object . namespace",
"> device_name.classname u'LMI_IPNetworkConnection' > device_name.namespace 'root/cimv2' >",
"class_object . new_instance_name ( key_properties )",
"> instance_name = ns.LMI_Account.new_instance_name({ ... \"CreationClassName\" : \"LMI_Account\", ... \"Name\" : \"lmishell-user\", ... \"SystemCreationClassName\" : \"PG_ComputerSystem\", ... \"SystemName\" : \"server\"}) >",
"instance_name_object . print_key_properties ()",
"instance_name_object . key_properties ()",
"> device_name.print_key_properties() CreationClassName SystemName Name SystemCreationClassName >",
"> device_name_properties = device_name.key_properties() >",
"instance_name_object . key_property_name",
"> device_name.SystemName u'server.example.com' >",
"instance_name_object . to_instance ()",
"> device = device_name.to_instance() >",
"instance_object . associators ( AssocClass= class_name , ResultClass= class_name , ResultRole= role , IncludeQualifiers= include_qualifiers , IncludeClassOrigin= include_class_origin , PropertyList= property_list )",
"instance_object . first_associator ( AssocClass= class_name , ResultClass= class_name , ResultRole= role , IncludeQualifiers= include_qualifiers , IncludeClassOrigin= include_class_origin , PropertyList= property_list )",
"> vda = ns.LMI_StorageExtent.first_instance({ ... \"DeviceID\" : \"/dev/vda\"}) >",
"> vda_partitions = vda.associators(ResultClass=\"LMI_DiskPartition\") >",
"instance_object . associator_names ( AssocClass= class_name , ResultClass= class_name , Role= role , ResultRole= role )",
"instance_object . first_associator_name ( AssocClass= class_object , ResultClass= class_object , Role= role , ResultRole= role )",
"> vda_partitions = vda.associator_names(ResultClass=\"LMI_DiskPartition\") >",
"instance_object . references ( ResultClass= class_name , Role= role , IncludeQualifiers= include_qualifiers , IncludeClassOrigin= include_class_origin , PropertyList= property_list )",
"instance_object . first_reference ( ... ResultClass= class_name , ... Role= role , ... IncludeQualifiers= include_qualifiers , ... IncludeClassOrigin= include_class_origin , ... PropertyList= property_list ) >",
"> lan_endpoint = ns.LMI_LANEndpoint.first_instance({ ... \"Name\" : \"eth0\"}) >",
"> bind = lan_endpoint.first_reference( ... ResultClass=\"LMI_BindsToLANEndpoint\") >",
"> ip = bind.Dependent.to_instance() > print ip.IPv4Address 192.168.122.1 >",
"instance_object . reference_names ( ResultClass= class_name , Role= role )",
"instance_object . first_reference_name ( ResultClass= class_name , Role= role )",
"> bind = lan_endpoint.first_reference_name( ... ResultClass=\"LMI_BindsToLANEndpoint\")",
"> ip = bind.Dependent.to_instance() > print ip.IPv4Address 192.168.122.1 >",
"connection_object . subscribe_indication ( QueryLanguage= \"WQL\" , Query= 'SELECT * FROM CIM_InstModification' , Name= \"cpu\" , CreationNamespace= \"root/interop\" , SubscriptionCreationClassName= \"CIM_IndicationSubscription\" , FilterCreationClassName= \"CIM_IndicationFilter\" , FilterSystemCreationClassName= \"CIM_ComputerSystem\" , FilterSourceNamespace= \"root/cimv2\" , HandlerCreationClassName= \"CIM_IndicationHandlerCIMXML\" , HandlerSystemCreationClassName= \"CIM_ComputerSystem\" , Destination= \"http://host_name:5988\" )",
"connection_object . subscribe_indication ( Query= 'SELECT * FROM CIM_InstModification' , Name= \"cpu\" , Destination= \"http://host_name:5988\" )",
"> c.subscribe_indication( ... QueryLanguage=\"WQL\", ... Query='SELECT * FROM CIM_InstModification', ... Name=\"cpu\", ... CreationNamespace=\"root/interop\", ... SubscriptionCreationClassName=\"CIM_IndicationSubscription\", ... FilterCreationClassName=\"CIM_IndicationFilter\", ... FilterSystemCreationClassName=\"CIM_ComputerSystem\", ... FilterSourceNamespace=\"root/cimv2\", ... HandlerCreationClassName=\"CIM_IndicationHandlerCIMXML\", ... HandlerSystemCreationClassName=\"CIM_ComputerSystem\", ... Destination=\"http://server.example.com:5988\") LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='') >",
"connection_object . print_subscribed_indications ()",
"connection_object . subscribed_indications ()",
"> c.print_subscribed_indications() >",
"> indications = c.subscribed_indications() >",
"connection_object . unsubscribe_indication ( indication_name )",
"connection_object . unsubscribe_all_indications ()",
"> c.unsubscribe_indication('cpu') LMIReturnValue(rval=True, rparams=NocaseDict({}), errorstr='') >",
"> def handler(ind, arg1, arg2, kwargs): ... exported_objects = ind.exported_objects() ... do_something_with(exported_objects) > listener = LmiIndicationListener(\"0.0.0.0\", listening_port) > listener.add_handler(\"indication-name-XXXXXXXX\", handler, arg1, arg2, kwargs) > listener.start() >",
"#!/usr/bin/lmishell import sys from time import sleep from lmi.shell.LMIUtil import LMIPassByRef from lmi.shell.LMIIndicationListener import LMIIndicationListener These are passed by reference to indication_callback var1 = LMIPassByRef(\"some_value\") var2 = LMIPassByRef(\"some_other_value\") def indication_callback(ind, var1, var2): # Do something with ind, var1 and var2 print ind.exported_objects() print var1.value print var2.value c = connect(\"hostname\", \"username\", \"password\") listener = LMIIndicationListener(\"0.0.0.0\", 65500) unique_name = listener.add_handler( \"demo-XXXXXXXX\", # Creates a unique name for me indication_callback, # Callback to be called var1, # Variable passed by ref var2 # Variable passed by ref ) listener.start() print c.subscribe_indication( Name=unique_name, Query=\"SELECT * FROM LMI_AccountInstanceCreationIndication WHERE SOURCEINSTANCE ISA LMI_Account\", Destination=\"192.168.122.1:65500\" ) try: while True: sleep(60) except KeyboardInterrupt: sys.exit(0)",
"c = connect(\"host_name\", \"user_name\", \"password\") ns = c.root.cimv2",
"for service in ns.LMI_Service.instances(): print \"%s:\\t%s\" % (service.Name, service.Status)",
"cls = ns.LMI_Service for service in cls.instances(): if service.EnabledDefault == cls.EnabledDefaultValues.Enabled: print service.Name",
"cups = ns.LMI_Service.first_instance({\"Name\": \"cups.service\"}) cups.doc()",
"cups = ns.LMI_Service.first_instance({\"Name\": \"cups.service\"}) cups.StartService() print cups.Status cups.StopService() print cups.Status",
"cups = ns.LMI_Service.first_instance({\"Name\": \"cups.service\"}) cups.TurnServiceOff() print cups.EnabledDefault cups.TurnServiceOn() print cups.EnabledDefault",
"device = ns.LMI_IPNetworkConnection.first_instance({'ElementName': 'eth0'}) for endpoint in device.associators(AssocClass=\"LMI_NetworkSAPSAPDependency\", ResultClass=\"LMI_IPProtocolEndpoint\"): if endpoint.ProtocolIFType == ns.LMI_IPProtocolEndpoint.ProtocolIFTypeValues.IPv4: print \"IPv4: %s/%s\" % (endpoint.IPv4Address, endpoint.SubnetMask) elif endpoint.ProtocolIFType == ns.LMI_IPProtocolEndpoint.ProtocolIFTypeValues.IPv6: print \"IPv6: %s/%d\" % (endpoint.IPv6Address, endpoint.IPv6SubnetPrefixLength)",
"for rsap in device.associators(AssocClass=\"LMI_NetworkRemoteAccessAvailableToElement\", ResultClass=\"LMI_NetworkRemoteServiceAccessPoint\"): if rsap.AccessContext == ns.LMI_NetworkRemoteServiceAccessPoint.AccessContextValues.DefaultGateway: print \"Default Gateway: %s\" % rsap.AccessInfo",
"dnsservers = set() for ipendpoint in device.associators(AssocClass=\"LMI_NetworkSAPSAPDependency\", ResultClass=\"LMI_IPProtocolEndpoint\"): for dnsedpoint in ipendpoint.associators(AssocClass=\"LMI_NetworkSAPSAPDependency\", ResultClass=\"LMI_DNSProtocolEndpoint\"): for rsap in dnsedpoint.associators(AssocClass=\"LMI_NetworkRemoteAccessAvailableToElement\", ResultClass=\"LMI_NetworkRemoteServiceAccessPoint\"): if rsap.AccessContext == ns.LMI_NetworkRemoteServiceAccessPoint.AccessContextValues.DNSServer: dnsservers.add(rsap.AccessInfo) print \"DNS:\", \", \".join(dnsservers)",
"capability = ns.LMI_IPNetworkConnectionCapabilities.first_instance({ 'ElementName': 'eth0' }) result = capability.LMI_CreateIPSetting(Caption='eth0 Static', IPv4Type=capability.LMI_CreateIPSetting.IPv4TypeValues.Static, IPv6Type=capability.LMI_CreateIPSetting.IPv6TypeValues.Stateless) setting = result.rparams[\"SettingData\"].to_instance() for settingData in setting.associators(AssocClass=\"LMI_OrderedIPAssignmentComponent\"): if setting.ProtocolIFType == ns.LMI_IPAssignmentSettingData.ProtocolIFTypeValues.IPv4: # Set static IPv4 address settingData.IPAddresses = [\"192.168.1.100\"] settingData.SubnetMasks = [\"255.255.0.0\"] settingData.GatewayAddresses = [\"192.168.1.1\"] settingData.push()",
"setting = ns.LMI_IPAssignmentSettingData.first_instance({ \"Caption\": \"eth0 Static\" }) port = ns.LMI_IPNetworkConnection.first_instance({ 'ElementName': 'ens8' }) service = ns.LMI_IPConfigurationService.first_instance() service.SyncApplySettingToIPNetworkConnection(SettingData=setting, IPNetworkConnection=port, Mode=32768)",
"MEGABYTE = 1024*1024 storage_service = ns.LMI_StorageConfigurationService.first_instance() filesystem_service = ns.LMI_FileSystemConfigurationService.first_instance()",
"Find the devices to add to the volume group (filtering the CIM_StorageExtent.instances() call would be faster, but this is easier to read): sda1 = ns.CIM_StorageExtent.first_instance({\"Name\": \"/dev/sda1\"}) sdb1 = ns.CIM_StorageExtent.first_instance({\"Name\": \"/dev/sdb1\"}) sdc1 = ns.CIM_StorageExtent.first_instance({\"Name\": \"/dev/sdc1\"}) Create a new volume group: (ret, outparams, err) = storage_service.SyncCreateOrModifyVG( ElementName=\"myGroup\", InExtents=[sda1, sdb1, sdc1]) vg = outparams['Pool'].to_instance() print \"VG\", vg.PoolID, \"with extent size\", vg.ExtentSize, \"and\", vg.RemainingExtents, \"free extents created.\"",
"Find the volume group: vg = ns.LMI_VGStoragePool.first_instance({\"Name\": \"/dev/mapper/myGroup\"}) Create the first logical volume: (ret, outparams, err) = storage_service.SyncCreateOrModifyLV( ElementName=\"Vol1\", InPool=vg, Size=100 * MEGABYTE) lv = outparams['TheElement'].to_instance() print \"LV\", lv.DeviceID, \"with\", lv.BlockSize * lv.NumberOfBlocks, \"bytes created.\" Create the second logical volume: (ret, outparams, err) = storage_service.SyncCreateOrModifyLV( ElementName=\"Vol2\", InPool=vg, Size=100 * MEGABYTE) lv = outparams['TheElement'].to_instance() print \"LV\", lv.DeviceID, \"with\", lv.BlockSize * lv.NumberOfBlocks, \"bytes created.\"",
"(ret, outparams, err) = filesystem_service.SyncLMI_CreateFileSystem( FileSystemType=filesystem_service.LMI_CreateFileSystem.FileSystemTypeValues.EXT3, InExtents=[lv])",
"Find the file system on the logical volume: fs = lv.first_associator(ResultClass=\"LMI_LocalFileSystem\") mount_service = ns.LMI_MountConfigurationService.first_instance() (rc, out, err) = mount_service.SyncCreateMount( FileSystemType='ext3', Mode=32768, # just mount FileSystem=fs, MountPoint='/mnt/test', FileSystemSpec=lv.Name)",
"devices = ns.CIM_StorageExtent.instances() for device in devices: if lmi_isinstance(device, ns.CIM_Memory): # Memory and CPU caches are StorageExtents too, do not print them continue print device.classname, print device.DeviceID, print device.Name, print device.BlockSize*device.NumberOfBlocks",
"cpu = ns.LMI_Processor.first_instance() cpu_cap = cpu.associators(ResultClass=\"LMI_ProcessorCapabilities\")[0] print cpu.Name print cpu_cap.NumberOfProcessorCores print cpu_cap.NumberOfHardwareThreads",
"mem = ns.LMI_Memory.first_instance() for i in mem.associators(ResultClass=\"LMI_PhysicalMemory\"): print i.Name",
"chassis = ns.LMI_Chassis.first_instance() print chassis.Manufacturer print chassis.Model",
"for pci in ns.LMI_PCIDevice.instances(): print pci.Name",
"easy_install --user openlmi-scripts",
"easy_install --user package_name"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/chap-openlmi |
Chapter 54. CertificateAuthority schema reference | Chapter 54. CertificateAuthority schema reference Used in: KafkaSpec Configuration of how TLS certificates are used within the cluster. This applies to certificates used for both internal communication within the cluster and to certificates used for client access via Kafka.spec.kafka.listeners.tls . Property Property type Description generateCertificateAuthority boolean If true then Certificate Authority certificates will be generated automatically. Otherwise the user will need to provide a Secret with the CA certificate. Default is true. generateSecretOwnerReference boolean If true , the Cluster and Client CA Secrets are configured with the ownerReference set to the Kafka resource. If the Kafka resource is deleted when true , the CA Secrets are also deleted. If false , the ownerReference is disabled. If the Kafka resource is deleted when false , the CA Secrets are retained and available for reuse. Default is true . validityDays integer The number of days generated certificates should be valid for. The default is 365. renewalDays integer The number of days in the certificate renewal period. This is the number of days before the a certificate expires during which renewal actions may be performed. When generateCertificateAuthority is true, this will cause the generation of a new certificate. When generateCertificateAuthority is true, this will cause extra logging at WARN level about the pending certificate expiry. Default is 30. certificateExpirationPolicy string (one of [replace-key, renew-certificate]) How should CA certificate expiration be handled when generateCertificateAuthority=true . The default is for a new CA certificate to be generated reusing the existing private key. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-certificateauthority-reference |
3.9. Additional Configuration for the Active Directory Domain Entry | 3.9. Additional Configuration for the Active Directory Domain Entry Custom settings for each individual domain can be defined in the /etc/realmd.conf file. Each domain can have its own configuration section; the name of the section must match the domain name. For example: Important Changing the configuration as described in this section only works if the realm join command has not been run yet. If a system is already joined, changing these settings does not have any effect. In such situations, you must leave the domain, as described in Section 3.5, "Removing a System from an Identity Domain" , and then join again, as described in the section called "Joining a Domain" . Note that joining requires the domain administrator's credentials. To change the configuration for a domain, edit the corresponding section in /etc/realmd.conf . The following example disables ID mapping for the ad.example.com domain, sets the host principal, and adds the system to the specified subtree: Note that the same configuration can also be set when originally joining the system to the domain using the realm join command, described in the section called "Joining a Domain" : Table 3.2, "Realm Configuration Options" lists the most notable options that can be set in the domain default section in /etc/realmd.conf . For complete information about the available configuration options, see the realmd.conf (5) man page. Table 3.2. Realm Configuration Options Option Description computer-ou Sets the directory location for adding computer accounts to the domain. This can be the full DN or an RDN, relative to the root entry. The subtree must already exist. user-principal Sets the userPrincipalName attribute value of the computer account to the provided Kerberos principal. automatic-id-mapping Sets whether to enable dynamic ID mapping or disable the mapping and use POSIX attributes configured in Active Directory. | [
"[ad.example.com] attribute = value attribute = value",
"[ad.example.com] computer-ou = ou=Linux Computers,DC=domain,DC=example,DC=com user-principal = host/[email protected] automatic-id-mapping = no",
"realm join --computer-ou= \"ou=Linux Computers,dc=domain,dc=com\" --automatic-id-mapping= no --user-principal= host/[email protected]"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/realmd-conf |
5.4. Removing a Disk from a Logical Volume | 5.4. Removing a Disk from a Logical Volume These example procedures show how you can remove a disk from an existing logical volume, either to replace the disk or to use the disk as part of a different volume. In order to remove a disk, you must first move the extents on the LVM physical volume to a different disk or set of disks. 5.4.1. Moving Extents to Existing Physical Volumes In this example, the logical volume is distributed across four physical volumes in the volume group myvg . This examples moves the extents off of /dev/sdb1 so that it can be removed from the volume group. If there are enough free extents on the other physical volumes in the volume group, you can execute the pvmove command on the device you want to remove with no other options and the extents will be distributed to the other devices. After the pvmove command has finished executing, the distribution of extents is as follows: Use the vgreduce command to remove the physical volume /dev/sdb1 from the volume group. The disk can now be physically removed or allocated to other users. 5.4.2. Moving Extents to a New Disk In this example, the logical volume is distributed across three physical volumes in the volume group myvg as follows: This example procedure moves the extents of /dev/sdb1 to a new device, /dev/sdd1 . Create a new physical volume from /dev/sdd1 . Add the new physical volume /dev/sdd1 to the existing volume group myvg . Use the pvmove command to move the data from /dev/sdb1 to /dev/sdd1 . After you have moved the data off /dev/sdb1 , you can remove it from the volume group. You can now reallocate the disk to another volume group or remove the disk from the system. | [
"pvs -o+pv_used PV VG Fmt Attr PSize PFree Used /dev/sda1 myvg lvm2 a- 17.15G 12.15G 5.00G /dev/sdb1 myvg lvm2 a- 17.15G 12.15G 5.00G /dev/sdc1 myvg lvm2 a- 17.15G 12.15G 5.00G /dev/sdd1 myvg lvm2 a- 17.15G 2.15G 15.00G",
"pvmove /dev/sdb1 /dev/sdb1: Moved: 2.0% /dev/sdb1: Moved: 79.2% /dev/sdb1: Moved: 100.0%",
"pvs -o+pv_used PV VG Fmt Attr PSize PFree Used /dev/sda1 myvg lvm2 a- 17.15G 7.15G 10.00G /dev/sdb1 myvg lvm2 a- 17.15G 17.15G 0 /dev/sdc1 myvg lvm2 a- 17.15G 12.15G 5.00G /dev/sdd1 myvg lvm2 a- 17.15G 2.15G 15.00G",
"vgreduce myvg /dev/sdb1 Removed \"/dev/sdb1\" from volume group \"myvg\" pvs PV VG Fmt Attr PSize PFree /dev/sda1 myvg lvm2 a- 17.15G 7.15G /dev/sdb1 lvm2 -- 17.15G 17.15G /dev/sdc1 myvg lvm2 a- 17.15G 12.15G /dev/sdd1 myvg lvm2 a- 17.15G 2.15G",
"pvs -o+pv_used PV VG Fmt Attr PSize PFree Used /dev/sda1 myvg lvm2 a- 17.15G 7.15G 10.00G /dev/sdb1 myvg lvm2 a- 17.15G 15.15G 2.00G /dev/sdc1 myvg lvm2 a- 17.15G 15.15G 2.00G",
"pvcreate /dev/sdd1 Physical volume \"/dev/sdd1\" successfully created",
"vgextend myvg /dev/sdd1 Volume group \"myvg\" successfully extended pvs -o+pv_used PV VG Fmt Attr PSize PFree Used /dev/sda1 myvg lvm2 a- 17.15G 7.15G 10.00G /dev/sdb1 myvg lvm2 a- 17.15G 15.15G 2.00G /dev/sdc1 myvg lvm2 a- 17.15G 15.15G 2.00G /dev/sdd1 myvg lvm2 a- 17.15G 17.15G 0",
"pvmove /dev/sdb1 /dev/sdd1 /dev/sdb1: Moved: 10.0% /dev/sdb1: Moved: 79.7% /dev/sdb1: Moved: 100.0% pvs -o+pv_used PV VG Fmt Attr PSize PFree Used /dev/sda1 myvg lvm2 a- 17.15G 7.15G 10.00G /dev/sdb1 myvg lvm2 a- 17.15G 17.15G 0 /dev/sdc1 myvg lvm2 a- 17.15G 15.15G 2.00G /dev/sdd1 myvg lvm2 a- 17.15G 15.15G 2.00G",
"vgreduce myvg /dev/sdb1 Removed \"/dev/sdb1\" from volume group \"myvg\""
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/disk_remove_ex |
Chapter 14. Security | Chapter 14. Security SCAP Security Guide The scap-security-guide package has been included in Red Hat Enterprise Linux 7.1 to provide security guidance, baselines, and associated validation mechanisms. The guidance is specified in the Security Content Automation Protocol ( SCAP ), which constitutes a catalog of practical hardening advice. SCAP Security Guide contains the necessary data to perform system security compliance scans regarding prescribed security policy requirements; both a written description and an automated test (probe) are included. By automating the testing, SCAP Security Guide provides a convenient and reliable way to verify system compliance regularly. The Red Hat Enterprise Linux 7.1 version of the SCAP Security Guide includes the Red Hat Corporate Profile for Certified Cloud Providers ( RH CCP ) , which can be used for compliance scans of Red Hat Enterprise Linux Server 7.1 cloud systems. Also, the Red Hat Enterprise Linux 7.1 scap-security-guide package contains SCAP datastream content format files for Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, so that remote compliance scanning of both of these products is possible. The Red Hat Enterprise Linux 7.1 system administrator can use the oscap command line tool from the openscap-scanner package to verify that the system conforms to the provided guidelines. See the scap-security-guide (8) manual page for further information. SELinux Policy In Red Hat Enterprise Linux 7.1, the SELinux policy has been modified; services without their own SELinux policy that previously ran in the init_t domain now run in the newly-added unconfined_service_t domain. See the Unconfined Processes chapter in the SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7.1. New Features in OpenSSH The OpenSSH set of tools has been updated to version 6.6.1p1, which adds several new features related to cryptography: Key exchange using elliptic-curve Diffie-Hellman in Daniel Bernstein's Curve25519 is now supported. This method is now the default provided both the server and the client support it. Support has been added for using the Ed25519 elliptic-curve signature scheme as a public key type. Ed25519 , which can be used for both user and host keys, offers better security than ECDSA and DSA as well as good performance. A new private-key format has been added that uses the bcrypt key-derivation function ( KDF ). By default, this format is used for Ed25519 keys but may be requested for other types of keys as well. A new transport cipher, [email protected] , has been added. It combines Daniel Bernstein's ChaCha20 stream cipher and the Poly1305 message authentication code (MAC). New Features in Libreswan The Libreswan implementation of IPsec VPN has been updated to version 3.12, which adds several new features and improvements: New ciphers have been added. IKEv2 support has been improved. Intermediary certificate chain support has been added in IKEv1 and IKEv2 . Connection handling has been improved. Interoperability has been improved with OpenBSD, Cisco, and Android systems. systemd support has been improved. Support has been added for hashed CERTREQ and traffic statistics. New Features in TNC The Trusted Network Connect ( TNC ) Architecture, provided by the strongimcv package, has been updated and is now based on strongSwan 5.2.0 . The following new features and improvements have been added to the TNC : The PT-EAP transport protocol ( RFC 7171 ) for Trusted Network Connect has been added. The Attestation Integrity Measurement Collector ( IMC )/ Integrity Measurement Verifier ( IMV ) pair now supports the IMA-NG measurement format. The Attestation IMV support has been improved by implementing a new TPMRA work item. Support has been added for a JSON-based REST API with SWID IMV. The SWID IMC can now extract all installed packages from the dpkg , rpm , or pacman package managers using the swidGenerator , which generates SWID tags according to the new ISO/IEC 19770-2:2014 standard. The libtls TLS 1.2 implementation as used by EAP-(T)TLS and other protocols has been extended by AEAD mode support, currently limited to AES-GCM . Improved ( IMV ) support for sharing access requestor ID, device ID, and product information of an access requestor via a common imv_session object. Several bugs have been fixed in existing IF-TNCCS ( PB-TNC , IF-M ( PA-TNC )) protocols, and in the OS IMC/IMV pair. New Features in GnuTLS The GnuTLS implementation of the SSL , TLS , and DTLS protocols has been updated to version 3.3.8, which offers a number of new features and improvements: Support for DTLS 1.2 has been added. Support for Application Layer Protocol Negotiation ( ALPN ) has been added. The performance of elliptic-curve cipher suites has been improved. New cipher suites, RSA-PSK and CAMELLIA-GCM , have been added. Native support for the Trusted Platform Module ( TPM ) standard has been added. Support for PKCS#11 smart cards and hardware security modules ( HSM ) has been improved in several ways. Compliance with the FIPS 140 security standards ( Federal Information Processing Standards ) has been improved in several ways. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-security |
Chapter 5. Advisories related to this release | Chapter 5. Advisories related to this release The following advisories are issued to document bug fixes and CVE fixes included in this release: RHSA-2024:4568 RHSA-2024:4569 RHSA-2024:4570 Revised on 2024-07-22 15:54:53 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.12/openjdk-17012-advisory_openjdk |
Networking | Networking OpenShift Container Platform 4.13 Configuring and managing cluster networking Red Hat OpenShift Documentation Team | [
"ssh -i <ssh-key-path> core@<master-hostname>",
"oc get -n openshift-network-operator deployment/network-operator",
"NAME READY UP-TO-DATE AVAILABLE AGE network-operator 1/1 1 1 56m",
"oc get clusteroperator/network",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.5.4 True False False 50m",
"oc describe network.config/cluster",
"Name: cluster Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: Network Metadata: Self Link: /apis/config.openshift.io/v1/networks/cluster Spec: 1 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Network Type: OpenShiftSDN Service Network: 172.30.0.0/16 Status: 2 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cluster Network MTU: 8951 Network Type: OpenShiftSDN Service Network: 172.30.0.0/16 Events: <none>",
"oc describe clusteroperators/network",
"oc logs --namespace=openshift-network-operator deployment/network-operator",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: 1 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 2 - 172.30.0.0/16 defaultNetwork: 3 type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 0s",
"oc get -n openshift-dns-operator deployment/dns-operator",
"NAME READY UP-TO-DATE AVAILABLE AGE dns-operator 1/1 1 1 23h",
"oc get clusteroperator/dns",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE dns 4.1.0-0.11 True False False 92m",
"patch dns.operator.openshift.io default --type merge --patch '{\"spec\":{\"managementState\":\"Unmanaged\"}}'",
"oc edit dns.operator/default",
"spec: nodePlacement: nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc edit dns.operator/default",
"spec: nodePlacement: tolerations: - effect: NoExecute key: \"dns-only\" operators: Equal value: abc tolerationSeconds: 3600 1",
"oc describe dns.operator/default",
"Name: default Namespace: Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: DNS Status: Cluster Domain: cluster.local 1 Cluster IP: 172.30.0.10 2",
"oc get networks.config/cluster -o jsonpath='{USD.status.serviceNetwork}'",
"[172.30.0.0/16]",
"oc edit dns.operator/default",
"apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: 2 - example.com forwardPlugin: policy: Random 3 upstreams: 4 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 5 policy: Random 6 upstreams: 7 - type: SystemResolvConf 8 - type: Network address: 1.2.3.4 9 port: 53 10",
"apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: example-server 1 zones: 2 - example.com forwardPlugin: transportConfig: transport: TLS 3 tls: caBundle: name: mycacert serverName: dnstls.example.com 4 policy: Random 5 upstreams: 6 - 1.1.1.1 - 2.2.2.2:5353 upstreamResolvers: 7 transportConfig: transport: TLS tls: caBundle: name: mycacert serverName: dnstls.example.com upstreams: - type: Network 8 address: 1.2.3.4 9 port: 53 10",
"oc get configmap/dns-default -n openshift-dns -o yaml",
"apiVersion: v1 data: Corefile: | example.com:5353 { forward . 1.1.1.1 2.2.2.2:5353 } bar.com:5353 example.com:5353 { forward . 3.3.3.3 4.4.4.4:5454 1 } .:5353 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf 1.2.3.4:53 { policy Random } cache 30 reload } kind: ConfigMap metadata: labels: dns.operator.openshift.io/owning-dns: default name: dns-default namespace: openshift-dns",
"oc describe clusteroperators/dns",
"oc logs -n openshift-dns-operator deployment/dns-operator -c dns-operator",
"oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"logLevel\":\"Debug\"}}' --type=merge",
"oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"logLevel\":\"Trace\"}}' --type=merge",
"oc get configmap/dns-default -n openshift-dns -o yaml",
"oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"operatorLogLevel\":\"Debug\"}}' --type=merge",
"oc patch dnses.operator.openshift.io/default -p '{\"spec\":{\"operatorLogLevel\":\"Trace\"}}' --type=merge",
"oc edit dns.operator.openshift.io/default",
"apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: cache: positiveTTL: 1h 1 negativeTTL: 0.5h10m 2",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.openshiftdemos.com",
"nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists",
"httpCaptureCookies: - matchType: Exact maxLength: 128 name: MYCOOKIE",
"httpCaptureHeaders: request: - maxLength: 256 name: Connection - maxLength: 128 name: User-Agent response: - maxLength: 256 name: Content-Type - maxLength: 256 name: Content-Length",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11",
"oc describe IngressController default -n openshift-ingress-operator",
"Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom",
"Issuer: C=US, O=Example Inc, CN=Example Global G2 TLS RSA SHA256 2020 CA1 Subject: SOME SIGNED CERT X509v3 CRL Distribution Points: Full Name: URI:http://crl.example.com/example.crl",
"oc create configmap router-ca-certs-default --from-file=ca-bundle.pem=client-ca.crt \\ 1 -n openshift-config",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: clientTLS: clientCertificatePolicy: Required clientCA: name: router-ca-certs-default allowedSubjectPatterns: - \"^/CN=example.com/ST=NC/C=US/O=Security/OU=OpenShiftUSD\"",
"openssl x509 -in custom-cert.pem -noout -subject subject= /CN=example.com/ST=NC/C=US/O=Security/OU=OpenShift",
"oc describe --namespace=openshift-ingress-operator ingresscontroller/default",
"oc describe clusteroperators/ingress",
"oc logs --namespace=openshift-ingress-operator deployments/ingress-operator -c <container_name>",
"oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>",
"oc --namespace openshift-ingress-operator get ingresscontrollers",
"NAME AGE default 10m",
"oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key",
"oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default --patch '{\"spec\":{\"defaultCertificate\":{\"name\":\"custom-certs-default\"}}}'",
"echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate",
"subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com notAfter=May 10 08:32:45 2022 GM",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: defaultCertificate: name: custom-certs-default",
"oc patch -n openshift-ingress-operator ingresscontrollers/default --type json -p USD'- op: remove\\n path: /spec/defaultCertificate'",
"echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate",
"subject=CN = *.apps.<domain> issuer=CN = ingress-operator@1620633373 notAfter=May 10 10:44:36 2023 GMT",
"oc create -n openshift-ingress-operator serviceaccount thanos && oc describe -n openshift-ingress-operator serviceaccount thanos",
"Name: thanos Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-kfvf2 Mountable secrets: thanos-dockercfg-kfvf2 Tokens: thanos-token-c422q Events: <none>",
"oc apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: thanos-token namespace: openshift-ingress-operator annotations: kubernetes.io/service-account.name: thanos type: kubernetes.io/service-account-token EOF",
"secret=USD(oc get secret -n openshift-ingress-operator | grep thanos-token | head -n 1 | awk '{ print USD1 }')",
"oc process TOKEN=\"USDsecret\" -f - <<EOF | oc apply -n openshift-ingress-operator -f - apiVersion: template.openshift.io/v1 kind: Template parameters: - name: TOKEN objects: - apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: - parameter: bearerToken name: \\USD{TOKEN} key: token - parameter: ca name: \\USD{TOKEN} key: ca.crt EOF",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader namespace: openshift-ingress-operator rules: - apiGroups: - \"\" resources: - pods - nodes verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch - apiGroups: - \"\" resources: - namespaces verbs: - get",
"oc apply -f thanos-metrics-reader.yaml",
"oc adm policy -n openshift-ingress-operator add-role-to-user thanos-metrics-reader -z thanos --role-namespace=openshift-ingress-operator",
"oc adm policy -n openshift-ingress-operator add-cluster-role-to-user cluster-monitoring-view -z thanos",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: ingress-scaler namespace: openshift-ingress-operator spec: scaleTargetRef: 1 apiVersion: operator.openshift.io/v1 kind: IngressController name: default envSourceContainerName: ingress-operator minReplicaCount: 1 maxReplicaCount: 20 2 cooldownPeriod: 1 pollingInterval: 1 triggers: - type: prometheus metricType: AverageValue metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 3 namespace: openshift-ingress-operator 4 metricName: 'kube-node-role' threshold: '1' query: 'sum(kube_node_role{role=\"worker\",service=\"kube-state-metrics\"})' 5 authModes: \"bearer\" authenticationRef: name: keda-trigger-auth-prometheus",
"oc apply -f ingress-autoscaler.yaml",
"oc get -n openshift-ingress-operator ingresscontroller/default -o yaml | grep replicas:",
"replicas: 3",
"oc get pods -n openshift-ingress",
"NAME READY STATUS RESTARTS AGE router-default-7b5df44ff-l9pmm 2/2 Running 0 17h router-default-7b5df44ff-s5sl5 2/2 Running 0 3d22h router-default-7b5df44ff-wwsth 2/2 Running 0 66s",
"oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'",
"2",
"oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"replicas\": 3}}' --type=merge",
"ingresscontroller.operator.openshift.io/default patched",
"oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'",
"3",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 3 1",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container",
"oc -n openshift-ingress logs deployment.apps/router-default -c logs",
"2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 \"GET / HTTP/1.1\"",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: null",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"threadCount\": 8}}}'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal 3",
"oc create -f <name>-ingress-controller.yaml 1",
"oc --all-namespaces=true get ingresscontrollers",
"oc -n openshift-ingress-operator edit ingresscontroller/default",
"spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal type: LoadBalancerService",
"oc -n openshift-ingress edit svc/router-default -o yaml",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"healthCheckInterval\": \"8s\"}}}'",
"oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF",
"oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge",
"spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"oc edit IngressController",
"spec: routeAdmission: wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed",
"oc edit IngressController",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: forwardedHeaderPolicy: Append",
"oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true",
"oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: \"true\"",
"oc -n openshift-ingress-operator edit ingresscontroller/default",
"spec: endpointPublishingStrategy: hostNetwork: protocol: PROXY type: HostNetwork",
"spec: endpointPublishingStrategy: nodePort: protocol: PROXY type: NodePortService",
"oc edit ingresses.config/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.example.com 1 appsDomain: <test.example.com> 2",
"oc expose service hello-openshift route.route.openshift.io/hello-openshift exposed",
"oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hello-openshift hello_openshift-<my_project>.test.example.com hello-openshift 8080-tcp None",
"oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"httpHeaders\":{\"headerNameCaseAdjustments\":[\"Host\"]}}}'",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: <application_name> namespace: <application_name>",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: headerNameCaseAdjustments: - Host",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: my-application namespace: my-application spec: to: kind: Service name: my-application",
"oc edit -n openshift-ingress-operator ingresscontrollers/default",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpCompression: mimeTypes: - \"text/html\" - \"text/css; charset=utf-8\" - \"application/json\"",
"oc get pods -n openshift-ingress",
"NAME READY STATUS RESTARTS AGE router-default-76bfffb66c-46qwp 1/1 Running 0 11h",
"oc rsh <router_pod_name> cat metrics-auth/statsUsername",
"oc rsh <router_pod_name> cat metrics-auth/statsPassword",
"oc describe pod <router_pod>",
"curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics",
"curl -u user:password https://<router_IP>:<stats_port>/metrics -k",
"curl -u <user>:<password> http://<router_IP>:<stats_port>/metrics",
"HELP haproxy_backend_connections_total Total number of connections. TYPE haproxy_backend_connections_total gauge haproxy_backend_connections_total{backend=\"http\",namespace=\"default\",route=\"hello-route\"} 0 haproxy_backend_connections_total{backend=\"http\",namespace=\"default\",route=\"hello-route-alt\"} 0 haproxy_backend_connections_total{backend=\"http\",namespace=\"default\",route=\"hello-route01\"} 0 HELP haproxy_exporter_server_threshold Number of servers tracked and the current threshold value. TYPE haproxy_exporter_server_threshold gauge haproxy_exporter_server_threshold{type=\"current\"} 11 haproxy_exporter_server_threshold{type=\"limit\"} 500 HELP haproxy_frontend_bytes_in_total Current total of incoming bytes. TYPE haproxy_frontend_bytes_in_total gauge haproxy_frontend_bytes_in_total{frontend=\"fe_no_sni\"} 0 haproxy_frontend_bytes_in_total{frontend=\"fe_sni\"} 0 haproxy_frontend_bytes_in_total{frontend=\"public\"} 119070 HELP haproxy_server_bytes_in_total Current total of incoming bytes. TYPE haproxy_server_bytes_in_total gauge haproxy_server_bytes_in_total{namespace=\"\",pod=\"\",route=\"\",server=\"fe_no_sni\",service=\"\"} 0 haproxy_server_bytes_in_total{namespace=\"\",pod=\"\",route=\"\",server=\"fe_sni\",service=\"\"} 0 haproxy_server_bytes_in_total{namespace=\"default\",pod=\"docker-registry-5-nk5fz\",route=\"docker-registry\",server=\"10.130.0.89:5000\",service=\"docker-registry\"} 0 haproxy_server_bytes_in_total{namespace=\"default\",pod=\"hello-rc-vkjqx\",route=\"hello-route\",server=\"10.130.0.90:8080\",service=\"hello-svc-1\"} 0",
"http://<user>:<password>@<router_IP>:<stats_port>",
"http://<user>:<password>@<router_ip>:1936/metrics;csv",
"oc -n openshift-config create configmap my-custom-error-code-pages --from-file=error-page-503.http --from-file=error-page-404.http",
"oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"httpErrorCodePages\":{\"name\":\"my-custom-error-code-pages\"}}}' --type=merge",
"oc get cm default-errorpages -n openshift-ingress",
"NAME DATA AGE default-errorpages 2 25s 1",
"oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-503.http",
"oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-404.http",
"oc new-project test-ingress",
"oc new-app django-psql-example",
"curl -vk <route_hostname>",
"curl -vk <route_hostname>",
"oc -n openshift-ingress rsh <router> cat /var/lib/haproxy/conf/haproxy.config | grep errorfile",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"maxConnections\": 7500}}}'",
"cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/enforce-version: v1.24 name: openshift-ingress-node-firewall EOF",
"cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ingress-node-firewall-operators namespace: openshift-ingress-node-firewall EOF",
"cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ingress-node-firewall-sub namespace: openshift-ingress-node-firewall spec: name: ingress-node-firewall channel: stable source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get ip -n openshift-ingress-node-firewall",
"NAME CSV APPROVAL APPROVED install-5cvnz ingress-node-firewall.4.13.0-202211122336 Automatic true",
"oc get csv -n openshift-ingress-node-firewall",
"NAME DISPLAY VERSION REPLACES PHASE ingress-node-firewall.4.13.0-202211122336 Ingress Node Firewall Operator 4.13.0-202211122336 ingress-node-firewall.4.13.0-202211102047 Succeeded",
"oc annotate ns/openshift-ingress-node-firewall workload.openshift.io/allowed=management",
"oc apply -f rule.yaml",
"spec: nodeSelector: node-role.kubernetes.io/worker: \"\"",
"apiVersion: ingressnodefirewall.openshift.io/v1alpha1 kind: IngressNodeFirewallConfig metadata: name: ingressnodefirewallconfig namespace: openshift-ingress-node-firewall spec: nodeSelector: node-role.kubernetes.io/worker: \"\"",
"apiVersion: ingressnodefirewall.openshift.io/v1alpha1 kind: IngressNodeFirewall metadata: name: ingressnodefirewall spec: interfaces: - eth0 nodeSelector: matchLabels: <ingress_firewall_label_name>: <label_value> 1 ingress: - sourceCIDRs: - 172.16.0.0/12 rules: - order: 10 protocolConfig: protocol: ICMP icmp: icmpType: 8 #ICMP Echo request action: Deny - order: 20 protocolConfig: protocol: TCP tcp: ports: \"8000-9000\" action: Deny - sourceCIDRs: - fc00:f853:ccd:e793::0/64 rules: - order: 10 protocolConfig: protocol: ICMPv6 icmpv6: icmpType: 128 #ICMPV6 Echo request action: Deny",
"apiVersion: ingressnodefirewall.openshift.io/v1alpha1 kind: IngressNodeFirewall metadata: name: ingressnodefirewall-zero-trust spec: interfaces: - eth1 1 nodeSelector: matchLabels: <ingress_firewall_label_name>: <label_value> 2 ingress: - sourceCIDRs: - 0.0.0.0/0 3 rules: - order: 10 protocolConfig: protocol: TCP tcp: ports: 22 action: Allow - order: 20 action: Deny 4",
"oc get ingressnodefirewall",
"oc get <resource> <name> -o yaml",
"oc get crds | grep ingressnodefirewall",
"NAME READY UP-TO-DATE AVAILABLE AGE ingressnodefirewallconfigs.ingressnodefirewall.openshift.io 2022-08-25T10:03:01Z ingressnodefirewallnodestates.ingressnodefirewall.openshift.io 2022-08-25T10:03:00Z ingressnodefirewalls.ingressnodefirewall.openshift.io 2022-08-25T10:03:00Z",
"oc get pods -n openshift-ingress-node-firewall",
"NAME READY STATUS RESTARTS AGE ingress-node-firewall-controller-manager 2/2 Running 0 5d21h ingress-node-firewall-daemon-pqx56 3/3 Running 0 5d21h",
"oc adm must-gather - gather_ingress_node_firewall",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 dnsManagementPolicy: Unmanaged 4",
"apply -f <name>.yaml 1",
"SCOPE=USD(oc -n openshift-ingress-operator get ingresscontroller <name> -o=jsonpath=\"{.status.endpointPublishingStrategy.loadBalancer.scope}\") -n openshift-ingress-operator patch ingresscontrollers/<name> --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"dnsManagementPolicy\":\"Unmanaged\", \"scope\":\"USD{SCOPE}\"}}}}'",
"oc get podnetworkconnectivitycheck -n openshift-network-diagnostics",
"NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m",
"oc get podnetworkconnectivitycheck <name> -n openshift-network-diagnostics -o yaml",
"apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: \"\" status: conditions: - lastTransitionTime: \"2021-01-13T20:11:34Z\" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: \"True\" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" outages: - end: \"2021-01-13T20:11:34Z\" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T20:11:34Z\" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" message: Connectivity restored after 2m59.999789186s start: \"2021-01-13T20:08:34Z\" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:14:34Z\" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:13:34Z\" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:12:34Z\" - latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:11:34Z\" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:10:34Z\" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:09:34Z\" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:08:34Z\" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:07:34Z\" - latency: 2.156994ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:06:34Z\" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:05:34Z\"",
"oc describe network.config cluster",
"Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OpenShiftSDN Service Network: 10.217.4.0/23",
"dhcp-option-force=26,<mtu>",
"oc debug node/<node_name> -- chroot /host ip route list match 0.0.0.0/0 | awk '{print USD5 }'",
"oc debug node/<node_name> -- chroot /host nmcli -g connection.interface-name c show ovs-if-phys0",
"[connection-<interface>-mtu] match-device=interface-name:<interface> ethernet.mtu=<mtu>",
"variant: openshift version: 4.13.0 metadata: name: 01-control-plane-interface labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600",
"variant: openshift version: 4.13.0 metadata: name: 01-worker-interface labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/NetworkManager/conf.d/99-<interface>-mtu.conf 1 contents: local: <interface>-mtu.conf 2 mode: 0600",
"for manifest in control-plane-interface worker-interface; do butane --files-dir . USDmanifest.bu > USDmanifest.yaml done",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": <overlay_from>, \"to\": <overlay_to> } , \"machine\": { \"to\" : <machine_to> } } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": 1400, \"to\": 9000 } , \"machine\": { \"to\" : 9100} } } } }'",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/mtu-migration.sh",
"for manifest in control-plane-interface worker-interface; do oc create -f USDmanifest.yaml done",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep path:",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"ovnKubernetesConfig\": { \"mtu\": <mtu> }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"openshiftSDNConfig\": { \"mtu\": <mtu> }}}}'",
"oc get mcp",
"oc describe network.config cluster",
"oc get nodes",
"oc debug node/<node> -- chroot /host ip address show <interface>",
"ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8051",
"oc patch network.config.openshift.io cluster --type=merge -p '{ \"spec\": { \"serviceNodePortRange\": \"30000-<port>\" } }'",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: \"30000-<port>\"",
"network.config.openshift.io/cluster patched",
"oc get configmaps -n openshift-kube-apiserver config -o jsonpath=\"{.data['config\\.yaml']}\" | grep -Eo '\"service-node-port-range\":[\"[[:digit:]]+-[[:digit:]]+\"]'",
"\"service-node-port-range\":[\"30000-33000\"]",
"oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.clusterNetwork}\"",
"[{\"cidr\":\"10.217.0.0/22\",\"hostPrefix\":23}]",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\":{ \"clusterNetwork\": [ {\"cidr\":\"<network>/<cidr>\",\"hostPrefix\":<prefix>} ], \"networkType\": \"OVNKubernetes\" } }'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\":{ \"clusterNetwork\": [ {\"cidr\":\"10.217.0.0/14\",\"hostPrefix\": 23} ], \"networkType\": \"OVNKubernetes\" } }'",
"network.config.openshift.io/cluster patched",
"oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.clusterNetwork}\"",
"[{\"cidr\":\"10.217.0.0/14\",\"hostPrefix\":23}]",
"oc create sa ipfailover",
"oc adm policy add-scc-to-user privileged -z ipfailover",
"oc adm policy add-scc-to-user hostnetwork -z ipfailover",
"openstack port show <cluster_name> -c allowed_address_pairs",
"*Field* *Value* allowed_address_pairs ip_address='192.168.0.5', mac_address='fa:16:3e:31:f9:cb' ip_address='192.168.0.7', mac_address='fa:16:3e:31:f9:cb'",
"openstack port set <cluster_name> --allowed-address ip-address=1.1.1.1,mac-address=fa:fa:16:3e:31:f9:cb",
"apiVersion: apps/v1 kind: Deployment metadata: name: ipfailover-keepalived spec: env: - name: OPENSHIFT_HA_VIRTUAL_IPS value: \"1.1.1.1\"",
"apiVersion: apps/v1 kind: Deployment metadata: name: ipfailover-keepalived 1 labels: ipfailover: hello-openshift spec: strategy: type: Recreate replicas: 2 selector: matchLabels: ipfailover: hello-openshift template: metadata: labels: ipfailover: hello-openshift spec: serviceAccountName: ipfailover privileged: true hostNetwork: true nodeSelector: node-role.kubernetes.io/worker: \"\" containers: - name: openshift-ipfailover image: registry.redhat.io/openshift4/ose-keepalived-ipfailover:v4.13 ports: - containerPort: 63000 hostPort: 63000 imagePullPolicy: IfNotPresent securityContext: privileged: true volumeMounts: - name: lib-modules mountPath: /lib/modules readOnly: true - name: host-slash mountPath: /host readOnly: true mountPropagation: HostToContainer - name: etc-sysconfig mountPath: /etc/sysconfig readOnly: true - name: config-volume mountPath: /etc/keepalive env: - name: OPENSHIFT_HA_CONFIG_NAME value: \"ipfailover\" - name: OPENSHIFT_HA_VIRTUAL_IPS 2 value: \"1.1.1.1-2\" - name: OPENSHIFT_HA_VIP_GROUPS 3 value: \"10\" - name: OPENSHIFT_HA_NETWORK_INTERFACE 4 value: \"ens3\" #The host interface to assign the VIPs - name: OPENSHIFT_HA_MONITOR_PORT 5 value: \"30060\" - name: OPENSHIFT_HA_VRRP_ID_OFFSET 6 value: \"0\" - name: OPENSHIFT_HA_REPLICA_COUNT 7 value: \"2\" #Must match the number of replicas in the deployment - name: OPENSHIFT_HA_USE_UNICAST value: \"false\" #- name: OPENSHIFT_HA_UNICAST_PEERS #value: \"10.0.148.40,10.0.160.234,10.0.199.110\" - name: OPENSHIFT_HA_IPTABLES_CHAIN 8 value: \"INPUT\" #- name: OPENSHIFT_HA_NOTIFY_SCRIPT 9 # value: /etc/keepalive/mynotifyscript.sh - name: OPENSHIFT_HA_CHECK_SCRIPT 10 value: \"/etc/keepalive/mycheckscript.sh\" - name: OPENSHIFT_HA_PREEMPTION 11 value: \"preempt_delay 300\" - name: OPENSHIFT_HA_CHECK_INTERVAL 12 value: \"2\" livenessProbe: initialDelaySeconds: 10 exec: command: - pgrep - keepalived volumes: - name: lib-modules hostPath: path: /lib/modules - name: host-slash hostPath: path: / - name: etc-sysconfig hostPath: path: /etc/sysconfig # config-volume contains the check script # created with `oc create configmap keepalived-checkscript --from-file=mycheckscript.sh` - configMap: defaultMode: 0755 name: keepalived-checkscript name: config-volume imagePullSecrets: - name: openshift-pull-secret 13",
"#!/bin/bash # Whatever tests are needed # E.g., send request and verify response exit 0",
"oc create configmap mycustomcheck --from-file=mycheckscript.sh",
"oc set env deploy/ipfailover-keepalived OPENSHIFT_HA_CHECK_SCRIPT=/etc/keepalive/mycheckscript.sh",
"oc set volume deploy/ipfailover-keepalived --add --overwrite --name=config-volume --mount-path=/etc/keepalive --source='{\"configMap\": { \"name\": \"mycustomcheck\", \"defaultMode\": 493}}'",
"oc edit deploy ipfailover-keepalived",
"spec: containers: - env: - name: OPENSHIFT_HA_CHECK_SCRIPT 1 value: /etc/keepalive/mycheckscript.sh volumeMounts: 2 - mountPath: /etc/keepalive name: config-volume dnsPolicy: ClusterFirst volumes: 3 - configMap: defaultMode: 0755 4 name: customrouter name: config-volume",
"oc edit deploy ipfailover-keepalived",
"spec: containers: - env: - name: OPENSHIFT_HA_PREEMPTION 1 value: preempt_delay 300",
"spec: env: - name: OPENSHIFT_HA_VIP_GROUPS 1 value: \"3\"",
"oc get pod -l ipfailover -o jsonpath=\" {range .items[?(@.spec.volumes[*].configMap)]} {'Namespace: '}{.metadata.namespace} {'Pod: '}{.metadata.name} {'Volumes that use config maps:'} {range .spec.volumes[?(@.configMap)]} {'volume: '}{.name} {'configMap: '}{.configMap.name}{'\\n'}{end} {end}\"",
"Namespace: default Pod: keepalived-worker-59df45db9c-2x9mn Volumes that use config maps: volume: config-volume configMap: mycustomcheck",
"oc delete configmap <configmap_name>",
"oc get deployment -l ipfailover",
"NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE default ipfailover 2/2 2 2 105d",
"oc delete deployment <ipfailover_deployment_name>",
"oc delete sa ipfailover",
"apiVersion: batch/v1 kind: Job metadata: generateName: remove-ipfailover- labels: app: remove-ipfailover spec: template: metadata: name: remove-ipfailover spec: containers: - name: remove-ipfailover image: registry.redhat.io/openshift4/ose-keepalived-ipfailover:v4.13 command: [\"/var/lib/ipfailover/keepalived/remove-failover.sh\"] nodeSelector: 1 kubernetes.io/hostname: <host_name> 2 restartPolicy: Never",
"oc create -f remove-ipfailover-job.yaml",
"job.batch/remove-ipfailover-2h8dm created",
"oc logs job/remove-ipfailover-2h8dm",
"remove-failover.sh: OpenShift IP Failover service terminating. - Removing ip_vs module - Cleaning up - Releasing VIPs (interface eth0)",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: <name> 1 namespace: default 2 spec: config: '{ \"cniVersion\": \"0.4.0\", 3 \"name\": \"<name>\", 4 \"plugins\": [{ \"type\": \"<main_CNI_plugin>\" 5 }, { \"type\": \"tuning\", 6 \"sysctl\": { \"net.ipv4.conf.IFNAME.accept_redirects\": \"1\" 7 } } ] }",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tuningnad\", \"plugins\": [{ \"type\": \"bridge\" }, { \"type\": \"tuning\", \"sysctl\": { \"net.ipv4.conf.IFNAME.accept_redirects\": \"1\" } } ] }'",
"oc apply -f tuning-example.yaml",
"networkattachmentdefinition.k8.cni.cncf.io/tuningnad created",
"apiVersion: v1 kind: Pod metadata: name: tunepod namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 2 runAsGroup: 3000 3 allowPrivilegeEscalation: false 4 capabilities: 5 drop: [\"ALL\"] securityContext: runAsNonRoot: true 6 seccompProfile: 7 type: RuntimeDefault",
"oc apply -f examplepod.yaml",
"oc get pod",
"NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s",
"oc rsh tunepod",
"sh-4.4# sysctl net.ipv4.conf.net1.accept_redirects",
"net.ipv4.conf.net1.accept_redirects = 1",
"apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ports: - containerPort: 30100 name: sctpserver protocol: SCTP",
"apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp",
"oc create -f load-sctp-module.yaml",
"oc get nodes",
"apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi9/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP",
"oc create -f sctp-server.yaml",
"apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102",
"oc create -f sctp-service.yaml",
"apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi9/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"]",
"oc apply -f sctp-client.yaml",
"oc rsh sctpserver",
"nc -l 30102 --sctp",
"oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{\"\\n\"}}'",
"oc rsh sctpclient",
"nc <cluster_IP> 30102 --sctp",
"apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: name: openshift-ptp openshift.io/cluster-monitoring: \"true\"",
"oc create -f ptp-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp",
"oc create -f ptp-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: \"stable\" name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f ptp-sub.yaml",
"oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase 4.13.0-202301261535 Succeeded",
"oc get NodePtpDevice -n openshift-ptp -o yaml",
"apiVersion: v1 items: - apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: \"2022-01-27T15:16:28Z\" generation: 1 name: dev-worker-0 1 namespace: openshift-ptp resourceVersion: \"6538103\" uid: d42fc9ad-bcbf-4590-b6d8-b676c642781a spec: {} status: devices: 2 - name: eno1 - name: eno2 - name: eno3 - name: eno4 - name: enp5s0f0 - name: enp5s0f1",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster-clock namespace: openshift-ptp annotations: {} spec: profile: - name: grandmaster-clock # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2\" phc2sysOpts: \"-a -r -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: grandmaster-clock priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: grandmaster namespace: openshift-ptp annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: profile: - name: \"grandmaster\" ptp4lOpts: \"-2 --summary_interval -4\" phc2sysOpts: -r -u 0 -m -O -37 -N 8 -R 16 -s USDiface_master -n 24 ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" plugins: e810: enableDefaultConfig: true ts2phcOpts: \" \" ts2phcConf: | [nmea] ts2phc.master 1 [global] use_syslog 0 verbose 1 logging_level 7 ts2phc.pulsewidth 100000000 ts2phc.nmea_serialport USDgnss_serialport leapfile /usr/share/zoneinfo/leap-seconds.list [USDiface_master] ts2phc.extts_polarity rising ts2phc.extts_correction 0 ptp4lConf: | [USDiface_master] masterOnly 1 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 6 clockAccuracy 0x27 offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval 0 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0x20 recommend: - profile: \"grandmaster\" priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"oc create -f grandmaster-clock-ptp-config.yaml",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-74m2g 3/3 Running 3 4d15h 10.16.230.7 compute-1.example.com ptp-operator-5f4f48d7c-x7zkf 1/1 Running 1 4d15h 10.128.1.145 compute-1.example.com",
"oc logs linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container",
"ts2phc[94980.334]: [ts2phc.0.config] nmea delay: 98690975 ns ts2phc[94980.334]: [ts2phc.0.config] ens3f0 extts index 0 at 1676577329.999999999 corr 0 src 1676577330.901342528 diff -1 ts2phc[94980.334]: [ts2phc.0.config] ens3f0 master offset -1 s2 freq -1 ts2phc[94980.441]: [ts2phc.0.config] nmea sentence: GNRMC,195453.00,A,4233.24427,N,07126.64420,W,0.008,,160223,,,A,V phc2sys[94980.450]: [ptp4l.0.config] CLOCK_REALTIME phc offset 943 s2 freq -89604 delay 504 phc2sys[94980.512]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1000 s2 freq -89264 delay 474",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary-clock namespace: openshift-ptp annotations: {} spec: profile: - name: ordinary-clock # The interface name is hardware-specific interface: USDinterface ptp4lOpts: \"-2 -s\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 255 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 7 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: ordinary-clock priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"oc create -f ordinary-clock-ptp-config.yaml",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com",
"oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container",
"I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1 I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 -s I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock namespace: openshift-ptp annotations: {} spec: profile: - name: boundary-clock ptp4lOpts: \"-2\" phc2sysOpts: \"-a -r -n 24\" ptpSchedulingPolicy: SCHED_FIFO ptpSchedulingPriority: 10 ptpSettings: logReduce: \"true\" ptp4lConf: | # The interface name is hardware-specific [USDiface_slave] masterOnly 0 [USDiface_master_1] masterOnly 1 [USDiface_master_2] masterOnly 1 [USDiface_master_3] masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval -4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 50 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 clock_class_threshold 135 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 recommend: - profile: boundary-clock priority: 4 match: - nodeLabel: \"node-role.kubernetes.io/USDmcp\"",
"oc create -f boundary-clock-ptp-config.yaml",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com",
"oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container",
"I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24 I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock-ptp-config-nic1 namespace: openshift-ptp spec: profile: - name: \"profile1\" ptp4lOpts: \"-2 --summary_interval -4\" ptp4lConf: | 1 [ens5f1] masterOnly 1 [ens5f0] masterOnly 0 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 2",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock-ptp-config-nic2 namespace: openshift-ptp spec: profile: - name: \"profile2\" ptp4lOpts: \"-2 --summary_interval -4\" ptp4lConf: | 1 [ens7f1] masterOnly 1 [ens7f0] masterOnly 0",
"oc create -f boundary-clock-ptp-config-nic1.yaml",
"oc create -f boundary-clock-ptp-config-nic2.yaml",
"oc logs linuxptp-daemon-cvgr6 -n openshift-ptp -c linuxptp-daemon-container",
"ptp4l[80828.335]: [ptp4l.1.config] master offset 5 s2 freq -5727 path delay 519 ptp4l[80828.343]: [ptp4l.0.config] master offset -5 s2 freq -10607 path delay 533 phc2sys[80828.390]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1 s2 freq -87239 delay 539",
"oc edit PtpConfig -n openshift-ptp",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: <ptp_config_name> namespace: openshift-ptp spec: profile: - name: \"profile1\" ptpSchedulingPolicy: SCHED_FIFO 1 ptpSchedulingPriority: 10 2",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com",
"oc -n openshift-ptp logs linuxptp-daemon-lgm55 -c linuxptp-daemon-container|grep chrt",
"I1216 19:24:57.091872 1600715 daemon.go:285] /bin/chrt -f 65 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -2 --summary_interval -4 -m",
"oc edit PtpConfig -n openshift-ptp",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: <ptp_config_name> namespace: openshift-ptp spec: profile: - name: \"profile1\" ptpSettings: logReduce: \"true\"",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com",
"oc -n openshift-ptp logs <linux_daemon_container> -c linuxptp-daemon-container | grep \"master offset\" 1",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com",
"oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io",
"NAME AGE control-plane-0.example.com 10d control-plane-1.example.com 10d compute-0.example.com 10d compute-1.example.com 10d compute-2.example.com 10d",
"oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yaml",
"apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: \"2021-09-14T16:52:33Z\" generation: 1 name: compute-0.example.com namespace: openshift-ptp resourceVersion: \"177400\" uid: 30413db0-4d8d-46da-9bef-737bacd548fd spec: {} status: devices: - name: eno1 - name: eno2 - name: eno3 - name: eno4 - name: enp5s0f0 - name: enp5s0f1",
"oc get pods -n openshift-ptp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com",
"oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container>",
"pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'",
"sending: GET PORT_DATA_SET 40a6b7.fffe.166ef0-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 40a6b7.fffe.166ef0-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval -3 announceReceiptTimeout 3 logSyncInterval -4 delayMechanism 1 logMinPdelayReqInterval -4 versionNumber 2",
"oc adm must-gather --image=registry.redhat.io/openshift4/ptp-must-gather-rhel8:v4.13",
"apiVersion: ptp.openshift.io/v1 kind: PtpOperatorConfig metadata: name: default namespace: openshift-ptp spec: daemonNodeSelector: node-role.kubernetes.io/worker: \"\" ptpEventConfig: enableEventPublisher: true 1",
"oc apply -f ptp-operatorconfig.yaml",
"spec: profile: - name: \"profile1\" interface: \"enp5s0f0\" ptp4lOpts: \"-2 -s --summary_interval -4\" 1 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 2 ptp4lConf: \"\" 3 ptpClockThreshold: 4 holdOverTimeout: 5 maxOffsetThreshold: 100 minOffsetThreshold: -100",
"containers: - name: cloud-event-sidecar image: cloud-event-sidecar args: - \"--metrics-addr=127.0.0.1:9091\" - \"--store-path=/store\" - \"--transport-host=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043\" - \"--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\" 1 - \"--api-port=8089\"",
"apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: \"true\" service.alpha.openshift.io/serving-cert-secret-name: sidecar-consumer-secret name: consumer-events-subscription-service namespace: cloud-events labels: app: consumer-service spec: ports: - name: sub-port port: 9043 selector: app: consumer clusterIP: None sessionAffinity: None type: ClusterIP",
"oc get pods -n amq-interconnect",
"NAME READY STATUS RESTARTS AGE amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h",
"oc get pods -n openshift-ptp",
"NAME READY STATUS RESTARTS AGE linuxptp-daemon-2t78p 3/3 Running 0 12h linuxptp-daemon-k8n88 3/3 Running 0 12h",
"[ { \"id\": \"75b1ad8f-c807-4c23-acf5-56f4b7ee3826\", \"endpointUri\": \"http://localhost:9089/event\", \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/75b1ad8f-c807-4c23-acf5-56f4b7ee3826\", \"resource\": \"/cluster/node/compute-1.example.com/ptp\" } ]",
"{ \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions\", \"resource\": \"/cluster/node/compute-1.example.com/ptp\" }",
"{ \"status\": \"deleted all subscriptions\" }",
"{ \"id\":\"48210fb3-45be-4ce0-aa9b-41a0e58730ab\", \"endpointUri\": \"http://localhost:9089/event\", \"uriLocation\":\"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/48210fb3-45be-4ce0-aa9b-41a0e58730ab\", \"resource\":\"/cluster/node/compute-1.example.com/ptp\" }",
"{ \"status\": \"OK\" }",
"OK",
"[ { \"id\": \"0fa415ae-a3cf-4299-876a-589438bacf75\", \"endpointUri\": \"http://localhost:9085/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:9085/api/ocloudNotifications/v1/publishers/0fa415ae-a3cf-4299-876a-589438bacf75\", \"resource\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\" }, { \"id\": \"28cd82df-8436-4f50-bbd9-7a9742828a71\", \"endpointUri\": \"http://localhost:9085/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:9085/api/ocloudNotifications/v1/publishers/28cd82df-8436-4f50-bbd9-7a9742828a71\", \"resource\": \"/cluster/node/compute-1.example.com/sync/ptp-status/ptp-clock-class-change\" }, { \"id\": \"44aa480d-7347-48b0-a5b0-e0af01fa9677\", \"endpointUri\": \"http://localhost:9085/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:9085/api/ocloudNotifications/v1/publishers/44aa480d-7347-48b0-a5b0-e0af01fa9677\", \"resource\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\" } ]",
"oc logs -f linuxptp-daemon-cvgr6 -n openshift-ptp -c cloud-event-proxy",
"{ \"id\":\"c8a784d1-5f4a-4c16-9a81-a3b4313affe5\", \"type\":\"event.sync.sync-status.os-clock-sync-state-change\", \"source\":\"/cluster/compute-1.example.com/ptp/CLOCK_REALTIME\", \"dataContentType\":\"application/json\", \"time\":\"2022-05-06T15:31:23.906277159Z\", \"data\":{ \"version\":\"v1\", \"values\":[ { \"resource\":\"/sync/sync-status/os-clock-sync-state\", \"dataType\":\"notification\", \"valueType\":\"enumeration\", \"value\":\"LOCKED\" }, { \"resource\":\"/sync/sync-status/os-clock-sync-state\", \"dataType\":\"metric\", \"valueType\":\"decimal64.3\", \"value\":\"-53\" } ] } }",
"{ \"id\":\"69eddb52-1650-4e56-b325-86d44688d02b\", \"type\":\"event.sync.ptp-status.ptp-clock-class-change\", \"source\":\"/cluster/compute-1.example.com/ptp/ens2fx/master\", \"dataContentType\":\"application/json\", \"time\":\"2022-05-06T15:31:23.147100033Z\", \"data\":{ \"version\":\"v1\", \"values\":[ { \"resource\":\"/sync/ptp-status/ptp-clock-class-change\", \"dataType\":\"metric\", \"valueType\":\"decimal64.3\", \"value\":\"135\" } ] } }",
"{ \"id\":\"305ec18b-1472-47b3-aadd-8f37933249a9\", \"type\":\"event.sync.ptp-status.ptp-state-change\", \"source\":\"/cluster/compute-1.example.com/ptp/ens2fx/master\", \"dataContentType\":\"application/json\", \"time\":\"2022-05-06T15:31:23.467684081Z\", \"data\":{ \"version\":\"v1\", \"values\":[ { \"resource\":\"/sync/ptp-status/lock-state\", \"dataType\":\"notification\", \"valueType\":\"enumeration\", \"value\":\"LOCKED\" }, { \"resource\":\"/sync/ptp-status/lock-state\", \"dataType\":\"metric\", \"valueType\":\"decimal64.3\", \"value\":\"62\" } ] } }",
"{ \"id\": \"c1ac3aa5-1195-4786-84f8-da0ea4462921\", \"type\": \"event.sync.ptp-status.ptp-state-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\", \"dataContentType\": \"application/json\", \"time\": \"2023-01-10T02:41:57.094981478Z\", \"data\": { \"version\": \"v1\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"resource\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"29\" } ] } }",
"{ \"specversion\": \"0.3\", \"id\": \"4f51fe99-feaa-4e66-9112-66c5c9b9afcb\", \"source\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"type\": \"event.sync.sync-status.os-clock-sync-state-change\", \"subject\": \"/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state\", \"datacontenttype\": \"application/json\", \"time\": \"2022-11-29T17:44:22.202Z\", \"data\": { \"version\": \"v1\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/CLOCK_REALTIME\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"resource\": \"/cluster/node/compute-1.example.com/CLOCK_REALTIME\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"27\" } ] } }",
"{ \"id\": \"064c9e67-5ad4-4afb-98ff-189c6aa9c205\", \"type\": \"event.sync.ptp-status.ptp-clock-class-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/ptp-status/ptp-clock-class-change\", \"dataContentType\": \"application/json\", \"time\": \"2023-01-10T02:41:56.785673989Z\", \"data\": { \"version\": \"v1\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"165\" } ] } }",
"curl http://<node_name>:9091/metrics",
"HELP openshift_ptp_clock_state 0 = FREERUN, 1 = LOCKED, 2 = HOLDOVER TYPE openshift_ptp_clock_state gauge openshift_ptp_clock_state{iface=\"ens1fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 1 openshift_ptp_clock_state{iface=\"ens3fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 1 openshift_ptp_clock_state{iface=\"ens5fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 1 openshift_ptp_clock_state{iface=\"ens7fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 1 HELP openshift_ptp_delay_ns TYPE openshift_ptp_delay_ns gauge openshift_ptp_delay_ns{from=\"master\",iface=\"ens1fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 842 openshift_ptp_delay_ns{from=\"master\",iface=\"ens3fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 480 openshift_ptp_delay_ns{from=\"master\",iface=\"ens5fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 584 openshift_ptp_delay_ns{from=\"master\",iface=\"ens7fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 482 openshift_ptp_delay_ns{from=\"phc\",iface=\"CLOCK_REALTIME\",node=\"compute-1.example.com\",process=\"phc2sys\"} 547 HELP openshift_ptp_offset_ns TYPE openshift_ptp_offset_ns gauge openshift_ptp_offset_ns{from=\"master\",iface=\"ens1fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} -2 openshift_ptp_offset_ns{from=\"master\",iface=\"ens3fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} -44 openshift_ptp_offset_ns{from=\"master\",iface=\"ens5fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} -8 openshift_ptp_offset_ns{from=\"master\",iface=\"ens7fx\",node=\"compute-1.example.com\",process=\"ptp4l\"} 3 openshift_ptp_offset_ns{from=\"phc\",iface=\"CLOCK_REALTIME\",node=\"compute-1.example.com\",process=\"phc2sys\"} 12",
"func server() { http.HandleFunc(\"/event\", getEvent) http.ListenAndServe(\"localhost:8989\", nil) } func getEvent(w http.ResponseWriter, req *http.Request) { defer req.Body.Close() bodyBytes, err := io.ReadAll(req.Body) if err != nil { log.Errorf(\"error reading event %v\", err) } e := string(bodyBytes) if e != \"\" { processEvent(bodyBytes) log.Infof(\"received event %s\", string(bodyBytes)) } else { w.WriteHeader(http.StatusNoContent) } }",
"import ( \"github.com/redhat-cne/sdk-go/pkg/pubsub\" \"github.com/redhat-cne/sdk-go/pkg/types\" v1pubsub \"github.com/redhat-cne/sdk-go/v1/pubsub\" ) // Subscribe to PTP events using REST API s1,_:=createsubscription(\"/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state\") 1 s2,_:=createsubscription(\"/cluster/node/<node_name>/sync/ptp-status/ptp-clock-class-change\") s3,_:=createsubscription(\"/cluster/node/<node_name>/sync/ptp-status/lock-state\") // Create PTP event subscriptions POST func createSubscription(resourceAddress string) (sub pubsub.PubSub, err error) { var status int apiPath:= \"/api/ocloudNotifications/v1/\" localAPIAddr:=localhost:8989 // vDU service API address apiAddr:= \"localhost:8089\" // event framework API address subURL := &types.URI{URL: url.URL{Scheme: \"http\", Host: apiAddr Path: fmt.Sprintf(\"%s%s\", apiPath, \"subscriptions\")}} endpointURL := &types.URI{URL: url.URL{Scheme: \"http\", Host: localAPIAddr, Path: \"event\"}} sub = v1pubsub.NewPubSub(endpointURL, resourceAddress) var subB []byte if subB, err = json.Marshal(&sub); err == nil { rc := restclient.New() if status, subB = rc.PostWithReturn(subURL, subB); status != http.StatusCreated { err = fmt.Errorf(\"error in subscription creation api at %s, returned status %d\", subURL, status) } else { err = json.Unmarshal(subB, &sub) } } else { err = fmt.Errorf(\"failed to marshal subscription for %s\", resourceAddress) } return }",
"//Get PTP event state for the resource func getCurrentState(resource string) { //Create publisher url := &types.URI{URL: url.URL{Scheme: \"http\", Host: localhost:8989, Path: fmt.SPrintf(\"/api/ocloudNotifications/v1/%s/CurrentState\",resource}} rc := restclient.New() status, event := rc.Get(url) if status != http.StatusOK { log.Errorf(\"CurrentState:error %d from url %s, %s\", status, url.String(), event) } else { log.Debugf(\"Got CurrentState: %s \", event) } }",
"apiVersion: apps/v1 kind: Deployment metadata: name: event-consumer-deployment namespace: <namespace> labels: app: consumer spec: replicas: 1 selector: matchLabels: app: consumer template: metadata: labels: app: consumer spec: serviceAccountName: sidecar-consumer-sa containers: - name: event-subscriber image: event-subscriber-app - name: cloud-event-proxy-as-sidecar image: openshift4/ose-cloud-event-proxy args: - \"--metrics-addr=127.0.0.1:9091\" - \"--store-path=/store\" - \"--transport-host=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043\" - \"--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043\" - \"--api-port=8089\" env: - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: NODE_IP valueFrom: fieldRef: fieldPath: status.hostIP volumeMounts: - name: pubsubstore mountPath: /store ports: - name: metrics-port containerPort: 9091 - name: sub-port containerPort: 9043 volumes: - name: pubsubstore emptyDir: {}",
"apiVersion: apps/v1 kind: Deployment metadata: name: cloud-event-proxy-sidecar namespace: cloud-events labels: app: cloud-event-proxy spec: selector: matchLabels: app: cloud-event-proxy template: metadata: labels: app: cloud-event-proxy spec: nodeSelector: node-role.kubernetes.io/worker: \"\" containers: - name: cloud-event-sidecar image: openshift4/ose-cloud-event-proxy args: - \"--metrics-addr=127.0.0.1:9091\" - \"--store-path=/store\" - \"--transport-host=amqp://router.router.svc.cluster.local\" - \"--api-port=8089\" env: - name: <node_name> valueFrom: fieldRef: fieldPath: spec.nodeName - name: <node_ip> valueFrom: fieldRef: fieldPath: status.hostIP volumeMounts: - name: pubsubstore mountPath: /store ports: - name: metrics-port containerPort: 9091 - name: sub-port containerPort: 9043 volumes: - name: pubsubstore emptyDir: {}",
"apiVersion: v1 kind: Service metadata: annotations: prometheus.io/scrape: \"true\" service.alpha.openshift.io/serving-cert-secret-name: sidecar-consumer-secret name: consumer-events-subscription-service namespace: cloud-events labels: app: consumer-service spec: ports: - name: sub-port port: 9043 selector: app: consumer clusterIP: None sessionAffinity: None type: ClusterIP",
"{ \"endpointUri\": \"http://localhost:8989/event\", \"resource\": \"/cluster/node/<node_name>/sync/ptp-status/lock-state\", }",
"{ \"id\": \"e23473d9-ba18-4f78-946e-401a0caeff90\", \"endpointUri\": \"http://localhost:8989/event\", \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/e23473d9-ba18-4f78-946e-401a0caeff90\", \"resource\": \"/cluster/node/<node_name>/sync/ptp-status/lock-state\", }",
"{ \"endpointUri\": \"http://localhost:8989/event\", \"resource\": \"/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state\", }",
"{ \"id\": \"e23473d9-ba18-4f78-946e-401a0caeff90\", \"endpointUri\": \"http://localhost:8989/event\", \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/e23473d9-ba18-4f78-946e-401a0caeff90\", \"resource\": \"/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state\", }",
"{ \"endpointUri\": \"http://localhost:8989/event\", \"resource\": \"/cluster/node/<node_name>/sync/ptp-status/ptp-clock-class-change\", }",
"{ \"id\": \"e23473d9-ba18-4f78-946e-401a0caeff90\", \"endpointUri\": \"http://localhost:8989/event\", \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/e23473d9-ba18-4f78-946e-401a0caeff90\", \"resource\": \"/cluster/node/<node_name>/sync/ptp-status/ptp-clock-class-change\", }",
"{ \"id\": \"c1ac3aa5-1195-4786-84f8-da0ea4462921\", \"type\": \"event.sync.ptp-status.ptp-state-change\", \"source\": \"/cluster/node/compute-1.example.com/sync/ptp-status/lock-state\", \"dataContentType\": \"application/json\", \"time\": \"2023-01-10T02:41:57.094981478Z\", \"data\": { \"version\": \"v1\", \"values\": [ { \"resource\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"notification\", \"valueType\": \"enumeration\", \"value\": \"LOCKED\" }, { \"resource\": \"/cluster/node/compute-1.example.com/ens5fx/master\", \"dataType\": \"metric\", \"valueType\": \"decimal64.3\", \"value\": \"29\" } ] } }",
"oc get pods -n openshift-ptp",
"NAME READY STATUS RESTARTS AGE linuxptp-daemon-2t78p 3/3 Running 0 8h linuxptp-daemon-k8n88 3/3 Running 0 8h",
"oc exec -it <linuxptp-daemon> -n openshift-ptp -c cloud-event-proxy -- curl 127.0.0.1:9091/metrics",
"HELP cne_transport_connections_resets Metric to get number of connection resets TYPE cne_transport_connections_resets gauge cne_transport_connection_reset 1 HELP cne_transport_receiver Metric to get number of receiver created TYPE cne_transport_receiver gauge cne_transport_receiver{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"active\"} 2 cne_transport_receiver{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"active\"} 2 HELP cne_transport_sender Metric to get number of sender created TYPE cne_transport_sender gauge cne_transport_sender{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"active\"} 1 cne_transport_sender{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"active\"} 1 HELP cne_events_ack Metric to get number of events produced TYPE cne_events_ack gauge cne_events_ack{status=\"success\",type=\"/cluster/node/compute-1.example.com/ptp\"} 18 cne_events_ack{status=\"success\",type=\"/cluster/node/compute-1.example.com/redfish/event\"} 18 HELP cne_events_transport_published Metric to get number of events published by the transport TYPE cne_events_transport_published gauge cne_events_transport_published{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"failed\"} 1 cne_events_transport_published{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"success\"} 18 cne_events_transport_published{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"failed\"} 1 cne_events_transport_published{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"success\"} 18 HELP cne_events_transport_received Metric to get number of events received by the transport TYPE cne_events_transport_received gauge cne_events_transport_received{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"success\"} 18 cne_events_transport_received{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"success\"} 18 HELP cne_events_api_published Metric to get number of events published by the rest api TYPE cne_events_api_published gauge cne_events_api_published{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"success\"} 19 cne_events_api_published{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"success\"} 19 HELP cne_events_received Metric to get number of events received TYPE cne_events_received gauge cne_events_received{status=\"success\",type=\"/cluster/node/compute-1.example.com/ptp\"} 18 cne_events_received{status=\"success\",type=\"/cluster/node/compute-1.example.com/redfish/event\"} 18 HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served. TYPE promhttp_metric_handler_requests_in_flight gauge promhttp_metric_handler_requests_in_flight 1 HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code. TYPE promhttp_metric_handler_requests_total counter promhttp_metric_handler_requests_total{code=\"200\"} 4 promhttp_metric_handler_requests_total{code=\"500\"} 0 promhttp_metric_handler_requests_total{code=\"503\"} 0",
"oc -n external-dns-operator get sub external-dns-operator -o yaml | yq '.status.installplan.name'",
"install-zcvlr",
"oc -n external-dns-operator get ip <install_plan_name> -o yaml | yq '.status.phase'",
"Complete",
"oc get -n external-dns-operator deployment/external-dns-operator",
"NAME READY UP-TO-DATE AVAILABLE AGE external-dns-operator 1/1 1 1 23h",
"oc logs -n external-dns-operator deployment/external-dns-operator -c external-dns-operator",
"time=\"2022-09-02T08:53:57Z\" level=error msg=\"Failure in zone test.example.io. [Id: /hostedzone/Z06988883Q0H0RL6UMXXX]\" time=\"2022-09-02T08:53:57Z\" level=error msg=\"InvalidChangeBatch: [FATAL problem: DomainLabelTooLong (Domain label is too long) encountered with 'external-dns-a-hello-openshift-aaaaaaaaaa-bbbbbbbbbb-ccccccc']\\n\\tstatus code: 400, request id: e54dfd5a-06c6-47b0-bcb9-a4f7c3a4e0c6\"",
"apiVersion: v1 kind: Namespace metadata: name: external-dns-operator",
"oc apply -f namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: external-dns-operator namespace: external-dns-operator spec: upgradeStrategy: Default targetNamespaces: - external-dns-operator",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: external-dns-operator namespace: external-dns-operator spec: channel: stable-v1 installPlanApproval: Automatic name: external-dns-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc apply -f subscription.yaml",
"oc -n external-dns-operator get subscription external-dns-operator --template='{{.status.installplan.name}}{{\"\\n\"}}'",
"oc -n external-dns-operator get ip <install_plan_name> --template='{{.status.phase}}{{\"\\n\"}}'",
"oc -n external-dns-operator get pod",
"NAME READY STATUS RESTARTS AGE external-dns-operator-5584585fd7-5lwqm 2/2 Running 0 11m",
"oc -n external-dns-operator get subscription",
"NAME PACKAGE SOURCE CHANNEL external-dns-operator external-dns-operator redhat-operators stable-v1",
"oc -n external-dns-operator get csv",
"NAME DISPLAY VERSION REPLACES PHASE external-dns-operator.v<1.y.z> ExternalDNS Operator <1.y.z> Succeeded",
"spec: provider: type: AWS 1 aws: credentials: name: aws-access-key 2",
"zones: - \"myzoneid\" 1",
"domains: - filterType: Include 1 matchType: Exact 2 name: \"myzonedomain1.com\" 3 - filterType: Include matchType: Pattern 4 pattern: \".*\\\\.otherzonedomain\\\\.com\" 5",
"source: 1 type: Service 2 service: serviceType: 3 - LoadBalancer - ClusterIP labelFilter: 4 matchLabels: external-dns.mydomain.org/publish: \"yes\" hostnameAnnotation: \"Allow\" 5 fqdnTemplate: - \"{{.Name}}.myzonedomain.com\" 6",
"source: type: OpenShiftRoute 1 openshiftRouteOptions: routerName: default 2 labelFilter: matchLabels: external-dns.mydomain.org/publish: \"yes\"",
"oc whoami",
"system:admin",
"export AWS_ACCESS_KEY_ID=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_access_key_id}} | base64 -d) export AWS_SECRET_ACCESS_KEY=USD(oc get secrets aws-creds -n kube-system --template={{.data.aws_secret_access_key}} | base64 -d)",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.testextdnsoperator.apacshift.support console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.testextdnsoperator.apacshift.support downloads http edge/Redirect None",
"aws route53 list-hosted-zones | grep testextdnsoperator.apacshift.support",
"HOSTEDZONES terraform /hostedzone/Z02355203TNN1XXXX1J6O testextdnsoperator.apacshift.support. 5",
"cat <<EOF | oc create -f - apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-aws 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: testextdnsoperator.apacshift.support 4 provider: type: AWS 5 source: 6 type: OpenShiftRoute 7 openshiftRouteOptions: routerName: default 8 EOF",
"aws route53 list-resource-record-sets --hosted-zone-id Z02355203TNN1XXXX1J6O --query \"ResourceRecordSets[?Type == 'CNAME']\" | grep console",
"CLIENT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_id}} | base64 -d) CLIENT_SECRET=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_secret}} | base64 -d) RESOURCE_GROUP=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_resourcegroup}} | base64 -d) SUBSCRIPTION_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_subscription_id}} | base64 -d) TENANT_ID=USD(oc get secrets azure-credentials -n kube-system --template={{.data.azure_tenant_id}} | base64 -d)",
"az login --service-principal -u \"USD{CLIENT_ID}\" -p \"USD{CLIENT_SECRET}\" --tenant \"USD{TENANT_ID}\"",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.test.azure.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.azure.example.com downloads http edge/Redirect None",
"az network dns zone list --resource-group \"USD{RESOURCE_GROUP}\"",
"apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-azure 1 spec: zones: - \"/subscriptions/1234567890/resourceGroups/test-azure-xxxxx-rg/providers/Microsoft.Network/dnszones/test.azure.example.com\" 2 provider: type: Azure 3 source: openshiftRouteOptions: 4 routerName: default 5 type: OpenShiftRoute 6",
"az network dns record-set list -g \"USD{RESOURCE_GROUP}\" -z test.azure.example.com | grep console",
"oc get secret gcp-credentials -n kube-system --template='{{USDv := index .data \"service_account.json\"}}{{USDv}}' | base64 -d - > decoded-gcloud.json",
"export GOOGLE_CREDENTIALS=decoded-gcloud.json",
"gcloud auth activate-service-account <client_email as per decoded-gcloud.json> --key-file=decoded-gcloud.json",
"gcloud config set project <project_id as per decoded-gcloud.json>",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.test.gcp.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.gcp.example.com downloads http edge/Redirect None",
"gcloud dns managed-zones list | grep test.gcp.example.com",
"qe-cvs4g-private-zone test.gcp.example.com",
"apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-gcp 1 spec: domains: - filterType: Include 2 matchType: Exact 3 name: test.gcp.example.com 4 provider: type: GCP 5 source: openshiftRouteOptions: 6 routerName: default 7 type: OpenShiftRoute 8",
"gcloud dns record-sets list --zone=qe-cvs4g-private-zone | grep console",
"oc -n external-dns-operator create secret generic infoblox-credentials --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_USERNAME=<infoblox_username> --from-literal=EXTERNAL_DNS_INFOBLOX_WAPI_PASSWORD=<infoblox_password>",
"oc get routes --all-namespaces | grep console",
"openshift-console console console-openshift-console.apps.test.example.com console https reencrypt/Redirect None openshift-console downloads downloads-openshift-console.apps.test.example.com downloads http edge/Redirect None",
"apiVersion: externaldns.olm.openshift.io/v1beta1 kind: ExternalDNS metadata: name: sample-infoblox 1 spec: provider: type: Infoblox 2 infoblox: credentials: name: infoblox-credentials gridHost: USD{INFOBLOX_GRID_PUBLIC_IP} wapiPort: 443 wapiVersion: \"2.3.1\" domains: - filterType: Include matchType: Exact name: test.example.com source: type: OpenShiftRoute 3 openshiftRouteOptions: routerName: default 4",
"oc create -f external-dns-sample-infoblox.yaml",
"oc -n external-dns-operator create configmap trusted-ca",
"oc -n external-dns-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true",
"oc -n external-dns-operator patch subscription external-dns-operator --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/config\", \"value\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}]}}]'",
"oc -n external-dns-operator exec deploy/external-dns-operator -c external-dns-operator -- printenv TRUSTED_CA_CONFIGMAP_NAME",
"trusted-ca",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-router spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" 1 podSelector: {} policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-hostnetwork spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/host-network: \"\" podSelector: {} policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchLabels: role: frontend - from: - podSelector: matchLabels: role: backend",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: test-network-policy spec: podSelector: {} ingress: - from: - podSelector: matchExpressions: - {key: role, operator: In, values: [frontend, backend]}",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy1 spec: podSelector: matchLabels: role: db ingress: - from: - podSelector: matchLabels: role: frontend --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy2 spec: podSelector: matchLabels: role: client ingress: - from: - podSelector: matchLabels: role: frontend",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: policy3 spec: podSelector: matchExpressions: - {key: role, operator: In, values: [db, client]} ingress: - from: - podSelector: matchLabels: role: frontend",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"touch <policy_name>.yaml",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: []",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-traffic-pod spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y",
"oc apply -f <policy_name>.yaml -n <namespace>",
"networkpolicy.networking.k8s.io/deny-by-default created",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default 1 spec: podSelector: {} 2 ingress: [] 3",
"oc apply -f deny-by-default.yaml",
"networkpolicy.networking.k8s.io/deny-by-default created",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-external namespace: default spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {}",
"oc apply -f web-allow-external.yaml",
"networkpolicy.networking.k8s.io/web-allow-external created",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-namespaces namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2",
"oc apply -f web-allow-all-namespaces.yaml",
"networkpolicy.networking.k8s.io/web-allow-all-namespaces created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2",
"oc apply -f web-allow-prod.yaml",
"networkpolicy.networking.k8s.io/web-allow-prod created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc create namespace prod",
"oc label namespace/prod purpose=production",
"oc create namespace dev",
"oc label namespace/dev purpose=testing",
"oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"wget: download timed out",
"oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"oc get networkpolicy",
"oc describe networkpolicy <policy_name> -n <namespace>",
"oc describe networkpolicy allow-same-namespace",
"Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress",
"oc get networkpolicy",
"oc apply -n <namespace> -f <policy_file>.yaml",
"oc edit networkpolicy <policy_name> -n <namespace>",
"oc describe networkpolicy <policy_name> -n <namespace>",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"oc delete networkpolicy <policy_name> -n <namespace>",
"networkpolicy.networking.k8s.io/default-deny deleted",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc edit template <project_template> -n openshift-config",
"objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress",
"oc new-project <project> 1",
"oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF",
"cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-kube-apiserver-operator spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-kube-apiserver-operator podSelector: matchLabels: app: kube-apiserver-operator policyTypes: - Ingress EOF",
"oc describe networkpolicy",
"Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress",
"oc -n aws-load-balancer-operator get sub aws-load-balancer-operator --template='{{.status.installplan.name}}{{\"\\n\"}}'",
"install-zlfbt",
"oc -n aws-load-balancer-operator get ip <install_plan_name> --template='{{.status.phase}}{{\"\\n\"}}'",
"Complete",
"oc get -n aws-load-balancer-operator deployment/aws-load-balancer-operator-controller-manager",
"NAME READY UP-TO-DATE AVAILABLE AGE aws-load-balancer-operator-controller-manager 1/1 1 1 23h",
"oc logs -n aws-load-balancer-operator deployment/aws-load-balancer-operator-controller-manager -c manager",
"apiVersion: v1 kind: Namespace metadata: name: aws-load-balancer-operator",
"oc apply -f namespace.yaml",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-load-balancer-operator namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - ec2:DescribeSubnets effect: Allow resource: \"*\" - action: - ec2:CreateTags - ec2:DeleteTags effect: Allow resource: arn:aws:ec2:*:*:subnet/* - action: - ec2:DescribeVpcs effect: Allow resource: \"*\" secretRef: name: aws-load-balancer-operator namespace: aws-load-balancer-operator serviceAccountNames: - aws-load-balancer-operator-controller-manager",
"oc apply -f credentialsrequest.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: aws-lb-operatorgroup namespace: aws-load-balancer-operator spec: upgradeStrategy: Default",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: aws-load-balancer-operator namespace: aws-load-balancer-operator spec: channel: stable-v1 installPlanApproval: Automatic name: aws-load-balancer-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc apply -f subscription.yaml",
"oc -n aws-load-balancer-operator get subscription aws-load-balancer-operator --template='{{.status.installplan.name}}{{\"\\n\"}}'",
"oc -n aws-load-balancer-operator get ip <install_plan_name> --template='{{.status.phase}}{{\"\\n\"}}'",
"oc create namespace aws-load-balancer-operator",
"curl --create-dirs -o <path-to-credrequests-dir>/cr.yaml https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/main/hack/operator-credentials-request.yaml",
"ccoctl aws create-iam-roles --name <name> --region=<aws_region> --credentials-requests-dir=<path-to-credrequests-dir> --identity-provider-arn <oidc-arn>",
"ls manifests/*-credentials.yaml | xargs -I{} oc apply -f {}",
"oc -n aws-load-balancer-operator get secret aws-load-balancer-operator --template='{{index .data \"credentials\"}}' | base64 -d",
"[default] sts_regional_endpoints = regional role_arn = arn:aws:iam::999999999999:role/aws-load-balancer-operator-aws-load-balancer-operator web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token",
"oc get credentialsrequest -n openshift-cloud-credential-operator aws-load-balancer-controller-<cr-name> -o yaml > <path-to-credrequests-dir>/cr.yaml 1",
"ccoctl aws create-iam-roles --name <name> --region=<aws_region> --credentials-requests-dir=<path-to-credrequests-dir> --identity-provider-arn <oidc-arn>",
"ls manifests/*-credentials.yaml | xargs -I{} oc apply -f {}",
"oc -n aws-load-balancer-operator get pods NAME READY STATUS RESTARTS AGE aws-load-balancer-controller-cluster-9b766d6-gg82c 1/1 Running 0 137m aws-load-balancer-operator-controller-manager-b55ff68cc-85jzg 2/2 Running 0 3h26m",
"curl --create-dirs -o <path-to-credrequests-dir>/cr.yaml https://raw.githubusercontent.com/openshift/aws-load-balancer-operator/main/hack/controller/controller-credentials-request.yaml",
"ccoctl aws create-iam-roles --name <name> --region=<aws_region> --credentials-requests-dir=<path-to-credrequests-dir> --identity-provider-arn <oidc-arn>",
"ls manifests/*-credentials.yaml | xargs -I{} oc apply -f {}",
"oc -n aws-load-balancer-operator get secret aws-load-balancer-controller-manual-cluster --template='{{index .data \"credentials\"}}' | base64 -d",
"[default] sts_regional_endpoints = regional role_arn = arn:aws:iam::999999999999:role/aws-load-balancer-operator-aws-load-balancer-controller web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token",
"apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController 1 metadata: name: cluster 2 spec: credentials: name: <secret-name> 3",
"apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController 1 metadata: name: cluster 2 spec: subnetTagging: Auto 3 additionalResourceTags: 4 - key: example.org/security-scope value: staging ingressClass: alb 5 config: replicas: 2 6 enabledAddons: 7 - AWSWAFv2 8",
"oc create -f sample-aws-lb.yaml",
"apiVersion: apps/v1 kind: Deployment 1 metadata: name: <echoserver> 2 namespace: echoserver spec: selector: matchLabels: app: echoserver replicas: 3 3 template: metadata: labels: app: echoserver spec: containers: - image: openshift/origin-node command: - \"/bin/socat\" args: - TCP4-LISTEN:8080,reuseaddr,fork - EXEC:'/bin/bash -c \\\"printf \\\\\\\"HTTP/1.0 200 OK\\r\\n\\r\\n\\\\\\\"; sed -e \\\\\\\"/^\\r/q\\\\\\\"\\\"' imagePullPolicy: Always name: echoserver ports: - containerPort: 8080",
"apiVersion: v1 kind: Service 1 metadata: name: <echoserver> 2 namespace: echoserver spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: app: echoserver",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: <name> 1 namespace: echoserver annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: instance spec: ingressClassName: alb rules: - http: paths: - path: / pathType: Exact backend: service: name: <echoserver> 2 port: number: 80",
"HOST=USD(oc get ingress -n echoserver echoserver --template='{{(index .status.loadBalancer.ingress 0).hostname}}')",
"curl USDHOST",
"apiVersion: elbv2.k8s.aws/v1beta1 1 kind: IngressClassParams metadata: name: single-lb-params 2 spec: group: name: single-lb 3",
"oc create -f sample-single-lb-params.yaml",
"apiVersion: networking.k8s.io/v1 1 kind: IngressClass metadata: name: single-lb 2 spec: controller: ingress.k8s.aws/alb 3 parameters: apiGroup: elbv2.k8s.aws 4 kind: IngressClassParams 5 name: single-lb-params 6",
"oc create -f sample-single-lb-class.yaml",
"apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController metadata: name: cluster spec: subnetTagging: Auto ingressClass: single-lb 1",
"oc create -f sample-single-lb.yaml",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-1 1 annotations: alb.ingress.kubernetes.io/scheme: internet-facing 2 alb.ingress.kubernetes.io/group.order: \"1\" 3 alb.ingress.kubernetes.io/target-type: instance 4 spec: ingressClassName: single-lb 5 rules: - host: example.com 6 http: paths: - path: /blog 7 pathType: Prefix backend: service: name: example-1 8 port: number: 80 9 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-2 annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/group.order: \"2\" alb.ingress.kubernetes.io/target-type: instance spec: ingressClassName: single-lb rules: - host: example.com http: paths: - path: /store pathType: Prefix backend: service: name: example-2 port: number: 80 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example-3 annotations: alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/group.order: \"3\" alb.ingress.kubernetes.io/target-type: instance spec: ingressClassName: single-lb rules: - host: example.com http: paths: - path: / pathType: Prefix backend: service: name: example-3 port: number: 80",
"oc create -f sample-multiple-ingress.yaml",
"apiVersion: networking.olm.openshift.io/v1 kind: AWSLoadBalancerController metadata: name: cluster spec: subnetTagging: Auto ingressClass: tls-termination 1",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: <example> 1 annotations: alb.ingress.kubernetes.io/scheme: internet-facing 2 alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx 3 spec: ingressClassName: tls-termination 4 rules: - host: <example.com> 5 http: paths: - path: / pathType: Exact backend: service: name: <example-service> 6 port: number: 80",
"oc -n aws-load-balancer-operator create configmap trusted-ca",
"oc -n aws-load-balancer-operator label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true",
"oc -n aws-load-balancer-operator patch subscription aws-load-balancer-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}],\"volumes\":[{\"name\":\"trusted-ca\",\"configMap\":{\"name\":\"trusted-ca\"}}],\"volumeMounts\":[{\"name\":\"trusted-ca\",\"mountPath\":\"/etc/pki/tls/certs/albo-tls-ca-bundle.crt\",\"subPath\":\"ca-bundle.crt\"}]}}}'",
"oc -n aws-load-balancer-operator exec deploy/aws-load-balancer-operator-controller-manager -c manager -- bash -c \"ls -l /etc/pki/tls/certs/albo-tls-ca-bundle.crt; printenv TRUSTED_CA_CONFIGMAP_NAME\"",
"-rw-r--r--. 1 root 1000690000 5875 Jan 11 12:25 /etc/pki/tls/certs/albo-tls-ca-bundle.crt trusted-ca",
"oc -n aws-load-balancer-operator rollout restart deployment/aws-load-balancer-operator-controller-manager",
"openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id>",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: 1 - name: <name> 2 namespace: <namespace> 3 rawCNIConfig: |- 4 { } type: Raw",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name> 1 spec: config: |- 2 { }",
"bridge vlan add vid VLAN_ID dev DEV",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"bridge-net\", \"type\": \"bridge\", \"isGateway\": true, \"vlan\": 2, \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"hostdev-net\", \"type\": \"host-device\", \"device\": \"eth1\" }",
"{ \"name\": \"vlan-net\", \"cniVersion\": \"0.3.1\", \"type\": \"vlan\", \"master\": \"eth0\", \"mtu\": 1500, \"vlanId\": 5, \"linkInContainer\": false, \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.1.1.0/24\" }, \"dns\": { \"nameservers\": [ \"10.1.1.1\", \"8.8.8.8\" ] } }",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"ipvlan-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l3\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.10.10/24\" } ] } }",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-net\", \"type\": \"macvlan\", \"master\": \"eth1\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"l2-network\", \"type\": \"ovn-k8s-cni-overlay\", \"topology\":\"layer2\", \"subnets\": \"10.100.200.0/24\", \"mtu\": 1300, \"netAttachDefName\": \"ns1/l2-network\", \"excludeSubnets\": \"10.100.200.0/29\" }",
"apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: l2-network name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container",
"apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"l2-network\", 1 \"mac\": \"02:03:04:05:06:07\", 2 \"interface\": \"myiface1\", 3 \"ips\": [ \"192.0.2.20/24\" ] 4 } ]' name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container",
"{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #",
"{ \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }",
"oc edit network.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default rawCNIConfig: |- { \"name\": \"whereabouts-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\" } } type: Raw",
"oc get all -n openshift-multus | grep whereabouts-reconciler",
"pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s",
"oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression=\"*/15 * * * *\"",
"oc get all -n openshift-multus | grep whereabouts-reconciler",
"pod/whereabouts-reconciler-2p7hw 1/1 Running 0 4m14s pod/whereabouts-reconciler-76jk7 1/1 Running 0 4m14s pod/whereabouts-reconciler-94zw6 1/1 Running 0 4m14s pod/whereabouts-reconciler-mfh68 1/1 Running 0 4m14s pod/whereabouts-reconciler-pgshz 1/1 Running 0 4m14s pod/whereabouts-reconciler-xn5xz 1/1 Running 0 4m14s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 4m16s",
"oc -n openshift-multus logs whereabouts-reconciler-2p7hw",
"2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..2024_02_02_16_33_54.1375928161\": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..2024_02_02_16_33_54.1375928161\": CHMOD 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..data_tmp\": RENAME 2024-02-02T16:33:54Z [verbose] using expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] configuration updated to file \"/cron-schedule/..data\". New cron expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] successfully updated CRON configuration id \"00c2d1c9-631d-403f-bb86-73ad104a6817\" - new cron expression: */15 * * * * 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/config\": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: \"/cron-schedule/..2024_02_02_16_26_17.3874177937\": REMOVE 2024-02-02T16:45:00Z [verbose] starting reconciler run 2024-02-02T16:45:00Z [debug] NewReconcileLooper - inferred connection data 2024-02-02T16:45:00Z [debug] listing IP pools 2024-02-02T16:45:00Z [debug] no IP addresses to cleanup 2024-02-02T16:45:00Z [verbose] reconciler success",
"oc create namespace <namespace_name>",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: - name: tertiary-net namespace: namespace2 type: Raw rawCNIConfig: |- { \"cniVersion\": \"0.3.1\", \"name\": \"tertiary-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l2\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.1.23/24\" } ] } }",
"oc get network-attachment-definitions -n <namespace>",
"NAME AGE test-network-1 14m",
"apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: next-net spec: config: |- { \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"host-device\", \"device\": \"eth1\", \"ipam\": { \"type\": \"dhcp\" } }",
"oc apply -f <file>.yaml",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: annotations: k8s.v1.cni.cncf.io/policy-for: <network_name>",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: useMultiNetworkPolicy: true",
"oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml",
"network.operator.openshift.io/cluster patched",
"touch <policy_name>.yaml",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: []",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: - from: - podSelector: {}",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-traffic-pod annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: api-allow annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: bookstore role: api ingress: - from: - podSelector: matchLabels: app: bookstore",
"oc apply -f <policy_name>.yaml -n <namespace>",
"multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created",
"oc get multi-networkpolicy",
"oc apply -n <namespace> -f <policy_file>.yaml",
"oc edit multi-networkpolicy <policy_name> -n <namespace>",
"oc describe multi-networkpolicy <policy_name> -n <namespace>",
"oc get multi-networkpolicy",
"oc describe multi-networkpolicy <policy_name> -n <namespace>",
"oc delete multi-networkpolicy <policy_name> -n <namespace>",
"multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default namespace: default 1 annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> 2 spec: podSelector: {} 3 ingress: [] 4",
"oc apply -f deny-by-default.yaml",
"multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-external namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {}",
"oc apply -f web-allow-external.yaml",
"multinetworkpolicy.k8s.cni.cncf.io/web-allow-external created",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-all-namespaces namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2",
"oc apply -f web-allow-all-namespaces.yaml",
"multinetworkpolicy.k8s.cni.cncf.io/web-allow-all-namespaces created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-prod namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2",
"oc apply -f web-allow-prod.yaml",
"multinetworkpolicy.k8s.cni.cncf.io/web-allow-prod created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc create namespace prod",
"oc label namespace/prod purpose=production",
"oc create namespace dev",
"oc label namespace/dev purpose=testing",
"oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"wget: download timed out",
"oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]",
"oc create -f <name>.yaml",
"oc get pod <name> -o yaml",
"oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:",
"oc edit pod <name>",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1",
"apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"net1\" }, { \"name\": \"net2\", 1 \"default-route\": [\"192.0.2.1\"] 2 }]' spec: containers: - name: example-pod command: [\"/bin/bash\", \"-c\", \"sleep 2000000000000\"] image: centos/tools",
"oc exec -it <pod_name> -- ip route",
"oc edit networks.operator.openshift.io cluster",
"name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 }' type: Raw",
"{ \"cniVersion\": \"0.3.1\", \"name\": \"<name>\", 1 \"plugins\": [{ 2 \"type\": \"macvlan\", \"capabilities\": { \"ips\": true }, 3 \"master\": \"eth0\", 4 \"mode\": \"bridge\", \"ipam\": { \"type\": \"static\" } }, { \"capabilities\": { \"mac\": true }, 5 \"type\": \"tuning\" }] }",
"oc edit pod <name>",
"apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"<name>\", 1 \"ips\": [ \"192.0.2.205/24\" ], 2 \"mac\": \"CA:FE:C0:FF:EE:00\" 3 } ]'",
"oc exec -it <pod_name> -- ip a",
"oc delete pod <name> -n <namespace>",
"oc edit networks.operator.openshift.io cluster",
"oc get network-attachment-definitions <network-name> -o yaml",
"oc get network-attachment-definitions net1 -o go-template='{{printf \"%s\\n\" .spec.config}}' { \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens5\", \"mode\": \"bridge\", \"ipam\": {\"type\":\"static\",\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.128.2.1\"}],\"addresses\":[{\"address\":\"10.128.2.100/23\",\"gateway\":\"10.128.2.1\"}],\"dns\":{\"nameservers\":[\"172.30.0.10\"],\"domain\":\"us-west-2.compute.internal\",\"search\":[\"us-west-2.compute.internal\"]}} }",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1",
"oc get network-attachment-definition --all-namespaces",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-vrf\", \"plugins\": [ 1 { \"type\": \"macvlan\", \"master\": \"eth1\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.23/24\" } ] } }, { \"type\": \"vrf\", 2 \"vrfname\": \"vrf-1\", 3 \"table\": 1001 4 }] }'",
"oc create -f additional-network-attachment.yaml",
"oc get network-attachment-definitions -n <namespace>",
"NAME AGE additional-network-1 14m",
"apiVersion: v1 kind: Pod metadata: name: pod-additional-net annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"test-network-1\" 1 } ]' spec: containers: - name: example-pod-1 command: [\"/bin/bash\", \"-c\", \"sleep 9000000\"] image: centos:8",
"oc create -f pod-additional-net.yaml",
"pod/test-pod created",
"ip vrf show",
"Name Table ----------------------- vrf-1 1001",
"ip link",
"5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode",
"oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable=\"true\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState metadata: name: node-25 1 namespace: openshift-sriov-network-operator ownerReferences: - apiVersion: sriovnetwork.openshift.io/v1 blockOwnerDeletion: true controller: true kind: SriovNetworkNodePolicy name: default spec: dpConfigVersion: \"39824\" status: interfaces: 2 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f0 pciAddress: \"0000:18:00.0\" totalvfs: 8 vendor: 15b3 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f1 pciAddress: \"0000:18:00.1\" totalvfs: 8 vendor: 15b3 - deviceID: 158b driver: i40e mtu: 1500 name: ens817f0 pciAddress: 0000:81:00.0 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens817f1 pciAddress: 0000:81:00.1 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens803f0 pciAddress: 0000:86:00.0 totalvfs: 64 vendor: \"8086\" syncStatus: Succeeded",
"apiVersion: v1 kind: Pod metadata: name: rdma-app annotations: k8s.v1.cni.cncf.io/networks: sriov-rdma-mlnx spec: containers: - name: testpmd image: <RDMA_image> imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] command: [\"sleep\", \"infinity\"]",
"apiVersion: v1 kind: Pod metadata: name: dpdk-app annotations: k8s.v1.cni.cncf.io/networks: sriov-dpdk-net spec: containers: - name: testpmd image: <DPDK_image> securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" requests: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management EOF",
"cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator EOF",
"OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)",
"cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: \"USD{OC_VERSION}\" name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get csv -n openshift-sriov-network-operator -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase sriov-network-operator.4.13.0-202310121402 Succeeded",
"oc annotate ns/openshift-sriov-network-operator workload.openshift.io/allowed=management",
"oc get pods -n openshift-sriov-network-operator",
"NAME READY STATUS RESTARTS AGE network-resources-injector-5cz5p 1/1 Running 0 10m network-resources-injector-dwqpx 1/1 Running 0 10m network-resources-injector-lktz5 1/1 Running 0 10m",
"oc get pods -n openshift-sriov-network-operator",
"NAME READY STATUS RESTARTS AGE operator-webhook-9jkw6 1/1 Running 0 16m operator-webhook-kbr5p 1/1 Running 0 16m operator-webhook-rpfrl 1/1 Running 0 16m",
"oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"enableInjector\": <value> } }'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableInjector: <value>",
"oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"enableOperatorWebhook\": <value> } }'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableOperatorWebhook: <value>",
"oc patch sriovoperatorconfig default --type=json -n openshift-sriov-network-operator --patch '[{ \"op\": \"replace\", \"path\": \"/spec/configDaemonNodeSelector\", \"value\": {<node_label>} }]'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: <node_label>",
"oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"disableDrain\": true } }'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: disableDrain: true",
"apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subsription namespace: openshift-sriov-network-operator spec: channel: \"4.13\" name: sriov-network-operator config: nodeSelector: node-role.kubernetes.io/worker: \"\" source: s/qe-app-registry/redhat-operators sourceNamespace: openshift-marketplace",
"oc get csv -n openshift-sriov-network-operator",
"NAME DISPLAY VERSION REPLACES PHASE sriov-network-operator.4.13.0-202211021237 SR-IOV Network Operator 4.13.0-202211021237 sriov-network-operator.4.13.0-202210290517 Succeeded",
"oc get pods -n openshift-sriov-network-operator",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 needVhostNet: false 7 numVfs: <num> 8 nicSelector: 9 vendor: \"<vendor_code>\" 10 deviceID: \"<device_id>\" 11 pfNames: [\"<pf_name>\", ...] 12 rootDevices: [\"<pci_bus_id>\", ...] 13 netFilter: \"<filter_string>\" 14 deviceType: <device_type> 15 isRdma: false 16 linkType: <link_type> 17 eSwitchMode: <mode> 18 excludeTopology: false 19",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-ib-net-1 namespace: openshift-sriov-network-operator spec: resourceName: ibnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 4 nicSelector: vendor: \"15b3\" deviceID: \"101b\" rootDevices: - \"0000:19:00.0\" linkType: ib isRdma: true",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-sriov-net-openstack-1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 1 1 nicSelector: vendor: \"15b3\" deviceID: \"101b\" netFilter: \"openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509\" 2",
"pfNames: [\"netpf0#2-7\"]",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1 namespace: openshift-sriov-network-operator spec: resourceName: net1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#0-0\"] deviceType: netdevice",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1-dpdk namespace: openshift-sriov-network-operator spec: resourceName: net1dpdk nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#8-15\"] deviceType: vfio-pci",
"ip link show <interface> 1",
"5: ens3f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 3c:fd:fe:d1:bc:01 brd ff:ff:ff:ff:ff:ff vf 0 link/ether 5a:e7:88:25:ea:a0 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 1 link/ether 3e:1d:36:d7:3d:49 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 2 link/ether ce:09:56:97:df:f9 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 3 link/ether 5e:91:cf:88:d1:38 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off vf 4 link/ether e6:06:a1:96:2f:de brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off",
"oc create -f <name>-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name>",
"\"lastSyncError\": \"write /sys/bus/pci/devices/0000:3b:00.1/sriov_numvfs: cannot allocate memory\"",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: example-network namespace: additional-sriov-network-1 spec: ipam: | { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } vlan: 0 resourceName: intelnics metaPlugins : | { \"type\": \"vrf\", 1 \"vrfname\": \"example-vrf-name\" 2 }",
"oc create -f sriov-network-attachment.yaml",
"oc get network-attachment-definitions -n <namespace> 1",
"NAME AGE additional-sriov-network-1 14m",
"ip vrf show",
"Name Table ----------------------- red 10",
"ip link",
"5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <policy_name> namespace: openshift-sriov-network-operator spec: resourceName: sriovnuma0 1 nodeSelector: kubernetes.io/hostname: <node_name> numVfs: <number_of_Vfs> nicSelector: 2 vendor: \"<vendor_ID>\" deviceID: \"<device_ID>\" deviceType: netdevice excludeTopology: true 3",
"oc create -f sriov-network-node-policy.yaml",
"sriovnetworknodepolicy.sriovnetwork.openshift.io/policy-for-numa-0 created",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-numa-0-network 1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnuma0 2 networkNamespace: <namespace> 3 ipam: |- 4 { \"type\": \"<ipam_type>\", }",
"oc create -f sriov-network.yaml",
"sriovnetwork.sriovnetwork.openshift.io/sriov-numa-0-network created",
"apiVersion: v1 kind: Pod metadata: name: <pod_name> annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"sriov-numa-0-network\", 1 } ] spec: containers: - name: <container_name> image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]",
"oc create -f sriov-network-pod.yaml",
"pod/example-pod created",
"oc get pod <pod_name>",
"NAME READY STATUS RESTARTS AGE test-deployment-sriov-76cbbf4756-k9v72 1/1 Running 0 45h",
"oc debug pod/<pod_name>",
"chroot /host",
"lscpu | grep NUMA",
"NUMA node(s): 2 NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18, NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,",
"cat /proc/self/status | grep Cpus",
"Cpus_allowed: aa Cpus_allowed_list: 1,3,5,7",
"cat /sys/class/net/net1/device/numa_node",
"0",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 ipam: |- 7 {} linkState: <link_state> 8 maxTxRate: <max_tx_rate> 9 minTxRate: <min_tx_rate> 10 vlanQoS: <vlan_qos> 11 trust: \"<trust_vf>\" 12 capabilities: <capabilities> 13",
"{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #",
"{ \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }",
"oc create -f <name>.yaml",
"oc get net-attach-def -n <namespace>",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 ipam: |- 5 {} linkState: <link_state> 6 capabilities: <capabilities> 7",
"{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #",
"{ \"ipam\": { \"type\": \"dhcp\" } }",
"{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }",
"oc create -f <name>.yaml",
"oc get net-attach-def -n <namespace>",
"[ { \"name\": \"<name>\", 1 \"mac\": \"<mac_address>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]",
"apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"net1\", \"mac\": \"20:04:0f:f1:88:01\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]",
"[ { \"name\": \"<network_attachment>\", 1 \"infiniband-guid\": \"<guid>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]",
"apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"ib1\", \"infiniband-guid\": \"c2:11:22:33:44:55:66:77\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1",
"metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]",
"oc create -f <name>.yaml",
"oc get pod <name> -o yaml",
"oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:",
"apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: <name> 1 spec: containers: - name: sample-container image: <image> 2 command: [\"sleep\", \"infinity\"] resources: limits: memory: \"1Gi\" 3 cpu: \"2\" 4 requests: memory: \"1Gi\" cpu: \"2\"",
"oc create -f <filename> 1",
"oc describe pod sample-pod",
"oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus",
"oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus",
"apiVersion: v1 kind: Pod metadata: name: testpmd-sriov namespace: mynamespace annotations: cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" spec: containers: - name: testpmd command: [\"sleep\", \"99999\"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: [\"IPC_LOCK\",\"SYS_ADMIN\"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' openshift.io/sriov1: 1 limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi openshift.io/sriov1: 1 volumeMounts: - mountPath: /dev/hugepages name: hugepage readOnly: False runtimeClassName: performance-cnf-performanceprofile 1 volumes: - name: hugepage emptyDir: medium: HugePages",
"oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable=\"true\"",
"oc create namespace sysctl-tuning-test",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policyoneflag 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyoneflag 3 nodeSelector: 4 feature.node.kubernetes.io/network-sriov.capable=\"true\" priority: 10 5 numVfs: 5 6 nicSelector: 7 pfNames: [\"ens5\"] 8 deviceType: \"netdevice\" 9 isRdma: false 10",
"oc create -f policyoneflag-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"Succeeded",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: onevalidflag 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyoneflag 3 networkNamespace: sysctl-tuning-test 4 ipam: '{ \"type\": \"static\" }' 5 capabilities: '{ \"mac\": true, \"ips\": true }' 6 metaPlugins : | 7 { \"type\": \"tuning\", \"capabilities\":{ \"mac\":true }, \"sysctl\":{ \"net.ipv4.conf.IFNAME.accept_redirects\": \"1\" } }",
"oc create -f sriov-network-interface-sysctl.yaml",
"oc get network-attachment-definitions -n <namespace> 1",
"NAME AGE onevalidflag 14m",
"apiVersion: v1 kind: Pod metadata: name: tunepod namespace: sysctl-tuning-test annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"onevalidflag\", 1 \"mac\": \"0a:56:0a:83:04:0c\", 2 \"ips\": [\"10.100.100.200/24\"] 3 } ] spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault",
"oc apply -f examplepod.yaml",
"oc get pod -n sysctl-tuning-test",
"NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s",
"oc rsh -n sysctl-tuning-test tunepod",
"sysctl net.ipv4.conf.net1.accept_redirects",
"net.ipv4.conf.net1.accept_redirects = 1",
"oc create namespace sysctl-tuning-test",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policyallflags 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyallflags 3 nodeSelector: 4 node.alpha.kubernetes-incubator.io/nfd-network-sriov.capable = `true` priority: 10 5 numVfs: 5 6 nicSelector: 7 pfNames: [\"ens1f0\"] 8 deviceType: \"netdevice\" 9 isRdma: false 10",
"oc create -f policyallflags-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"Succeeded",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: allvalidflags 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: policyallflags 3 networkNamespace: sysctl-tuning-test 4 capabilities: '{ \"mac\": true, \"ips\": true }' 5",
"oc create -f sriov-network-attachment.yaml",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bond-sysctl-network namespace: sysctl-tuning-test spec: config: '{ \"cniVersion\":\"0.4.0\", \"name\":\"bound-net\", \"plugins\":[ { \"type\":\"bond\", 1 \"mode\": \"active-backup\", 2 \"failOverMac\": 1, 3 \"linksInContainer\": true, 4 \"miimon\": \"100\", \"links\": [ 5 {\"name\": \"net1\"}, {\"name\": \"net2\"} ], \"ipam\":{ 6 \"type\":\"static\" } }, { \"type\":\"tuning\", 7 \"capabilities\":{ \"mac\":true }, \"sysctl\":{ \"net.ipv4.conf.IFNAME.accept_redirects\": \"0\", \"net.ipv4.conf.IFNAME.accept_source_route\": \"0\", \"net.ipv4.conf.IFNAME.disable_policy\": \"1\", \"net.ipv4.conf.IFNAME.secure_redirects\": \"0\", \"net.ipv4.conf.IFNAME.send_redirects\": \"0\", \"net.ipv6.conf.IFNAME.accept_redirects\": \"0\", \"net.ipv6.conf.IFNAME.accept_source_route\": \"1\", \"net.ipv6.neigh.IFNAME.base_reachable_time_ms\": \"20000\", \"net.ipv6.neigh.IFNAME.retrans_time_ms\": \"2000\" } } ] }'",
"oc create -f sriov-bond-network-interface.yaml",
"oc get network-attachment-definitions -n <namespace> 1",
"NAME AGE bond-sysctl-network 22m allvalidflags 47m",
"apiVersion: v1 kind: Pod metadata: name: tunepod namespace: sysctl-tuning-test annotations: k8s.v1.cni.cncf.io/networks: |- [ {\"name\": \"allvalidflags\"}, 1 {\"name\": \"allvalidflags\"}, { \"name\": \"bond-sysctl-network\", \"interface\": \"bond0\", \"mac\": \"0a:56:0a:83:04:0c\", 2 \"ips\": [\"10.100.100.200/24\"] 3 } ] spec: containers: - name: podexample image: centos command: [\"/bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault",
"oc apply -f examplepod.yaml",
"oc get pod -n sysctl-tuning-test",
"NAME READY STATUS RESTARTS AGE tunepod 1/1 Running 0 47s",
"oc rsh -n sysctl-tuning-test tunepod",
"sysctl net.ipv6.neigh.bond0.base_reachable_time_ms",
"net.ipv6.neigh.bond0.base_reachable_time_ms = 20000",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-example namespace: openshift-sriov-network-operator spec: resourceName: example nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 4 nicSelector: vendor: \"8086\" pfNames: ['ens803f0'] rootDevices: ['0000:86:00.0']",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: net-example namespace: openshift-sriov-network-operator spec: networkNamespace: default ipam: | 1 { \"type\": \"host-local\", 2 \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [ {\"dst\": \"224.0.0.0/5\"}, {\"dst\": \"232.0.0.0/5\"} ], \"gateway\": \"10.56.217.1\" } resourceName: example",
"apiVersion: v1 kind: Pod metadata: name: testpmd namespace: default annotations: k8s.v1.cni.cncf.io/networks: nic1 spec: containers: - name: example image: rhel7:latest securityContext: capabilities: add: [\"NET_ADMIN\"] 1 command: [ \"sleep\", \"infinity\"]",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: intel-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: intelnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"8086\" deviceID: \"158b\" pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: vfio-pci 1",
"oc create -f intel-dpdk-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: intel-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- ... 1 vlan: <vlan> resourceName: intelnics",
"oc create -f intel-dpdk-network.yaml",
"apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: intel-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: openshift.io/intelnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/intelnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"oc create -f intel-dpdk-pod.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3",
"oc create -f mlx-dpdk-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics",
"oc create -f mlx-dpdk-network.yaml",
"apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: openshift.io/mlxnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/mlxnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"oc create -f mlx-dpdk-pod.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: globallyDisableIrqLoadBalancing: true cpu: isolated: 21-51,73-103 1 reserved: 0-20,52-72 2 hugepages: defaultHugepagesSize: 1G 3 pages: - count: 32 size: 1G net: userLevelNetworking: true numa: topologyPolicy: \"single-numa-node\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"oc create -f mlx-dpdk-perfprofile-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci 1 needVhostNet: true 2 nicSelector: pfNames: [\"ens3f0\"] nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 10 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci needVhostNet: true nicSelector: pfNames: [\"ens3f1\"] nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 10 priority: 99 resourceName: dpdk_nic_2",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-1 namespace: openshift-sriov-network-operator spec: deviceType: netdevice 1 isRdma: true 2 nicSelector: rootDevices: - \"0000:5e:00.1\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 5 priority: 99 resourceName: dpdk_nic_1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: dpdk-nic-2 namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: true nicSelector: rootDevices: - \"0000:5e:00.0\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numVfs: 5 priority: 99 resourceName: dpdk_nic_2",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-1 namespace: openshift-sriov-network-operator spec: ipam: '{\"type\": \"host-local\",\"ranges\": [[{\"subnet\": \"10.0.1.0/24\"}]],\"dataDir\": \"/run/my-orchestrator/container-ipam-state-1\"}' 1 networkNamespace: dpdk-test 2 spoofChk: \"off\" trust: \"on\" resourceName: dpdk_nic_1 3 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-network-2 namespace: openshift-sriov-network-operator spec: ipam: '{\"type\": \"host-local\",\"ranges\": [[{\"subnet\": \"10.0.2.0/24\"}]],\"dataDir\": \"/run/my-orchestrator/container-ipam-state-1\"}' networkNamespace: dpdk-test spoofChk: \"off\" trust: \"on\" resourceName: dpdk_nic_2",
"apiVersion: v1 kind: Namespace metadata: name: dpdk-test --- apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ 1 { \"name\": \"dpdk-network-1\", \"namespace\": \"dpdk-test\" }, { \"name\": \"dpdk-network-2\", \"namespace\": \"dpdk-test\" } ]' irq-load-balancing.crio.io: \"disable\" 2 cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" labels: app: dpdk name: testpmd namespace: dpdk-test spec: runtimeClassName: performance-performance 3 containers: - command: - /bin/bash - -c - sleep INF image: registry.redhat.io/openshift4/dpdk-base-rhel8 imagePullPolicy: Always name: dpdk resources: 4 limits: cpu: \"16\" hugepages-1Gi: 8Gi memory: 2Gi requests: cpu: \"16\" hugepages-1Gi: 8Gi memory: 2Gi securityContext: capabilities: add: - IPC_LOCK - SYS_RESOURCE - NET_RAW - NET_ADMIN runAsUser: 0 volumeMounts: - mountPath: /mnt/huge name: hugepages terminationGracePeriodSeconds: 5 volumes: - emptyDir: medium: HugePages name: hugepages",
"#!/bin/bash set -ex export CPU=USD(cat /sys/fs/cgroup/cpuset/cpuset.cpus) echo USD{CPU} dpdk-testpmd -l USD{CPU} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_1} -a USD{PCIDEVICE_OPENSHIFT_IO_DPDK_NIC_2} -n 4 -- -i --nb-cores=15 --rxd=4096 --txd=4096 --rxq=7 --txq=7 --forward-mode=mac --eth-peer=0,50:00:00:00:00:01 --eth-peer=1,50:00:00:00:00:02",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-rdma-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3",
"oc create -f mlx-rdma-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-rdma-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics",
"oc create -f mlx-rdma-network.yaml",
"apiVersion: v1 kind: Pod metadata: name: rdma-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-rdma-network spec: containers: - name: testpmd image: <RDMA_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /mnt/huge 4 name: hugepage resources: limits: memory: \"1Gi\" cpu: \"4\" 5 hugepages-1Gi: \"4Gi\" 6 requests: memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages",
"oc create -f mlx-rdma-pod.yaml",
"apiVersion: v1 kind: Pod metadata: name: testpmd-dpdk namespace: mynamespace annotations: cpu-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" spec: containers: - name: testpmd command: [\"sleep\", \"99999\"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: [\"IPC_LOCK\",\"SYS_ADMIN\"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' openshift.io/dpdk1: 1 1 limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi openshift.io/dpdk1: 1 volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False runtimeClassName: performance-cnf-performanceprofile 2 volumes: - name: hugepage emptyDir: medium: HugePages",
"apiVersion: v1 kind: Pod metadata: name: testpmd-sriov namespace: mynamespace annotations: k8s.v1.cni.cncf.io/networks: hwoffload1 spec: runtimeClassName: performance-cnf-performanceprofile 1 containers: - name: testpmd command: [\"sleep\", \"99999\"] image: registry.redhat.io/openshift4/dpdk-base-rhel8:v4.9 securityContext: capabilities: add: [\"IPC_LOCK\",\"SYS_ADMIN\"] privileged: true runAsUser: 0 resources: requests: memory: 1000Mi hugepages-1Gi: 1Gi cpu: '2' limits: hugepages-1Gi: 1Gi cpu: '2' memory: 1000Mi volumeMounts: - mountPath: /mnt/huge name: hugepage readOnly: False volumes: - name: hugepage emptyDir: medium: HugePages",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bond-net1 namespace: demo spec: config: '{ \"type\": \"bond\", 1 \"cniVersion\": \"0.3.1\", \"name\": \"bond-net1\", \"mode\": \"active-backup\", 2 \"failOverMac\": 1, 3 \"linksInContainer\": true, 4 \"miimon\": \"100\", \"mtu\": 1500, \"links\": [ 5 {\"name\": \"net1\"}, {\"name\": \"net2\"} ], \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } }'",
"apiVersion: v1 kind: Pod metadata: name: bondpod1 namespace: demo annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1 1 spec: containers: - name: podexample image: quay.io/openshift/origin-network-interface-bond-cni:4.11.0 command: [\"/bin/bash\", \"-c\", \"sleep INF\"]",
"oc apply -f podbonding.yaml",
"oc rsh -n demo bondpod1 sh-4.4# sh-4.4# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 3: eth0@if150: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP link/ether 62:b1:b5:c8:fb:7a brd ff:ff:ff:ff:ff:ff inet 10.244.1.122/24 brd 10.244.1.255 scope global eth0 valid_lft forever preferred_lft forever 4: net3: <BROADCAST,MULTICAST,UP,LOWER_UP400> mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 1 inet 10.56.217.66/24 scope global bond0 valid_lft forever preferred_lft forever 43: net1: <BROADCAST,MULTICAST,UP,LOWER_UP800> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 2 44: net2: <BROADCAST,MULTICAST,UP,LOWER_UP800> mtu 1500 qdisc mq master bond0 state UP qlen 1000 link/ether 9e:23:69:42:fb:8a brd ff:ff:ff:ff:ff:ff 3",
"annotations: k8s.v1.cni.cncf.io/networks: demo/sriovnet1, demo/sriovnet2, demo/bond-net1@bond0",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default 1 namespace: openshift-sriov-network-operator spec: enableInjector: true enableOperatorWebhook: true configurationMode: \"systemd\" 2 logLevel: 2",
"oc apply -f sriovOperatorConfig.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: mcp-offloading 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,mcp-offloading]} 2 nodeSelector: matchLabels: node-role.kubernetes.io/mcp-offloading: \"\" 3",
"oc create -f mcp-offloading.yaml",
"oc label node worker-2 node-role.kubernetes.io/mcp-offloading=\"\"",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 2d v1.26.0 master-1 Ready master 2d v1.26.0 master-2 Ready master 2d v1.26.0 worker-0 Ready worker 2d v1.26.0 worker-1 Ready worker 2d v1.26.0 worker-2 Ready mcp-offloading,worker 47h v1.26.0 worker-3 Ready mcp-offloading,worker 47h v1.26.0",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkPoolConfig metadata: name: sriovnetworkpoolconfig-offload namespace: openshift-sriov-network-operator spec: ovsHardwareOffloadConfig: name: mcp-offloading 1",
"oc create -f <SriovNetworkPoolConfig_name>.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-node-policy <.> namespace: openshift-sriov-network-operator spec: deviceType: netdevice <.> eSwitchMode: \"switchdev\" <.> nicSelector: deviceID: \"1019\" rootDevices: - 0000:d8:00.0 vendor: \"15b3\" pfNames: - ens8f0 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 6 priority: 5 resourceName: mlxnics",
"oc create -f sriov-node-policy.yaml",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: USD{name} namespace: openshift-sriov-network-operator spec: deviceType: switchdev isRdma: true nicSelector: netFilter: openstack/NetworkID:USD{net_id} nodeSelector: feature.node.kubernetes.io/network-sriov.capable: 'true' numVfs: 1 priority: 99 resourceName: USD{name}",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: net-attach-def <.> namespace: net-attach-def <.> annotations: k8s.v1.cni.cncf.io/resourceName: openshift.io/mlxnics <.> spec: config: '{\"cniVersion\":\"0.3.1\",\"name\":\"ovn-kubernetes\",\"type\":\"ovn-k8s-cni-overlay\",\"ipam\":{},\"dns\":{}}'",
"oc create -f net-attach-def.yaml",
"oc get net-attach-def -A",
"NAMESPACE NAME AGE net-attach-def net-attach-def 43h",
". metadata: annotations: v1.multus-cni.io/default-network: net-attach-def/net-attach-def <.>",
"oc label node <example_node_name_one> node-role.kubernetes.io/sriov=",
"oc label node <example_node_name_two> node-role.kubernetes.io/sriov=",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: sriov spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,sriov]} nodeSelector: matchLabels: node-role.kubernetes.io/sriov: \"\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: sriov name: 99-bf2-dpu spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,ZmluZF9jb250YWluZXIoKSB7CiAgY3JpY3RsIHBzIC1vIGpzb24gfCBqcSAtciAnLmNvbnRhaW5lcnNbXSB8IHNlbGVjdCgubWV0YWRhdGEubmFtZT09InNyaW92LW5ldHdvcmstY29uZmlnLWRhZW1vbiIpIHwgLmlkJwp9CnVudGlsIG91dHB1dD0kKGZpbmRfY29udGFpbmVyKTsgW1sgLW4gIiRvdXRwdXQiIF1dOyBkbwogIGVjaG8gIndhaXRpbmcgZm9yIGNvbnRhaW5lciB0byBjb21lIHVwIgogIHNsZWVwIDE7CmRvbmUKISBzdWRvIGNyaWN0bCBleGVjICRvdXRwdXQgL2JpbmRhdGEvc2NyaXB0cy9iZjItc3dpdGNoLW1vZGUuc2ggIiRAIgo= mode: 0755 overwrite: true path: /etc/default/switch_in_sriov_config_daemon.sh systemd: units: - name: dpu-switch.service enabled: true contents: | [Unit] Description=Switch BlueField2 card to NIC/DPU mode RequiresMountsFor=%t/containers Wants=network.target After=network-online.target kubelet.service [Service] SuccessExitStatus=0 120 RemainAfterExit=True ExecStart=/bin/bash -c '/etc/default/switch_in_sriov_config_daemon.sh nic || shutdown -r now' 1 Type=oneshot [Install] WantedBy=multi-user.target",
"oc delete sriovnetwork -n openshift-sriov-network-operator --all",
"oc delete sriovnetworknodepolicy -n openshift-sriov-network-operator --all",
"oc delete sriovibnetwork -n openshift-sriov-network-operator --all",
"oc delete crd sriovibnetworks.sriovnetwork.openshift.io",
"oc delete crd sriovnetworknodepolicies.sriovnetwork.openshift.io",
"oc delete crd sriovnetworknodestates.sriovnetwork.openshift.io",
"oc delete crd sriovnetworkpoolconfigs.sriovnetwork.openshift.io",
"oc delete crd sriovnetworks.sriovnetwork.openshift.io",
"oc delete crd sriovoperatorconfigs.sriovnetwork.openshift.io",
"oc delete mutatingwebhookconfigurations network-resources-injector-config",
"oc delete MutatingWebhookConfiguration sriov-operator-webhook-config",
"oc delete ValidatingWebhookConfiguration sriov-operator-webhook-config",
"oc delete namespace openshift-sriov-network-operator",
"I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1 I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4 F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4",
"I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1 F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface",
"oc get all,ep,cm -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE pod/ovnkube-master-9g7zt 6/6 Running 1 (48m ago) 57m pod/ovnkube-master-lqs4v 6/6 Running 0 57m pod/ovnkube-master-vxhtq 6/6 Running 0 57m pod/ovnkube-node-9k9kc 5/5 Running 0 57m pod/ovnkube-node-jg52r 5/5 Running 0 51m pod/ovnkube-node-k8wf7 5/5 Running 0 57m pod/ovnkube-node-tlwk6 5/5 Running 0 47m pod/ovnkube-node-xsvnk 5/5 Running 0 57m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/ovn-kubernetes-master ClusterIP None <none> 9102/TCP 57m service/ovn-kubernetes-node ClusterIP None <none> 9103/TCP,9105/TCP 57m service/ovnkube-db ClusterIP None <none> 9641/TCP,9642/TCP 57m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/ovnkube-master 3 3 3 3 3 beta.kubernetes.io/os=linux,node-role.kubernetes.io/master= 57m daemonset.apps/ovnkube-node 5 5 5 5 5 beta.kubernetes.io/os=linux 57m NAME ENDPOINTS AGE endpoints/ovn-kubernetes-master 10.0.132.11:9102,10.0.151.18:9102,10.0.192.45:9102 57m endpoints/ovn-kubernetes-node 10.0.132.11:9105,10.0.143.72:9105,10.0.151.18:9105 + 7 more... 57m endpoints/ovnkube-db 10.0.132.11:9642,10.0.151.18:9642,10.0.192.45:9642 + 3 more... 57m NAME DATA AGE configmap/control-plane-status 1 55m configmap/kube-root-ca.crt 1 57m configmap/openshift-service-ca.crt 1 57m configmap/ovn-ca 1 57m configmap/ovnkube-config 1 57m configmap/signer-ca 1 57m",
"oc get pods ovnkube-master-9g7zt -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes",
"northd nbdb kube-rbac-proxy sbdb ovnkube-master ovn-dbchecker",
"oc get pods ovnkube-node-jg52r -o jsonpath='{.spec.containers[*].name}' -n openshift-ovn-kubernetes",
"ovn-controller ovn-acl-logging kube-rbac-proxy kube-rbac-proxy-ovn-metrics ovnkube-node",
"oc get lease -n openshift-ovn-kubernetes",
"NAME HOLDER AGE ovn-kubernetes-master ci-ln-gz990pb-72292-rthz2-master-2 50m",
"oc get po -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE ovnkube-master-7j97q 6/6 Running 2 (148m ago) 149m ovnkube-master-gt4ms 6/6 Running 1 (140m ago) 147m ovnkube-master-mk6p6 6/6 Running 0 148m ovnkube-node-8qvtr 5/5 Running 0 149m ovnkube-node-fqdc9 5/5 Running 0 149m ovnkube-node-tlfwv 5/5 Running 0 149m ovnkube-node-wlwkn 5/5 Running 0 142m",
"oc exec -n openshift-ovn-kubernetes ovnkube-master-7j97q -- /usr/bin/ovn-appctl -t /var/run/ovn/ovnnb_db.ctl --timeout=3 cluster/status OVN_Northbound",
"Defaulted container \"northd\" out of: northd, nbdb, kube-rbac-proxy, sbdb, ovnkube-master, ovn-dbchecker 1c57 Name: OVN_Northbound Cluster ID: c48a (c48aa5c0-a704-4c77-a066-24fe99d9b338) Server ID: 1c57 (1c57b6fc-2849-49b7-8679-fbf18bafe339) Address: ssl:10.0.147.219:9643 Status: cluster member Role: follower 1 Term: 5 Leader: 2b4f 2 Vote: unknown Election timer: 10000 Log: [2, 3018] Entries not yet committed: 0 Entries not yet applied: 0 Connections: ->0000 ->0000 <-8844 <-2b4f Disconnections: 0 Servers: 1c57 (1c57 at ssl:10.0.147.219:9643) (self) 8844 (8844 at ssl:10.0.163.212:9643) last msg 8928047 ms ago 2b4f (2b4f at ssl:10.0.242.240:9643) last msg 620 ms ago 3",
"oc get po -o wide -n openshift-ovn-kubernetes | grep 10.0.242.240 | grep -v ovnkube-node",
"ovnkube-master-gt4ms 6/6 Running 1 (143m ago) 150m 10.0.242.240 ip-10-0-242-240.ec2.internal <none> <none>",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms -c northd -- ovn-nbctl show",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 -c northd ovn-nbctl --help",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms -c northd -- ovn-nbctl lr-list",
"f971f1f3-5112-402f-9d1e-48f1d091ff04 (GR_ip-10-0-145-205.ec2.internal) 69c992d8-a4cf-429e-81a3-5361209ffe44 (GR_ip-10-0-147-219.ec2.internal) 7d164271-af9e-4283-b84a-48f2a44851cd (GR_ip-10-0-163-212.ec2.internal) 111052e3-c395-408b-97b2-8dd0a20a29a5 (GR_ip-10-0-165-9.ec2.internal) ed50ce33-df5d-48e8-8862-2df6a59169a0 (GR_ip-10-0-209-170.ec2.internal) f44e2a96-8d1e-4a4d-abae-ed8728ac6851 (GR_ip-10-0-242-240.ec2.internal) ef3d0057-e557-4b1a-b3c6-fcc3463790b0 (ovn_cluster_router)",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms -c northd -- ovn-nbctl ls-list",
"82808c5c-b3bc-414a-bb59-8fec4b07eb14 (ext_ip-10-0-145-205.ec2.internal) 3d22444f-0272-4c51-afc6-de9e03db3291 (ext_ip-10-0-147-219.ec2.internal) bf73b9df-59ab-4c58-a456-ce8205b34ac5 (ext_ip-10-0-163-212.ec2.internal) bee1e8d0-ec87-45eb-b98b-63f9ec213e5e (ext_ip-10-0-165-9.ec2.internal) 812f08f2-6476-4abf-9a78-635f8516f95e (ext_ip-10-0-209-170.ec2.internal) f65e710b-32f9-482b-8eab-8d96a44799c1 (ext_ip-10-0-242-240.ec2.internal) 84dad700-afb8-4129-86f9-923a1ddeace9 (ip-10-0-145-205.ec2.internal) 1b7b448b-e36c-4ca3-9f38-4a2cf6814bfd (ip-10-0-147-219.ec2.internal) d92d1f56-2606-4f23-8b6a-4396a78951de (ip-10-0-163-212.ec2.internal) 6864a6b2-de15-4de3-92d8-f95014b6f28f (ip-10-0-165-9.ec2.internal) c26bf618-4d7e-4afd-804f-1a2cbc96ec6d (ip-10-0-209-170.ec2.internal) ab9a4526-44ed-4f82-ae1c-e20da04947d9 (ip-10-0-242-240.ec2.internal) a8588aba-21da-4276-ba0f-9d68e88911f0 (join)",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-master-gt4ms -c northd -- ovn-nbctl lb-list",
"UUID LB PROTO VIP IPs f0fb50f9-4968-4b55-908c-616bae4db0a2 Service_default/ tcp 172.30.0.1:443 10.0.147.219:6443,10.0.163.212:6443,169.254.169.2:6443 0dc42012-4f5b-432e-ae01-2cc4bfe81b00 Service_default/ tcp 172.30.0.1:443 10.0.147.219:6443,169.254.169.2:6443,10.0.242.240:6443 f7fff5d5-5eff-4a40-98b1-3a4ba8f7f69c Service_default/ tcp 172.30.0.1:443 169.254.169.2:6443,10.0.163.212:6443,10.0.242.240:6443 12fe57a0-50a4-4a1b-ac10-5f288badee07 Service_default/ tcp 172.30.0.1:443 10.0.147.219:6443,10.0.163.212:6443,10.0.242.240:6443 3f137fbf-0b78-4875-ba44-fbf89f254cf7 Service_openshif tcp 172.30.23.153:443 10.130.0.14:8443 174199fe-0562-4141-b410-12094db922a7 Service_openshif tcp 172.30.69.51:50051 10.130.0.84:50051 5ee2d4bd-c9e2-4d16-a6df-f54cd17c9ac3 Service_openshif tcp 172.30.143.87:9001 10.0.145.205:9001,10.0.147.219:9001,10.0.163.212:9001,10.0.165.9:9001,10.0.209.170:9001,10.0.242.240:9001 a056ae3d-83f8-45bc-9c80-ef89bce7b162 Service_openshif tcp 172.30.164.74:443 10.0.147.219:6443,10.0.163.212:6443,10.0.242.240:6443 bac51f3d-9a6f-4f5e-ac02-28fd343a332a Service_openshif tcp 172.30.0.10:53 10.131.0.6:5353 tcp 172.30.0.10:9154 10.131.0.6:9154 48105bbc-51d7-4178-b975-417433f9c20a Service_openshif tcp 172.30.26.159:2379 10.0.147.219:2379,169.254.169.2:2379,10.0.242.240:2379 tcp 172.30.26.159:9979 10.0.147.219:9979,169.254.169.2:9979,10.0.242.240:9979 7de2b8fc-342a-415f-ac13-1a493f4e39c0 Service_openshif tcp 172.30.53.219:443 10.128.0.7:8443 tcp 172.30.53.219:9192 10.128.0.7:9192 2cef36bc-d720-4afb-8d95-9350eff1d27a Service_openshif tcp 172.30.81.66:443 10.128.0.23:8443 365cb6fb-e15e-45a4-a55b-21868b3cf513 Service_openshif tcp 172.30.96.51:50051 10.130.0.19:50051 41691cbb-ec55-4cdb-8431-afce679c5e8d Service_openshif tcp 172.30.98.218:9099 169.254.169.2:9099 82df10ba-8143-400b-977a-8f5f416a4541 Service_openshif tcp 172.30.26.159:2379 10.0.147.219:2379,10.0.163.212:2379,169.254.169.2:2379 tcp 172.30.26.159:9979 10.0.147.219:9979,10.0.163.212:9979,169.254.169.2:9979 debe7f3a-39a8-490e-bc0a-ebbfafdffb16 Service_openshif tcp 172.30.23.244:443 10.128.0.48:8443,10.129.0.27:8443,10.130.0.45:8443 8a749239-02d9-4dc2-8737-716528e0da7b Service_openshif tcp 172.30.124.255:8443 10.128.0.14:8443 880c7c78-c790-403d-a3cb-9f06592717a3 Service_openshif tcp 172.30.0.10:53 10.130.0.20:5353 tcp 172.30.0.10:9154 10.130.0.20:9154 d2f39078-6751-4311-a161-815bbaf7f9c7 Service_openshif tcp 172.30.26.159:2379 169.254.169.2:2379,10.0.163.212:2379,10.0.242.240:2379 tcp 172.30.26.159:9979 169.254.169.2:9979,10.0.163.212:9979,10.0.242.240:9979 30948278-602b-455c-934a-28e64c46de12 Service_openshif tcp 172.30.157.35:9443 10.130.0.43:9443 2cc7e376-7c02-4a82-89e8-dfa1e23fb003 Service_openshif tcp 172.30.159.212:17698 10.128.0.48:17698,10.129.0.27:17698,10.130.0.45:17698 e7d22d35-61c2-40c2-bc30-265cff8ed18d Service_openshif tcp 172.30.143.87:9001 10.0.145.205:9001,10.0.147.219:9001,10.0.163.212:9001,10.0.165.9:9001,10.0.209.170:9001,169.254.169.2:9001 75164e75-e0c5-40fb-9636-bfdbf4223a02 Service_openshif tcp 172.30.150.68:1936 10.129.4.8:1936,10.131.0.10:1936 tcp 172.30.150.68:443 10.129.4.8:443,10.131.0.10:443 tcp 172.30.150.68:80 10.129.4.8:80,10.131.0.10:80 7bc4ee74-dccf-47e9-9149-b011f09aff39 Service_openshif tcp 172.30.164.74:443 10.0.147.219:6443,10.0.163.212:6443,169.254.169.2:6443 0db59e74-1cc6-470c-bf44-57c520e0aa8f Service_openshif tcp 10.0.163.212:31460 tcp 10.0.163.212:32361 c300e134-018c-49af-9f84-9deb1d0715f8 Service_openshif tcp 172.30.42.244:50051 10.130.0.47:50051 5e352773-429b-4881-afb3-a13b7ba8b081 Service_openshif tcp 172.30.244.66:443 10.129.0.8:8443,10.130.0.8:8443 54b82d32-1939-4465-a87d-f26321442a7a Service_openshif tcp 172.30.12.9:8443 10.128.0.35:8443",
"oc get po -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE ovnkube-master-7j97q 6/6 Running 2 (134m ago) 135m ovnkube-master-gt4ms 6/6 Running 1 (126m ago) 133m ovnkube-master-mk6p6 6/6 Running 0 134m ovnkube-node-8qvtr 5/5 Running 0 135m ovnkube-node-bqztb 5/5 Running 0 117m ovnkube-node-fqdc9 5/5 Running 0 135m ovnkube-node-tlfwv 5/5 Running 0 135m ovnkube-node-wlwkn 5/5 Running 0 128m",
"oc exec -n openshift-ovn-kubernetes ovnkube-master-7j97q -- /usr/bin/ovn-appctl -t /var/run/ovn/ovnsb_db.ctl --timeout=3 cluster/status OVN_Southbound",
"Defaulted container \"northd\" out of: northd, nbdb, kube-rbac-proxy, sbdb, ovnkube-master, ovn-dbchecker 1930 Name: OVN_Southbound Cluster ID: f772 (f77273c0-7986-42dd-bd3c-a9f18e25701f) Server ID: 1930 (1930f4b7-314b-406f-9dcb-b81fe2729ae1) Address: ssl:10.0.147.219:9644 Status: cluster member Role: follower 1 Term: 3 Leader: 7081 2 Vote: unknown Election timer: 16000 Log: [2, 2423] Entries not yet committed: 0 Entries not yet applied: 0 Connections: ->0000 ->7145 <-7081 <-7145 Disconnections: 0 Servers: 7081 (7081 at ssl:10.0.163.212:9644) last msg 59 ms ago 3 1930 (1930 at ssl:10.0.147.219:9644) (self) 7145 (7145 at ssl:10.0.242.240:9644) last msg 7871735 ms ago",
"oc get po -o wide -n openshift-ovn-kubernetes | grep 10.0.163.212 | grep -v ovnkube-node",
"ovnkube-master-mk6p6 6/6 Running 0 136m 10.0.163.212 ip-10-0-163-212.ec2.internal <none> <none>",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 -c northd -- ovn-sbctl show",
"Chassis \"8ca57b28-9834-45f0-99b0-96486c22e1be\" hostname: ip-10-0-156-16.ec2.internal Encap geneve ip: \"10.0.156.16\" options: {csum=\"true\"} Port_Binding k8s-ip-10-0-156-16.ec2.internal Port_Binding etor-GR_ip-10-0-156-16.ec2.internal Port_Binding jtor-GR_ip-10-0-156-16.ec2.internal Port_Binding openshift-ingress-canary_ingress-canary-hsblx Port_Binding rtoj-GR_ip-10-0-156-16.ec2.internal Port_Binding openshift-monitoring_prometheus-adapter-658fc5967-9l46x Port_Binding rtoe-GR_ip-10-0-156-16.ec2.internal Port_Binding openshift-multus_network-metrics-daemon-77nvz Port_Binding openshift-ingress_router-default-64fd8c67c7-df598 Port_Binding openshift-dns_dns-default-ttpcq Port_Binding openshift-monitoring_alertmanager-main-0 Port_Binding openshift-e2e-loki_loki-promtail-g2pbh Port_Binding openshift-network-diagnostics_network-check-target-m6tn4 Port_Binding openshift-monitoring_thanos-querier-75b5cf8dcb-qf8qj Port_Binding cr-rtos-ip-10-0-156-16.ec2.internal Port_Binding openshift-image-registry_image-registry-7b7bc44566-mp9b8",
"oc exec -n openshift-ovn-kubernetes -it ovnkube-master-mk6p6 -c northd -- ovn-sbctl --help",
"git clone [email protected]:openshift/network-tools.git",
"cd network-tools",
"./debug-scripts/network-tools -h",
"./debug-scripts/network-tools ovn-db-run-command ovn-nbctl lr-list",
"Leader pod is ovnkube-master-vslqm 5351ddd1-f181-4e77-afc6-b48b0a9df953 (GR_helix13.lab.eng.tlv2.redhat.com) ccf9349e-1948-4df8-954e-39fb0c2d4d06 (GR_helix14.lab.eng.tlv2.redhat.com) e426b918-75a8-4220-9e76-20b7758f92b7 (GR_hlxcl7-master-0.hlxcl7.lab.eng.tlv2.redhat.com) dded77c8-0cc3-4b99-8420-56cd2ae6a840 (GR_hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com) 4f6747e6-e7ba-4e0c-8dcd-94c8efa51798 (GR_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com) 52232654-336e-4952-98b9-0b8601e370b4 (ovn_cluster_router)",
"./debug-scripts/network-tools ovn-db-run-command ovn-sbctl find Port_Binding type=localnet",
"Leader pod is ovnkube-master-vslqm _uuid : 3de79191-cca8-4c28-be5a-a228f0f9ebfc additional_chassis : [] additional_encap : [] chassis : [] datapath : 3f1a4928-7ff5-471f-9092-fe5f5c67d15c encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : br-ex_helix13.lab.eng.tlv2.redhat.com mac : [unknown] nat_addresses : [] options : {network_name=physnet} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : localnet up : false virtual_parent : [] _uuid : dbe21daf-9594-4849-b8f0-5efbfa09a455 additional_chassis : [] additional_encap : [] chassis : [] datapath : db2a6067-fe7c-4d11-95a7-ff2321329e11 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : br-ex_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com mac : [unknown] nat_addresses : [] options : {network_name=physnet} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 2 type : localnet up : false virtual_parent : [] [...]",
"./debug-scripts/network-tools ovn-db-run-command ovn-sbctl find Port_Binding type=l3gateway",
"Leader pod is ovnkube-master-vslqm _uuid : 9314dc80-39e1-4af7-9cc0-ae8a9708ed59 additional_chassis : [] additional_encap : [] chassis : 336a923d-99e8-4e71-89a6-12564fde5760 datapath : db2a6067-fe7c-4d11-95a7-ff2321329e11 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : etor-GR_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com mac : [\"52:54:00:3e:95:d3\"] nat_addresses : [\"52:54:00:3e:95:d3 10.46.56.77\"] options : {l3gateway-chassis=\"7eb1f1c3-87c2-4f68-8e89-60f5ca810971\", peer=rtoe-GR_hlxcl7-master-2.hlxcl7.lab.eng.tlv2.redhat.com} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : l3gateway up : true virtual_parent : [] _uuid : ad7eb303-b411-4e9f-8d36-d07f1f268e27 additional_chassis : [] additional_encap : [] chassis : f41453b8-29c5-4f39-b86b-e82cf344bce4 datapath : 082e7a60-d9c7-464b-b6ec-117d3426645a encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : etor-GR_helix14.lab.eng.tlv2.redhat.com mac : [\"34:48:ed:f3:e2:2c\"] nat_addresses : [\"34:48:ed:f3:e2:2c 10.46.56.14\"] options : {l3gateway-chassis=\"2e8abe3a-cb94-4593-9037-f5f9596325e2\", peer=rtoe-GR_helix14.lab.eng.tlv2.redhat.com} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 1 type : l3gateway up : true virtual_parent : [] [...]",
"./debug-scripts/network-tools ovn-db-run-command ovn-sbctl find Port_Binding type=patch",
"Leader pod is ovnkube-master-vslqm _uuid : c48b1380-ff26-4965-a644-6bd5b5946c61 additional_chassis : [] additional_encap : [] chassis : [] datapath : 72734d65-fae1-4bd9-a1ee-1bf4e085a060 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : jtor-ovn_cluster_router mac : [router] nat_addresses : [] options : {peer=rtoj-ovn_cluster_router} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 4 type : patch up : false virtual_parent : [] _uuid : 5df51302-f3cd-415b-a059-ac24389938f7 additional_chassis : [] additional_encap : [] chassis : [] datapath : 0551c90f-e891-4909-8e9e-acc7909e06d0 encap : [] external_ids : {} gateway_chassis : [] ha_chassis_group : [] logical_port : rtos-hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com mac : [\"0a:58:0a:82:00:01 10.130.0.1/23\"] nat_addresses : [] options : {chassis-redirect-port=cr-rtos-hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com, peer=stor-hlxcl7-master-1.hlxcl7.lab.eng.tlv2.redhat.com} parent_port : [] port_security : [] requested_additional_chassis: [] requested_chassis : [] tag : [] tunnel_key : 4 type : patch up : false virtual_parent : [] [...]",
"oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-master -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe'",
"oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-master -o json | jq '.items[0].spec.containers[] | .name,.readinessProbe'",
"oc get events -n openshift-ovn-kubernetes",
"oc describe pod ovnkube-master-tp2z8 -n openshift-ovn-kubernetes",
"oc get co/network -o json | jq '.status.conditions[]'",
"for p in USD(oc get pods --selector app=ovnkube-master -n openshift-ovn-kubernetes -o jsonpath='{range.items[*]}{\" \"}{.metadata.name}'); do echo === USDp ===; get pods -n openshift-ovn-kubernetes USDp -o json | jq '.status.containerStatuses[] | .name, .ready'; done",
"ALERT_MANAGER=USD(oc get route alertmanager-main -n openshift-monitoring -o jsonpath='{@.spec.host}')",
"curl -s -k -H \"Authorization: Bearer USD(oc create token prometheus-k8s -n openshift-monitoring)\" https://USDALERT_MANAGER/api/v1/alerts | jq '.data[] | \"\\(.labels.severity) \\(.labels.alertname) \\(.labels.pod) \\(.labels.container) \\(.labels.endpoint) \\(.labels.instance)\"'",
"oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -s 'http://localhost:9090/api/v1/rules' | jq '.data.groups[].rules[] | select(((.name|contains(\"ovn\")) or (.name|contains(\"OVN\")) or (.name|contains(\"Ovn\")) or (.name|contains(\"North\")) or (.name|contains(\"South\"))) and .type==\"alerting\")'",
"oc logs -f <pod_name> -c <container_name> -n <namespace>",
"oc logs ovnkube-master-7h4q7 -n openshift-ovn-kubernetes",
"oc logs -f ovnkube-master-7h4q7 -n openshift-ovn-kubernetes -c ovn-dbchecker",
"for p in USD(oc get pods --selector app=ovnkube-master -n openshift-ovn-kubernetes -o jsonpath='{range.items[*]}{\" \"}{.metadata.name}'); do echo === USDp ===; for container in USD(oc get pods -n openshift-ovn-kubernetes USDp -o json | jq -r '.status.containerStatuses[] | .name');do echo ---USDcontainer---; logs -c USDcontainer USDp -n openshift-ovn-kubernetes --tail=5; done; done",
"oc logs -l app=ovnkube-master -n openshift-ovn-kubernetes --all-containers --tail 5",
"oc get po -o wide -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ovnkube-master-84nc9 6/6 Running 0 50m 10.0.134.156 ip-10-0-134-156.ec2.internal <none> <none> ovnkube-master-gmlqv 6/6 Running 0 50m 10.0.209.180 ip-10-0-209-180.ec2.internal <none> <none> ovnkube-master-nhts2 6/6 Running 1 (48m ago) 50m 10.0.147.31 ip-10-0-147-31.ec2.internal <none> <none> ovnkube-node-2cbh8 5/5 Running 0 43m 10.0.217.114 ip-10-0-217-114.ec2.internal <none> <none> ovnkube-node-6fvzl 5/5 Running 0 50m 10.0.147.31 ip-10-0-147-31.ec2.internal <none> <none> ovnkube-node-f4lzz 5/5 Running 0 24m 10.0.146.76 ip-10-0-146-76.ec2.internal <none> <none> ovnkube-node-jf67d 5/5 Running 0 50m 10.0.209.180 ip-10-0-209-180.ec2.internal <none> <none> ovnkube-node-np9mf 5/5 Running 0 40m 10.0.165.191 ip-10-0-165-191.ec2.internal <none> <none> ovnkube-node-qjldg 5/5 Running 0 50m 10.0.134.156 ip-10-0-134-156.ec2.internal <none> <none>",
"kind: ConfigMap apiVersion: v1 metadata: name: env-overrides namespace: openshift-ovn-kubernetes data: ip-10-0-217-114.ec2.internal: | 1 # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg ip-10-0-209-180.ec2.internal: | # This sets the log level for the ovn-kubernetes node process: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for ovn-controller: OVN_LOG_LEVEL=dbg _master: | 2 # This sets the log level for the ovn-kubernetes master process as well as the ovn-dbchecker: OVN_KUBE_LOG_LEVEL=5 # You might also/instead want to enable debug logging for northd, nbdb and sbdb on all masters: OVN_LOG_LEVEL=dbg",
"oc apply -n openshift-ovn-kubernetes -f env-overrides.yaml",
"configmap/env-overrides.yaml created",
"oc delete pod -n openshift-ovn-kubernetes --field-selector spec.nodeName=ip-10-0-217-114.ec2.internal -l app=ovnkube-node",
"oc delete pod -n openshift-ovn-kubernetes --field-selector spec.nodeName=ip-10-0-209-180.ec2.internal -l app=ovnkube-node",
"oc delete pod -n openshift-ovn-kubernetes -l app=ovnkube-master",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o json | jq '.items[]| .spec.targetEndpoint,.status.successes[0]'",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o json | jq '.items[]| .spec.targetEndpoint,.status.failures[0]'",
"oc get podnetworkconnectivitychecks -n openshift-network-diagnostics -o json | jq '.items[]| .spec.targetEndpoint,.status.outages[0]'",
"oc exec prometheus-k8s-0 -n openshift-monitoring -- promtool query instant http://localhost:9090 '{component=\"openshift-network-diagnostics\"}'",
"oc exec prometheus-k8s-0 -n openshift-monitoring -- promtool query instant http://localhost:9090 '{component=\"openshift-network-diagnostics\"}'",
"POD=USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-master -o name | head -1 | awk -F '/' '{print USDNF}')",
"oc cp -n openshift-ovn-kubernetes USDPOD:/usr/bin/ovnkube-trace ovnkube-trace",
"chmod +x ovnkube-trace",
"./ovnkube-trace -help",
"I0111 15:05:27.973305 204872 ovs.go:90] Maximum command line arguments set to: 191102 Usage of ./ovnkube-trace: -dst string dest: destination pod name -dst-ip string destination IP address (meant for tests to external targets) -dst-namespace string k8s namespace of dest pod (default \"default\") -dst-port string dst-port: destination port (default \"80\") -kubeconfig string absolute path to the kubeconfig file -loglevel string loglevel: klog level (default \"0\") -ovn-config-namespace string namespace used by ovn-config itself -service string service: destination service name -skip-detrace skip ovn-detrace command -src string src: source pod name -src-namespace string k8s namespace of source pod (default \"default\") -tcp use tcp transport protocol -udp use udp transport protocol",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"get pods -n openshift-dns",
"NAME READY STATUS RESTARTS AGE dns-default-467qw 2/2 Running 0 49m dns-default-6prvx 2/2 Running 0 53m dns-default-fkqr8 2/2 Running 0 53m dns-default-qv2rg 2/2 Running 0 49m dns-default-s29vr 2/2 Running 0 49m dns-default-vdsbn 2/2 Running 0 53m node-resolver-6thtt 1/1 Running 0 53m node-resolver-7ksdn 1/1 Running 0 49m node-resolver-8sthh 1/1 Running 0 53m node-resolver-c5ksw 1/1 Running 0 50m node-resolver-gbvdp 1/1 Running 0 53m node-resolver-sxhkd 1/1 Running 0 50m",
"./ovnkube-trace -src-namespace default \\ 1 -src web \\ 2 -dst-namespace openshift-dns \\ 3 -dst dns-default-467qw \\ 4 -udp -dst-port 53 \\ 5 -loglevel 0 6",
"I0116 10:19:35.601303 17900 ovs.go:90] Maximum command line arguments set to: 191102 ovn-trace source pod to destination pod indicates success from web to dns-default-467qw ovn-trace destination pod to source pod indicates success from dns-default-467qw to web ovs-appctl ofproto/trace source pod to destination pod indicates success from web to dns-default-467qw ovs-appctl ofproto/trace destination pod to source pod indicates success from dns-default-467qw to web ovn-detrace source pod to destination pod indicates success from web to dns-default-467qw ovn-detrace destination pod to source pod indicates success from dns-default-467qw to web",
"./ovnkube-trace -src-namespace default -src web -dst-namespace openshift-dns -dst dns-default-467qw -udp -dst-port 53 -loglevel 2",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: default spec: podSelector: {} ingress: []",
"oc apply -f deny-by-default.yaml",
"networkpolicy.networking.k8s.io/deny-by-default created",
"oc run web --namespace=default --image=nginx --labels=\"app=web\" --expose --port=80",
"oc create namespace prod",
"oc label namespace/prod purpose=production",
"oc run test-6459 --namespace=prod --rm -i -t --image=alpine -- sh",
"./ovnkube-trace -src-namespace prod -src test-6459 -dst-namespace default -dst web -tcp -dst-port 80 -loglevel 0",
"I0116 14:20:47.380775 50822 ovs.go:90] Maximum command line arguments set to: 191102 ovn-trace source pod to destination pod indicates failure from test-6459 to web",
"./ovnkube-trace -src-namespace prod -src test-6459 -dst-namespace default -dst web -tcp -dst-port 80 -loglevel 2",
"ct_lb_mark /* default (use --ct to customize) */ ------------------------------------------------ 3. ls_out_acl_hint (northd.c:6092): !ct.new && ct.est && !ct.rpl && ct_mark.blocked == 0, priority 4, uuid 32d45ad4 reg0[8] = 1; reg0[10] = 1; next; 4. ls_out_acl (northd.c:6435): reg0[10] == 1 && (outport == @a16982411286042166782_ingressDefaultDeny), priority 2000, uuid f730a887 1 ct_commit { ct_mark.blocked = 1; };",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-prod namespace: default spec: podSelector: matchLabels: app: web policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production",
"oc apply -f web-allow-prod.yaml",
"./ovnkube-trace -src-namespace prod -src test-6459 -dst-namespace default -dst web -tcp -dst-port 80 -loglevel 0",
"I0116 14:25:44.055207 51695 ovs.go:90] Maximum command line arguments set to: 191102 ovn-trace source pod to destination pod indicates success from test-6459 to web ovn-trace destination pod to source pod indicates success from web to test-6459 ovs-appctl ofproto/trace source pod to destination pod indicates success from test-6459 to web ovs-appctl ofproto/trace destination pod to source pod indicates success from web to test-6459 ovn-detrace source pod to destination pod indicates success from test-6459 to web ovn-detrace destination pod to source pod indicates success from web to test-6459",
"wget -qO- --timeout=2 http://web.default",
"<!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href=\"http://nginx.org/\">nginx.org</a>.<br/> Commercial support is available at <a href=\"http://nginx.com/\">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>",
"oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml",
"#!/bin/bash if [ -n \"USDOVN_SDN_MIGRATION_TIMEOUT\" ] && [ \"USDOVN_SDN_MIGRATION_TIMEOUT\" = \"0s\" ]; then unset OVN_SDN_MIGRATION_TIMEOUT fi #loops the timeout command of the script to repeatedly check the cluster Operators until all are available. co_timeout=USD{OVN_SDN_MIGRATION_TIMEOUT:-1200s} timeout \"USDco_timeout\" bash <<EOT until oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && oc wait co --all --for='condition=DEGRADED=False' --timeout=10s; do sleep 10 echo \"Some ClusterOperators Degraded=False,Progressing=True,or Available=False\"; done EOT",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{\"spec\":{\"migration\":null}}'",
"oc get nncp",
"NAME STATUS REASON bondmaster0 Available SuccessfullyConfigured",
"oc delete nncp <nncp_manifest_filename>",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\" } } }'",
"oc get mcp",
"oc get co",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":<mtu>, \"genevePort\":<port>, \"v4InternalSubnet\":\"<ipv4_subnet>\" }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":1200 }}}}'",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes",
"oc get pod -n openshift-machine-config-operator",
"NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h",
"oc logs <pod> -n openshift-machine-config-operator",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OVNKubernetes\" } }'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"clusterNetwork\": [ { \"cidr\": \"<cidr>\", \"hostPrefix\": <prefix> } ], \"networkType\": \"OVNKubernetes\" } }'",
"oc -n openshift-multus rollout status daemonset/multus",
"Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out",
"#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && sleep 3 done done",
"#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done",
"oc get network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'",
"oc get nodes",
"oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'",
"oc get co",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"openshiftSDNConfig\": null } } }'",
"oc delete namespace openshift-sdn",
"oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": true } }'",
"oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\":{ \"paused\": true } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc get Network.config cluster -o jsonpath='{.status.migration}'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\" } } }'",
"oc get Network.config cluster -o jsonpath='{.status.migration.networkType}'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OpenShiftSDN\" } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":<mtu>, \"vxlanPort\":<port> }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":1200 }}}}'",
"#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && sleep 3 done done",
"#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done",
"oc -n openshift-multus rollout status daemonset/multus",
"Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out",
"oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": false } }'",
"oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\": { \"paused\": false } }'",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml",
"oc get Network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'",
"oc get nodes",
"oc get pod -n openshift-machine-config-operator",
"NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h",
"oc logs <pod> -n openshift-machine-config-operator",
"oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"ovnKubernetesConfig\":null } } }'",
"oc delete namespace openshift-ovn-kubernetes",
"oc get Network.config.openshift.io cluster -o yaml > cluster-kuryr.yaml",
"CLUSTERID=USD(oc get infrastructure.config.openshift.io cluster -o=jsonpath='{.status.infrastructureName}')",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\" } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":<mtu>, \"genevePort\":<port>, \"v4InternalSubnet\":\"<ipv4_subnet>\", \"v6InternalSubnet\":\"<ipv6_subnet>\" }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":1200 }}}}'",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b 1 machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b 2 machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes",
"oc get pod -n openshift-machine-config-operator",
"NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h",
"oc logs <pod> -n openshift-machine-config-operator",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OVNKubernetes\" } }'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"clusterNetwork\": [ { \"cidr\": \"<cidr>\", \"hostPrefix\": \"<prefix>\" } ] \"networkType\": \"OVNKubernetes\" } }'",
"oc -n openshift-multus rollout status daemonset/multus",
"Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out",
"#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done",
"for name in USD(openstack server list --name USD{CLUSTERID}\\* -f value -c Name); do openstack server reboot USDname; done",
"oc get network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'",
"oc get nodes",
"oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'",
"oc get co",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"python3 -m venv /tmp/venv",
"source /tmp/venv/bin/activate",
"(venv) USD pip install pip --upgrade",
"(venv) USD pip install openstacksdk==0.54.0 python-openstackclient==5.5.0 python-octaviaclient==2.3.0",
"(venv) USD CLUSTERID=USD(oc get infrastructure.config.openshift.io cluster -o=jsonpath='{.status.infrastructureName}')",
"(venv) USD CLUSTERTAG=\"openshiftClusterID=USD{CLUSTERID}\"",
"(venv) USD ROUTERID=USD(oc get kuryrnetwork -A --no-headers -o custom-columns=\":status.routerId\"|head -n 1)",
"(venv) USD function REMFIN { local resource=USD1 local finalizer=USD2 for res in USD(oc get USDresource -A --template='{{range USDi,USDp := .items}}{{ USDp.metadata.name }}|{{ USDp.metadata.namespace }}{{\"\\n\"}}{{end}}'); do name=USD{res%%|*} ns=USD{res##*|} yaml=USD(oc get -n USDns USDresource USDname -o yaml) if echo \"USD{yaml}\" | grep -q \"USD{finalizer}\"; then echo \"USD{yaml}\" | grep -v \"USD{finalizer}\" | oc replace -n USDns USDresource USDname -f - fi done }",
"(venv) USD REMFIN services kuryr.openstack.org/service-finalizer",
"(venv) USD if USD(oc get -n openshift-kuryr service service-subnet-gateway-ip &>/dev/null); then oc -n openshift-kuryr delete service service-subnet-gateway-ip fi",
"(venv) USD for lb in USD(openstack loadbalancer list --tags USDCLUSTERTAG -f value -c id); do openstack loadbalancer delete --cascade USDlb done",
"(venv) USD REMFIN kuryrloadbalancers.openstack.org kuryr.openstack.org/kuryrloadbalancer-finalizers",
"(venv) USD oc delete namespace openshift-kuryr",
"(venv) USD openstack router remove subnet USDROUTERID USD{CLUSTERID}-kuryr-service-subnet",
"(venv) USD openstack network delete USD{CLUSTERID}-kuryr-service-network",
"(venv) USD REMFIN pods kuryr.openstack.org/pod-finalizer",
"(venv) USD REMFIN kuryrports.openstack.org kuryr.openstack.org/kuryrport-finalizer",
"(venv) USD REMFIN networkpolicy kuryr.openstack.org/networkpolicy-finalizer",
"(venv) USD REMFIN kuryrnetworkpolicies.openstack.org kuryr.openstack.org/networkpolicy-finalizer",
"(venv) USD read -ra trunks <<< USD(python -c \"import openstack; n = openstack.connect().network; print(' '.join([x.id for x in n.trunks(any_tags='USDCLUSTERTAG')]))\") && i=0 && for trunk in \"USD{trunks[@]}\"; do i=USD((i+1)) echo \"Processing trunk USDtrunk, USD{i}/USD{#trunks[@]}.\" subports=() for subport in USD(python -c \"import openstack; n = openstack.connect().network; print(' '.join([x['port_id'] for x in n.get_trunk('USDtrunk').sub_ports if 'USDCLUSTERTAG' in n.get_port(x['port_id']).tags]))\"); do subports+=(\"USDsubport\"); done args=() for sub in \"USD{subports[@]}\" ; do args+=(\"--subport USDsub\") done if [ USD{#args[@]} -gt 0 ]; then openstack network trunk unset USD{args[*]} USDtrunk fi done",
"(venv) USD mapfile -t kuryrnetworks < <(oc get kuryrnetwork -A --template='{{range USDi,USDp := .items}}{{ USDp.status.netId }}|{{ USDp.status.subnetId }}{{\"\\n\"}}{{end}}') && i=0 && for kn in \"USD{kuryrnetworks[@]}\"; do i=USD((i+1)) netID=USD{kn%%|*} subnetID=USD{kn##*|} echo \"Processing network USDnetID, USD{i}/USD{#kuryrnetworks[@]}\" # Remove all ports from the network. for port in USD(python -c \"import openstack; n = openstack.connect().network; print(' '.join([x.id for x in n.ports(network_id='USDnetID') if x.device_owner != 'network:router_interface']))\"); do ( openstack port delete USDport ) & # Only allow 20 jobs in parallel. if [[ USD(jobs -r -p | wc -l) -ge 20 ]]; then wait -n fi done wait # Remove the subnet from the router. openstack router remove subnet USDROUTERID USDsubnetID # Remove the network. openstack network delete USDnetID done",
"(venv) USD openstack security group delete USD{CLUSTERID}-kuryr-pods-security-group",
"(venv) USD for subnetpool in USD(openstack subnet pool list --tags USDCLUSTERTAG -f value -c ID); do openstack subnet pool delete USDsubnetpool done",
"(venv) USD networks=USD(oc get kuryrnetwork -A --no-headers -o custom-columns=\":status.netId\") && for existingNet in USD(openstack network list --tags USDCLUSTERTAG -f value -c ID); do if [[ USDnetworks =~ USDexistingNet ]]; then echo \"Network still exists: USDexistingNet\" fi done",
"(venv) USD for sgid in USD(openstack security group list -f value -c ID -c Description | grep 'Kuryr-Kubernetes Network Policy' | cut -f 1 -d ' '); do openstack security group delete USDsgid done",
"(venv) USD REMFIN kuryrnetworks.openstack.org kuryrnetwork.finalizers.kuryr.openstack.org",
"(venv) USD if USD(python3 -c \"import sys; import openstack; n = openstack.connect().network; r = n.get_router('USDROUTERID'); sys.exit(0) if r.description != 'Created By OpenShift Installer' else sys.exit(1)\"); then openstack router delete USDROUTERID fi",
"- op: add path: /spec/clusterNetwork/- value: 1 cidr: fd01::/48 hostPrefix: 64 - op: add path: /spec/serviceNetwork/- value: fd02::/112 2",
"oc patch network.config.openshift.io cluster --type='json' --patch-file <file>.yaml",
"network.config.openshift.io/cluster patched",
"oc describe network",
"Status: Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cidr: fd01::/48 Host Prefix: 64 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 172.30.0.0/16 fd02::/112",
"oc edit networks.config.openshift.io",
"kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"info\", \"allow\": \"info\" }",
"<timestamp>|<message_serial>|acl_log(ovn_pinctrl0)|<severity>|name=\"<acl_name>\", verdict=\"<verdict>\", severity=\"<severity>\", direction=\"<direction>\": <flow>",
"<proto>,vlan_tci=0x0000,dl_src=<src_mac>,dl_dst=<source_mac>,nw_src=<source_ip>,nw_dst=<target_ip>,nw_tos=<tos_dscp>,nw_ecn=<tos_ecn>,nw_ttl=<ip_ttl>,nw_frag=<fragment>,tp_src=<tcp_src_port>,tp_dst=<tcp_dst_port>,tcp_flags=<tcp_flags>",
"Defaulting container name to ovn-controller. Use 'oc describe pod/ovnkube-node-hdb8v -n openshift-ovn-kubernetes' to see all of the containers in this pod. 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_ingressDefaultDeny\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:33:12.614Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_ingressDefaultDeny\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:10.037Z|00007|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_allow-from-same-namespace_0\", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:11.037Z|00008|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_allow-from-same-namespace_0\", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0",
"oc edit network.operator.openshift.io/cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0",
"cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\" }' EOF",
"namespace/verify-audit-logging created",
"cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace namespace: verify-audit-logging spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: verify-audit-logging EOF",
"networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created",
"cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - name: client image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF",
"for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF done",
"pod/client created pod/server created",
"POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}')",
"oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP",
"PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms",
"oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP",
"PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms",
"for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done",
"Defaulting container name to ovn-controller. Use 'oc describe pod/ovnkube-node-hdb8v -n openshift-ovn-kubernetes' to see all of the containers in this pod. 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_ingressDefaultDeny\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:33:12.614Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_ingressDefaultDeny\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:10.037Z|00007|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_allow-from-same-namespace_0\", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:11.037Z|00008|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_allow-from-same-namespace_0\", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0",
"oc annotate namespace <namespace> k8s.ovn.org/acl-logging='{ \"deny\": \"alert\", \"allow\": \"notice\" }'",
"kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"alert\", \"allow\": \"notice\" }",
"namespace/verify-audit-logging annotated",
"for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done",
"Defaulting container name to ovn-controller. Use 'oc describe pod/ovnkube-node-hdb8v -n openshift-ovn-kubernetes' to see all of the containers in this pod. 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_ingressDefaultDeny\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:33:12.614Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_ingressDefaultDeny\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:10.037Z|00007|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_allow-from-same-namespace_0\", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:11.037Z|00008|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_allow-from-same-namespace_0\", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0",
"oc annotate --overwrite namespace <namespace> k8s.ovn.org/acl-logging-",
"kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null",
"namespace/verify-audit-logging annotated",
"oc patch networks.operator.openshift.io cluster --type=merge -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"ipsecConfig\":{ }}}}}'",
"oc get pods -l app=ovnkube-master -n openshift-ovn-kubernetes",
"NAME READY STATUS RESTARTS AGE ovnkube-master-fvtnh 6/6 Running 0 122m ovnkube-master-hsgmm 6/6 Running 0 122m ovnkube-master-qcmdc 6/6 Running 0 122m",
"oc -n openshift-ovn-kubernetes rsh ovnkube-master-<pod_number_sequence> \\ 1 ovn-nbctl --no-leader-only get nb_global . ipsec",
"oc patch networks.operator.openshift.io/cluster --type=json -p='[{\"op\":\"remove\", \"path\":\"/spec/defaultNetwork/ovnKubernetesConfig/ipsecConfig\"}]'",
"oc get pods -n openshift-ovn-kubernetes -l=app=ovnkube-master",
"ovnkube-master-5xqbf 8/8 Running 0 28m",
"oc -n openshift-ovn-kubernetes -c nbdb rsh ovnkube-master-<pod_number_sequence> \\ 1 ovn-nbctl --no-leader-only get nb_global . ipsec",
"oc delete daemonset ovn-ipsec -n openshift-ovn-kubernetes 1",
"oc get pods -n openshift-ovn-kubernetes -l=app=ovn-ipsec",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: <name> 1 spec: egress: 2",
"egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 nodeSelector: <label_name>: <label_value> 5 ports: 6",
"ports: - port: <port> 1 protocol: <protocol> 2",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Deny to: cidrSelector: 0.0.0.0/0",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - type: Deny to: cidrSelector: 172.16.1.1/32 ports: - port: 80 protocol: TCP - port: 443",
"apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - to: nodeSelector: matchLabels: region: east type: Allow",
"oc create -f <policy_name>.yaml -n <project>",
"oc create -f default.yaml -n project1",
"egressfirewall.k8s.ovn.org/v1 created",
"oc get egressfirewall --all-namespaces",
"oc describe egressfirewall <policy_name>",
"Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0",
"oc get -n <project> egressfirewall",
"oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml",
"oc replace -f <filename>.yaml",
"oc get -n <project> egressfirewall",
"oc delete -n <project> egressfirewall <name>",
"IP capacity = public cloud default capacity - sum(current IP assignments)",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"eni-078d267045138e436\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ipv4\":14,\"ipv6\":15} } ]",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"nic0\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ip\":14} } ]",
"apiVersion: v1 kind: Namespace metadata: name: namespace1 labels: env: prod --- apiVersion: v1 kind: Namespace metadata: name: namespace2 labels: env: prod",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egressips-prod spec: egressIPs: - 192.168.126.10 - 192.168.126.102 namespaceSelector: matchLabels: env: prod status: items: - node: node1 egressIP: 192.168.126.10 - node: node3 egressIP: 192.168.126.102",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: <name> 1 spec: egressIPs: 2 - <ip_address> namespaceSelector: 3 podSelector: 4",
"namespaceSelector: 1 matchLabels: <label_name>: <label_value>",
"podSelector: 1 matchLabels: <label_name>: <label_value>",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group1 spec: egressIPs: - 192.168.126.11 - 192.168.126.102 podSelector: matchLabels: app: web namespaceSelector: matchLabels: env: prod",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group2 spec: egressIPs: - 192.168.127.30 - 192.168.127.40 namespaceSelector: matchExpressions: - key: environment operator: NotIn values: - development",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: ovnKubernetesConfig: egressIPConfig: 1 reachabilityTotalTimeoutSeconds: 5 2 gatewayConfig: routingViaHost: false genevePort: 6081",
"oc label nodes <node_name> k8s.ovn.org/egress-assignable=\"\" 1",
"apiVersion: v1 kind: Node metadata: labels: k8s.ovn.org/egress-assignable: \"\" name: <node_name>",
"apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-project1 spec: egressIPs: - 192.168.127.10 - 192.168.127.11 namespaceSelector: matchLabels: env: qa",
"oc apply -f <egressips_name>.yaml 1",
"egressips.k8s.ovn.org/<egressips_name> created",
"oc label ns <namespace> env=qa 1",
"oc get egressip -o yaml",
"spec: egressIPs: - 192.168.127.10 - 192.168.127.11",
"curl <router_service_IP> <port>",
"openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>",
"apiVersion: v1 kind: Service metadata: name: app-egress spec: ports: - name: tcp-8080 protocol: TCP port: 8080 - name: tcp-8443 protocol: TCP port: 8443 - name: udp-80 protocol: UDP port: 80 type: ClusterIP selector: app: egress-router-cni",
"apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: <egress_router_name> namespace: <namespace> <.> spec: addresses: [ <.> { ip: \"<egress_router>\", <.> gateway: \"<egress_gateway>\" <.> } ] mode: Redirect redirect: { redirectRules: [ <.> { destinationIP: \"<egress_destination>\", port: <egress_router_port>, targetPort: <target_port>, <.> protocol: <network_protocol> <.> }, ], fallbackIP: \"<egress_destination>\" <.> }",
"apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: egress-router-redirect spec: networkInterface: { macvlan: { mode: \"Bridge\" } } addresses: [ { ip: \"192.168.12.99/24\", gateway: \"192.168.12.1\" } ] mode: Redirect redirect: { redirectRules: [ { destinationIP: \"10.0.0.99\", port: 80, protocol: UDP }, { destinationIP: \"203.0.113.26\", port: 8080, targetPort: 80, protocol: TCP }, { destinationIP: \"203.0.113.27\", port: 8443, targetPort: 443, protocol: TCP } ] }",
"apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: web-app protocol: TCP port: 8080 type: ClusterIP selector: app: egress-router-cni <.>",
"oc get network-attachment-definition egress-router-cni-nad",
"NAME AGE egress-router-cni-nad 18m",
"oc get deployment egress-router-cni-deployment",
"NAME READY UP-TO-DATE AVAILABLE AGE egress-router-cni-deployment 1/1 1 1 18m",
"oc get pods -l app=egress-router-cni",
"NAME READY STATUS RESTARTS AGE egress-router-cni-deployment-575465c75c-qkq6m 1/1 Running 0 18m",
"POD_NODENAME=USD(oc get pod -l app=egress-router-cni -o jsonpath=\"{.items[0].spec.nodeName}\")",
"oc debug node/USDPOD_NODENAME",
"chroot /host",
"cat /tmp/egress-router-log",
"2021-04-26T12:27:20Z [debug] Called CNI ADD 2021-04-26T12:27:20Z [debug] Gateway: 192.168.12.1 2021-04-26T12:27:20Z [debug] IP Source Addresses: [192.168.12.99/24] 2021-04-26T12:27:20Z [debug] IP Destinations: [80 UDP 10.0.0.99/30 8080 TCP 203.0.113.26/30 80 8443 TCP 203.0.113.27/30 443] 2021-04-26T12:27:20Z [debug] Created macvlan interface 2021-04-26T12:27:20Z [debug] Renamed macvlan to \"net1\" 2021-04-26T12:27:20Z [debug] Adding route to gateway 192.168.12.1 on macvlan interface 2021-04-26T12:27:20Z [debug] deleted default route {Ifindex: 3 Dst: <nil> Src: <nil> Gw: 10.128.10.1 Flags: [] Table: 254} 2021-04-26T12:27:20Z [debug] Added new default route with gateway 192.168.12.1 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p UDP --dport 80 -j DNAT --to-destination 10.0.0.99 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8080 -j DNAT --to-destination 203.0.113.26:80 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8443 -j DNAT --to-destination 203.0.113.27:443 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat -o net1 -j SNAT --to-source 192.168.12.99",
"crictl ps --name egress-router-cni-pod | awk '{print USD1}'",
"CONTAINER bac9fae69ddb6",
"crictl inspect -o yaml bac9fae69ddb6 | grep 'pid:' | awk '{print USD2}'",
"68857",
"nsenter -n -t 68857",
"ip route",
"default via 192.168.12.1 dev net1 10.128.10.0/23 dev eth0 proto kernel scope link src 10.128.10.18 192.168.12.0/24 dev net1 proto kernel scope link src 192.168.12.99 192.168.12.1 dev net1",
"oc annotate namespace <namespace> k8s.ovn.org/multicast-enabled=true",
"apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: \"true\"",
"oc project <project>",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF",
"POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')",
"oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname",
"CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')",
"oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"",
"mlistener",
"oc annotate namespace <namespace> \\ 1 k8s.ovn.org/multicast-enabled-",
"apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: null",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056",
"spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056",
"oc patch network.operator cluster --type merge -p \"USD(cat <file_name>.yaml)\"",
"network.operator.openshift.io/cluster patched",
"oc get network.operator cluster -o jsonpath=\"{.spec.exportNetworkFlows}\"",
"{\"netFlow\":{\"collectors\":[\"192.168.1.99:2056\"]}}",
"for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node -o jsonpath='{[email protected][*]}{.metadata.name}{\"\\n\"}{end}'); do ; echo; echo USDpod; oc -n openshift-ovn-kubernetes exec -c ovnkube-node USDpod -- bash -c 'for type in ipfix sflow netflow ; do ovs-vsctl find USDtype ; done'; done",
"ovnkube-node-xrn4p _uuid : a4d2aaca-5023-4f3d-9400-7275f92611f9 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : [\"192.168.1.99:2056\"] ovnkube-node-z4vq9 _uuid : 61d02fdb-9228-4993-8ff5-b27f01a29bd6 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : [\"192.168.1.99:2056\"]-",
"oc patch network.operator cluster --type='json' -p='[{\"op\":\"remove\", \"path\":\"/spec/exportNetworkFlows\"}]'",
"network.operator.openshift.io/cluster patched",
"oc patch networks.operator.openshift.io cluster --type=merge -p '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"hybridOverlayConfig\":{ \"hybridClusterNetwork\":[ { \"cidr\": \"<cidr>\", \"hostPrefix\": <prefix> } ], \"hybridOverlayVXLANPort\": <overlay_port> } } } } }'",
"network.operator.openshift.io/cluster patched",
"oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.defaultNetwork.ovnKubernetesConfig}\"",
"oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": true } }'",
"oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\":{ \"paused\": true } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc get Network.config cluster -o jsonpath='{.status.migration}'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\" } } }'",
"oc get Network.config cluster -o jsonpath='{.status.migration.networkType}'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OpenShiftSDN\" } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":<mtu>, \"vxlanPort\":<port> }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":1200 }}}}'",
"#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && sleep 3 done done",
"#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done",
"oc -n openshift-multus rollout status daemonset/multus",
"Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out",
"oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": false } }'",
"oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\": { \"paused\": false } }'",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml",
"oc get Network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'",
"oc get nodes",
"oc get pod -n openshift-machine-config-operator",
"NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h",
"oc logs <pod> -n openshift-machine-config-operator",
"oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"ovnKubernetesConfig\":null } } }'",
"oc delete namespace openshift-ovn-kubernetes",
"oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml",
"#!/bin/bash if [ -n \"USDOVN_SDN_MIGRATION_TIMEOUT\" ] && [ \"USDOVN_SDN_MIGRATION_TIMEOUT\" = \"0s\" ]; then unset OVN_SDN_MIGRATION_TIMEOUT fi #loops the timeout command of the script to repeatedly check the cluster Operators until all are available. co_timeout=USD{OVN_SDN_MIGRATION_TIMEOUT:-1200s} timeout \"USDco_timeout\" bash <<EOT until oc wait co --all --for='condition=AVAILABLE=True' --timeout=10s && oc wait co --all --for='condition=PROGRESSING=False' --timeout=10s && oc wait co --all --for='condition=DEGRADED=False' --timeout=10s; do sleep 10 echo \"Some ClusterOperators Degraded=False,Progressing=True,or Available=False\"; done EOT",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{\"spec\":{\"migration\":null}}'",
"oc get nncp",
"NAME STATUS REASON bondmaster0 Available SuccessfullyConfigured",
"oc delete nncp <nncp_manifest_filename>",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\" } } }'",
"oc get mcp",
"oc get co",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OVNKubernetes\", \"features\": { \"egressIP\": <bool>, \"egressFirewall\": <bool>, \"multicast\": <bool> } } } }'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":<mtu>, \"genevePort\":<port>, \"v4InternalSubnet\":\"<ipv4_subnet>\" }}}}'",
"oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":1200 }}}}'",
"oc get mcp",
"oc describe node | egrep \"hostname|machineconfig\"",
"kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done",
"oc get machineconfig <config_name> -o yaml | grep ExecStart",
"ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes",
"oc get pod -n openshift-machine-config-operator",
"NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h",
"oc logs <pod> -n openshift-machine-config-operator",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OVNKubernetes\" } }'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"clusterNetwork\": [ { \"cidr\": \"<cidr>\", \"hostPrefix\": <prefix> } ], \"networkType\": \"OVNKubernetes\" } }'",
"oc -n openshift-multus rollout status daemonset/multus",
"Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out",
"#!/bin/bash readarray -t POD_NODES <<< \"USD(oc get pod -n openshift-machine-config-operator -o wide| grep daemon|awk '{print USD1\" \"USD7}')\" for i in \"USD{POD_NODES[@]}\" do read -r POD NODE <<< \"USDi\" until oc rsh -n openshift-machine-config-operator \"USDPOD\" chroot /rootfs shutdown -r +1 do echo \"cannot reboot node USDNODE, retry\" && sleep 3 done done",
"#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done",
"oc get network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'",
"oc get nodes",
"oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'",
"oc get co",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'",
"oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"openshiftSDNConfig\": null } } }'",
"oc delete namespace openshift-sdn",
"IP capacity = public cloud default capacity - sum(current IP assignments)",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"eni-078d267045138e436\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ipv4\":14,\"ipv6\":15} } ]",
"cloud.network.openshift.io/egress-ipconfig: [ { \"interface\":\"nic0\", \"ifaddr\":{\"ipv4\":\"10.0.128.0/18\"}, \"capacity\":{\"ip\":14} } ]",
"oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'",
"oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\"]}' oc patch netnamespace project2 --type=merge -p '{\"egressIPs\": [\"192.168.1.101\"]}'",
"oc patch hostsubnet <node_name> --type=merge -p '{ \"egressCIDRs\": [ \"<ip_address_range>\", \"<ip_address_range>\" ] }'",
"oc patch hostsubnet node1 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}' oc patch hostsubnet node2 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}'",
"oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'",
"oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\",\"192.168.1.101\"]}'",
"oc patch hostsubnet <node_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\", \"<ip_address>\" ] }'",
"oc patch hostsubnet node1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\", \"192.168.1.101\", \"192.168.1.102\"]}'",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2",
"egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0",
"oc create -f <policy_name>.yaml -n <project>",
"oc create -f default.yaml -n project1",
"egressnetworkpolicy.network.openshift.io/v1 created",
"oc get egressnetworkpolicy --all-namespaces",
"oc describe egressnetworkpolicy <policy_name>",
"Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0",
"oc get -n <project> egressnetworkpolicy",
"oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml",
"oc replace -f <filename>.yaml",
"oc get -n <project> egressnetworkpolicy",
"oc delete -n <project> egressnetworkpolicy <name>",
"curl <router_service_IP> <port>",
"openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>",
"apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: 2 initContainers: containers:",
"apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod",
"apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod",
"80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27",
"curl <router_service_IP> <port>",
"apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1",
"apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |-",
"!*.example.com !192.168.1.0/24 192.168.2.1 *",
"apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1",
"apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/",
"apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- - name: EGRESS_DNS_PROXY_DEBUG 5 value: \"1\"",
"80 172.16.12.11 100 example.com",
"8080 192.168.60.252 80 8443 web.example.com 443",
"apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: type: ClusterIP selector: name: egress-dns-proxy",
"apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy",
"oc create -f egress-router-service.yaml",
"Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 Fallback 203.0.113.27",
"oc delete configmap egress-routes --ignore-not-found",
"oc create configmap egress-routes --from-file=destination=my-egress-destination.txt",
"apiVersion: v1 kind: ConfigMap metadata: name: egress-routes data: destination: | # Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 # Fallback 203.0.113.27",
"env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination",
"oc annotate netnamespace <namespace> netnamespace.network.openshift.io/multicast-enabled=true",
"oc project <project>",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF",
"POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')",
"oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname",
"CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')",
"oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"",
"mlistener",
"oc annotate netnamespace <namespace> \\ 1 netnamespace.network.openshift.io/multicast-enabled-",
"oc adm pod-network join-projects --to=<project1> <project2> <project3>",
"oc get netnamespaces",
"oc adm pod-network isolate-projects <project1> <project2>",
"oc adm pod-network make-projects-global <project1> <project2>",
"oc edit network.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: [\"30s\"]",
"oc get networks.operator.openshift.io -o yaml",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List",
"oc get clusteroperator network",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m",
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"oc expose svc hello-openshift",
"oc get routes -o yaml <name of resource> 1",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: www.example.com 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift",
"oc get ingresses.config/cluster -o jsonpath={.spec.domain}",
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift",
"oc -n hello-openshift create -f hello-openshift-route.yaml",
"oc -n hello-openshift get routes/hello-openshift-edge -o yaml",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3",
"oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1",
"oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s",
"oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000;\\ 1 includeSubDomains;preload\"",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 spec: host: def.abc.com tls: termination: \"reencrypt\" wildcardPolicy: \"Subdomain\"",
"oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"",
"metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0",
"oc annotate route --all -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"",
"oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'",
"Name: routename HSTS: max-age=0",
"oc edit ingresses.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: 'hello-openshift-default.apps.username.devcluster.openshift.com' requiredHSTSPolicies: 1 - domainPatterns: 2 - '*hello-openshift-default.apps.username.devcluster.openshift.com' - '*hello-openshift-default2.apps.username.devcluster.openshift.com' namespaceSelector: 3 matchLabels: myPolicy: strict maxAge: 4 smallestMaxAge: 1 largestMaxAge: 31536000 preloadPolicy: RequirePreload 5 includeSubDomainsPolicy: RequireIncludeSubDomains 6 - domainPatterns: 7 - 'abc.example.com' - '*xyz.example.com' namespaceSelector: matchLabels: {} maxAge: {} preloadPolicy: NoOpinion includeSubDomainsPolicy: RequireNoIncludeSubDomains",
"oc annotate route --all --all-namespaces --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"",
"oc annotate route --all -n my-namespace --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"",
"oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{\"\\n\"}{end}'",
"oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'",
"Name: <_routename_> HSTS: max-age=31536000;preload;includeSubDomains",
"tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1",
"tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789",
"oc annotate route <route_name> router.openshift.io/cookie_name=\"<cookie_name>\"",
"oc annotate route my_route router.openshift.io/cookie_name=\"my_cookie\"",
"ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}')",
"curl USDROUTE_NAME -k -c /tmp/cookie_jar",
"curl USDROUTE_NAME -k -b /tmp/cookie_jar",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: \"/test\" 1 to: kind: Service name: service-name",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1",
"oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge",
"spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" 1 route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 2 spec: rules: - host: www.example.com 3 http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate",
"spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443",
"oc apply -f ingress.yaml",
"oc get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- to: kind: Service name: frontend",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend spec: rules: tls: - {} 1",
"oc create -f example-ingress.yaml",
"oc get routes -o yaml",
"apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 spec: tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3",
"oc create secret generic dest-ca-cert --from-file=tls.crt=<file_path>",
"oc -n test-ns create secret generic dest-ca-cert --from-file=tls.crt=tls.crt",
"secret/dest-ca-cert created",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend annotations: route.openshift.io/termination: reencrypt route.openshift.io/destination-ca-certificate-secret: secret-ca-cert spec: tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"apiVersion: v1 kind: Service metadata: creationTimestamp: yyyy-mm-ddT00:00:00Z labels: name: <service_name> manager: kubectl-create operation: Update time: yyyy-mm-ddT00:00:00Z name: <service_name> namespace: <namespace_name> resourceVersion: \"<resource_version_number>\" selfLink: \"/api/v1/namespaces/<namespace_name>/services/<service_name>\" uid: <uid_number> spec: clusterIP: 172.30.0.0/16 clusterIPs: 1 - 172.30.0.0/16 - <second_IP_address> ipFamilies: 2 - IPv4 - IPv6 ipFamilyPolicy: RequireDualStack 3 ports: - port: 8080 protocol: TCP targetport: 8080 selector: name: <namespace_name> sessionAffinity: None type: ClusterIP status: loadbalancer: {}",
"oc get endpoints",
"oc get endpointslices",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"oc create route passthrough route-passthrough-secured --service=frontend --port=8080",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend",
"apiVersion: v1 kind: Service metadata: name: http-service spec: clusterIP: 172.30.163.110 externalIPs: - 192.168.132.253 externalTrafficPolicy: Cluster ports: - name: highport nodePort: 31903 port: 30102 protocol: TCP targetPort: 30102 selector: app: web sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 192.168.132.253",
"{ \"policy\": { \"allowedCIDRs\": [], \"rejectedCIDRs\": [] } }",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: {}",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 172.16.66.10/23 rejectedCIDRs: - 172.16.66.10/24",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 externalIP: policy: {}",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: [] 1 policy: 2",
"policy: allowedCIDRs: [] 1 rejectedCIDRs: [] 2",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: - 192.168.132.254/29",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 192.168.132.0/29 - 192.168.132.8/29 rejectedCIDRs: - 192.168.132.7/32",
"oc describe networks.config cluster",
"oc edit networks.config cluster",
"apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: 1",
"oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{\"\\n\"}}'",
"oc adm policy add-cluster-role-to-user cluster-admin username",
"oc new-project <project_name>",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git",
"oc get svc -n <project_name>",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s",
"oc project <project_name>",
"oc expose service nodejs-ex",
"route.route.openshift.io/nodejs-ex exposed",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None",
"curl --head nodejs-ex-myproject.example.com",
"HTTP/1.1 200 OK",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: finops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - finance - ops",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: dev-router namespace: openshift-ingress-operator spec: namespaceSelector: matchLabels: name: dev",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: devops-router namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: In values: - dev - ops",
"oc edit ingresscontroller -n openshift-ingress-operator default",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: namespaceSelector: matchExpressions: - key: name operator: NotIn values: - finance - ops - dev",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" routeSelector: matchLabels: type: sharded",
"oc apply -f router-internal.yaml",
"oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net",
"cat router-internal.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> 1 nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" namespaceSelector: matchLabels: type: sharded",
"oc apply -f router-internal.yaml",
"oc expose svc <service-name> --hostname <route-name>.apps-sharded.basedomain.example.net",
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift",
"oc -n hello-openshift create -f hello-openshift-route.yaml",
"oc -n hello-openshift get routes/hello-openshift-edge -o yaml",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: internal namespace: openshift-ingress-operator spec: domain: example.com endpointPublishingStrategy: type: HostNetwork hostNetwork: httpPort: 80 httpsPort: 443 statsPort: 1936",
"oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"scope\":\"Internal\"}}}}'",
"oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml",
"oc -n openshift-ingress delete services/router-default",
"oc -n openshift-ingress-operator patch ingresscontrollers/private --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"scope\":\"External\"}}}}'",
"oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml",
"oc -n openshift-ingress delete services/router-default",
"apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: <custom_ic_name> 1 namespace: openshift-ingress-operator spec: replicas: 1 domain: <custom_ic_domain_name> 2 nodePlacement: nodeSelector: matchLabels: <key>: <value> 3 namespaceSelector: matchLabels: <key>: <value> 4 endpointPublishingStrategy: type: NodePortService",
"oc label node <node_name> <key>=<value> 1",
"oc create -f <ingress_controller_cr>.yaml",
"oc get svc -n openshift-ingress",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-internal-default ClusterIP 172.30.195.74 <none> 80/TCP,443/TCP,1936/TCP 223d router-nodeport-custom-ic3 NodePort 172.30.109.219 <none> 80:32432/TCP,443:31366/TCP,1936:30499/TCP 155m",
"oc new-project <project_name>",
"oc label namespace <project_name> <key>=<value> 1",
"oc new-app --image=<image_name> 1",
"oc expose svc/<service_name> --hostname=<svc_name>-<project_name>.<custom_ic_domain_name> 1",
"oc get route/hello-openshift -o json | jq '.status.ingress'",
"{ \"conditions\": [ { \"lastTransitionTime\": \"2024-05-17T18:25:41Z\", \"status\": \"True\", \"type\": \"Admitted\" } ], [ { \"host\": \"hello-openshift.nodeportsvc.ipi-cluster.example.com\", \"routerCanonicalHostname\": \"router-nodeportsvc.nodeportsvc.ipi-cluster.example.com\", \"routerName\": \"nodeportsvc\", \"wildcardPolicy\": \"None\" } ], }",
"oc patch --type=merge -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"namespaceSelector\":{\"matchExpressions\":[{\"key\":\"<key>\",\"operator\":\"NotIn\",\"values\":[\"<value>]}]}}}'",
"dig +short <svc_name>-<project_name>.<custom_ic_domain_name>",
"curl <svc_name>-<project_name>.<custom_ic_domain_name>:<port> 1",
"Hello OpenShift!",
"oc adm policy add-cluster-role-to-user cluster-admin username",
"oc new-project <project_name>",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git",
"oc get svc -n <project_name>",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s",
"oc project <project_name>",
"oc expose service nodejs-ex",
"route.route.openshift.io/nodejs-ex exposed",
"oc get route",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None",
"curl --head nodejs-ex-myproject.example.com",
"HTTP/1.1 200 OK",
"oc project project1",
"apiVersion: v1 kind: Service metadata: name: egress-2 1 spec: ports: - name: db port: 3306 2 loadBalancerIP: loadBalancerSourceRanges: 3 - 10.0.0.0/8 - 192.168.0.0/16 type: LoadBalancer 4 selector: name: mysql 5",
"oc create -f <file-name>",
"oc create -f mysql-lb.yaml",
"oc get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m",
"curl <public-ip>:<port>",
"curl 172.29.121.74:3306",
"mysql -h 172.30.131.89 -u admin -p",
"Enter password: Welcome to the MariaDB monitor. Commands end with ; or \\g. MySQL [(none)]>",
"oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1",
"oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"type\":\"LoadBalancerService\", \"loadBalancer\": {\"scope\":\"External\", \"providerParameters\":{\"type\":\"AWS\", \"aws\": {\"type\":\"Classic\", \"classicLoadBalancer\": {\"connectionIdleTimeout\":\"5m\"}}}}}}}'",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"loadBalancer\":{\"providerParameters\":{\"aws\":{\"classicLoadBalancer\": {\"connectionIdleTimeout\":null}}}}}}}'",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"oc apply -f ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: Classic type: LoadBalancerService",
"oc apply -f ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"oc replace --force --wait -f ingresscontroller.yml",
"oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}' AWS",
"cat ingresscontroller-aws-nlb.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: USDmy_ingress_controller 1 namespace: openshift-ingress-operator spec: domain: USDmy_unique_ingress_domain 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: AWS aws: type: NLB",
"oc create -f ingresscontroller-aws-nlb.yaml",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1",
"ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml",
"cluster-ingress-default-ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService",
"oc get networks.config cluster -o jsonpath='{.spec.externalIP}{\"\\n\"}'",
"apiVersion: v1 kind: Service metadata: name: svc-with-externalip spec: externalIPs: policy: allowedCIDRs: - 192.168.123.0/28",
"oc patch svc <name> -p '{ \"spec\": { \"externalIPs\": [ \"<ip_address>\" ] } }'",
"oc patch svc mysql-55-rhel7 -p '{\"spec\":{\"externalIPs\":[\"192.174.120.10\"]}}'",
"\"mysql-55-rhel7\" patched",
"oc get svc",
"NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql-55-rhel7 172.30.131.89 192.174.120.10 3306/TCP 13m",
"oc adm policy add-cluster-role-to-user cluster-admin <user_name>",
"oc new-project <project_name>",
"oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git",
"oc get svc -n <project_name>",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s",
"oc project <project_name>",
"oc edit svc <service_name>",
"spec: ports: - name: 8443-tcp nodePort: 30327 1 port: 8443 protocol: TCP targetPort: 8443 sessionAffinity: None type: NodePort 2",
"oc get svc -n myproject",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.217.127 <none> 3306/TCP 9m44s nodejs-ex-ingress NodePort 172.30.107.72 <none> 3306:31345/TCP 39s",
"oc delete svc nodejs-ex",
"oc get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE httpd NodePort 172.xx.xx.xx <none> 8443:30327/TCP 109s",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"type\":\"LoadBalancerService\", \"loadbalancer\": {\"scope\":\"External\", \"allowedSourceRanges\":[\"0.0.0.0/0\"]}}}}' 1",
"oc get svc router-default -n openshift-ingress -o yaml",
"apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/load-balancer-source-ranges: 192.168.0.1/32",
"oc get svc router-default -n openshift-ingress -o yaml",
"spec: loadBalancerSourceRanges: - 0.0.0.0/0",
"oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\": {\"loadBalancer\":{\"allowedSourceRanges\":[\"0.0.0.0/0\"]}}}}' 1",
"oc get ingressclass",
"oc get ingress -A",
"oc patch ingress/<ingress_name> --type=merge --patch '{\"spec\":{\"ingressClassName\":\"openshift-default\"}}'",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: openshift-nmstate spec: finalizers: - kubernetes EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-nmstate namespace: openshift-nmstate spec: targetNamespaces: - openshift-nmstate EOF",
"cat << EOF| oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kubernetes-nmstate-operator namespace: openshift-nmstate spec: channel: stable installPlanApproval: Automatic name: kubernetes-nmstate-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"cat << EOF | oc apply -f - apiVersion: nmstate.io/v1 kind: NMState metadata: name: nmstate EOF",
"get clusterserviceversion -n openshift-nmstate -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase kubernetes-nmstate-operator.4.13.0-202210210157 Succeeded",
"oc delete --namespace openshift-nmstate subscription kubernetes-nmstate-operator",
"oc get --namespace openshift-nmstate clusterserviceversion",
"NAME DISPLAY VERSION REPLACES PHASE kubernetes-nmstate-operator.v4.18.0 Kubernetes NMState Operator 4.18.0 Succeeded",
"oc delete --namespace openshift-nmstate clusterserviceversion kubernetes-nmstate-operator.v4.18.0",
"oc -n openshift-nmstate delete nmstate nmstate",
"oc delete --all deployments --namespace=openshift-nmstate",
"oc delete crd nmstates.nmstate.io",
"oc delete crd nodenetworkconfigurationenactments.nmstate.io",
"oc delete crd nodenetworkstates.nmstate.io",
"oc delete crd nodenetworkconfigurationpolicies.nmstate.io",
"oc delete namespace kubernetes-nmstate",
"oc get nns",
"oc get nns node01 -o yaml",
"apiVersion: nmstate.io/v1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: interfaces: route-rules: routes: lastSuccessfulUpdateTime: \"2020-01-31T12:14:00Z\" 3",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 maxUnavailable: 3 4 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 5 type: linux-bridge state: up ipv4: dhcp: true enabled: true auto-dns: false bridge: options: stp: enabled: false port: - name: eth1 dns-resolver: 6 config: search: - example.com - example.org server: - 8.8.8.8",
"oc apply -f br1-eth1-policy.yaml 1",
"oc get nncp",
"oc get nncp <policy> -o yaml",
"oc get nnce",
"oc get nnce <node>.<policy> -o yaml",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9",
"oc apply -f <br1-eth1-policy.yaml> 1",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond with ports eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 port: 12 - eth1 - eth2 mtu: 1450 13",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond-vlan 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond10 4 description: Bonding eth2 and eth3 5 type: bond 6 state: up 7 link-aggregation: mode: balance-xor 8 options: miimon: '140' 9 port: 10 - eth2 - eth3 - name: bond10.103 11 description: vlan using bond10 12 type: vlan 13 state: up 14 vlan: base-iface: bond10 15 id: 103 16 ipv4: dhcp: true 17 enabled: true 18",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-copy-ipv4-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" capture: eth1-nic: interfaces.name==\"eth1\" 3 eth1-routes: routes.running.next-hop-interface==\"eth1\" br1-routes: capture.eth1-routes | routes.running.next-hop-interface := \"br1\" desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port type: linux-bridge 4 state: up ipv4: \"{{ capture.eth1-nic.interfaces.0.ipv4 }}\" 5 bridge: options: stp: enabled: false port: - name: eth1 6 routes: config: \"{{ capture.br1-routes.routes.running }}\"",
"interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true",
"interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false",
"interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true",
"interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: worker-0-dns-testing spec: nodeSelector: kubernetes.io/hostname: <target_node> desiredState: dns-resolver: config: search: - example.com - example.org server: - 2001:db8:f::1 - 192.0.2.251",
"desiredState: dns-resolver: config: search: options: - timeout:2 - attempts:3",
"dns-resolver: config: {} interfaces: []",
"dns-resolver: config: server: - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true",
"dns-resolver: config: search: - example.com - example.org server: - 2001:db8:1::d1 - 2001:db8:1::d2 - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: address: - ip: 192.0.2.251 prefix-length: 24 dhcp: false enabled: true ipv6: address: - ip: 2001:db8:1::1 prefix-length: 64 dhcp: false enabled: true autoconf: false",
"dns-resolver: config: search: - example.com - example.org server: [] interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true ipv6: enabled: true dhcp: true autoconf: true auto-dns: true",
"dns-resolver: config: interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: dhcp: false enabled: true address: - ip: 192.0.2.251 1 prefix-length: 24 routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01",
"oc apply -f ens01-bridge-testfail.yaml",
"nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created",
"oc get nncp",
"NAME STATUS ens01-bridge-testfail FailedToConfigure",
"oc get nnce",
"NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure",
"oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type==\"Failing\")].message}'",
"[2024-10-10T08:40:46Z INFO nmstatectl] Nmstate version: 2.2.37 NmstateError: InvalidArgument: Controller interface br1 is holding unknown port ens01",
"oc get nns control-plane-1 -o yaml",
"- ipv4: name: ens1 state: up type: ethernet",
"oc edit nncp ens01-bridge-testfail",
"port: - name: ens1",
"oc get nncp",
"NAME STATUS ens01-bridge-testfail SuccessfullyConfigured",
"cat >> /etc/named.conf <<EOF zone \"root-servers.net\" IN { type master; file \"named.localhost\"; }; EOF",
"systemctl restart named",
"journalctl -u named|grep root-servers.net",
"Jul 03 15:16:26 rhel-8-10 bash[xxxx]: zone root-servers.net/IN: loaded serial 0 Jul 03 15:16:26 rhel-8-10 named[xxxx]: zone root-servers.net/IN: loaded serial 0",
"host -t NS root-servers.net. 127.0.0.1",
"Using domain server: Name: 127.0.0.1 Address: 127.0.0.53 Aliases: root-servers.net name server root-servers.net.",
"echo 'server=/root-servers.net/<DNS_server_IP>'> /etc/dnsmasq.d/delegate-root-servers.net.conf",
"systemctl restart dnsmasq",
"journalctl -u dnsmasq|grep root-servers.net",
"Jul 03 15:31:25 rhel-8-10 dnsmasq[1342]: using nameserver 192.168.1.1#53 for domain root-servers.net",
"host -t NS root-servers.net. 127.0.0.1",
"Using domain server: Name: 127.0.0.1 Address: 127.0.0.1#53 Aliases: root-servers.net name server root-servers.net.",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster networking: clusterNetwork: 1 - cidr: <ip_address_from_cidr> hostPrefix: 23 network type: OVNKubernetes machineNetwork: 2 - cidr: <ip_address_from_cidr> serviceNetwork: 3 - 172.30.0.0/16 status: noProxy: - localhost - .cluster.local - .svc - 127.0.0.1 - <api_server_internal_url> 4",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:",
"apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4",
"oc create -f user-ca-bundle.yaml",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {}",
"oc get proxy/cluster -o yaml",
"oc get proxy/cluster -o jsonpath='{.status}'",
"{ status: httpProxy: http://user:xxx@xxxx:3128 httpsProxy: http://user:xxx@xxxx:3128 noProxy: .cluster.local,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,localhost,test.no-proxy.com }",
"oc logs -n openshift-machine-config-operator USD(oc get pods -n openshift-machine-config-operator -l k8s-app=machine-config-operator -o name)",
"oc logs -n openshift-cluster-version USD(oc get pods -n openshift-cluster-version -l k8s-app=machine-config-operator -o name)",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:",
"apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4",
"oc create -f user-ca-bundle.yaml",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5",
"config.openshift.io/inject-trusted-cabundle=\"true\"",
"apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: \"true\" name: ca-inject 1 namespace: apache",
"apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: spec: containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: ca-inject items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2",
"oc -n openshift-kuryr edit cm kuryr-config",
"kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: default 1",
"oc -n openshift-kuryr edit cm kuryr-config",
"kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: ovn",
"openstack loadbalancer list | grep amphora",
"a4db683b-2b7b-4988-a582-c39daaad7981 | ostest-7mbj6-kuryr-api-loadbalancer | 84c99c906edd475ba19478a9a6690efd | 172.30.0.1 | ACTIVE | amphora",
"openstack loadbalancer list | grep ovn",
"2dffe783-98ae-4048-98d0-32aa684664cc | openshift-apiserver-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.167.119 | ACTIVE | ovn 0b1b2193-251f-4243-af39-2f99b29d18c5 | openshift-etcd/etcd | 84c99c906edd475ba19478a9a6690efd | 172.30.143.226 | ACTIVE | ovn f05b07fc-01b7-4673-bd4d-adaa4391458e | openshift-dns-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.152.27 | ACTIVE | ovn",
"openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>",
"openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS--protocol-port 6443 API_OCP_CLUSTER",
"openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS",
"openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443",
"for SERVER in USD(MASTER-0-IP MASTER-1-IP MASTER-2-IP) do openstack loadbalancer member create --address USDSERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443 done",
"openstack floating ip unset USDAPI_FIP",
"openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value API_OCP_CLUSTER) USDAPI_FIP",
"openstack floating ip unset USDAPI_FIP",
"openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value USD{OCP_CLUSTER}-kuryr-api-loadbalancer) USDAPI_FIP",
"oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml",
"apiVersion: v1 kind: Service metadata: labels: ingresscontroller.operator.openshift.io/owning-ingresscontroller: default name: router-external-default 1 namespace: openshift-ingress spec: ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https - name: metrics port: 1936 protocol: TCP targetPort: 1936 selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default sessionAffinity: None type: LoadBalancer 2",
"oc apply -f external_router.yaml",
"oc -n openshift-ingress get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-external-default LoadBalancer 172.30.235.33 10.46.22.161 80:30112/TCP,443:32359/TCP,1936:30317/TCP 3m38s router-internal-default ClusterIP 172.30.115.123 <none> 80/TCP,443/TCP,1936/TCP 22h",
"openstack loadbalancer list | grep router-external",
"| 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia |",
"openstack floating ip list | grep 172.30.235.33",
"| e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c |",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"\"event\":\"ipAllocated\",\"ip\":\"172.22.0.201\",\"msg\":\"IP address assigned by controller",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: metallb-system EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system EOF",
"oc get operatorgroup -n metallb-system",
"NAME AGE metallb-operator 14m",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators 1 sourceNamespace: openshift-marketplace",
"oc create -f metallb-sub.yaml",
"oc label ns metallb-system \"openshift.io/cluster-monitoring=true\"",
"oc get installplan -n metallb-system",
"NAME CSV APPROVAL APPROVED install-wzg94 metallb-operator.4.13.0-nnnnnnnnnnnn Automatic true",
"oc get clusterserviceversion -n metallb-system -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase metallb-operator.4.13.0-nnnnnnnnnnnn Succeeded",
"cat << EOF | oc apply -f - apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system EOF",
"oc get deployment -n metallb-system controller",
"NAME READY UP-TO-DATE AVAILABLE AGE controller 1/1 1 1 11m",
"oc get daemonset -n metallb-system speaker",
"NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE speaker 6 6 6 6 6 kubernetes.io/os=linux 18m",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: <.> node-role.kubernetes.io/worker: \"\" speakerTolerations: <.> - key: \"Example\" operator: \"Exists\" effect: \"NoExecute\"",
"apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority value: 1000000",
"oc apply -f myPriorityClass.yaml",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: priorityClassName: high-priority 1 affinity: podAffinity: 2 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname speakerConfig: priorityClassName: high-priority affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname",
"oc apply -f MetalLBPodConfig.yaml",
"oc get pods -n metallb-system -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName",
"NAME PRIORITY controller-584f5c8cd8-5zbvg high-priority metallb-operator-controller-manager-9c8d9985-szkqg <none> metallb-operator-webhook-server-c895594d4-shjgx <none> speaker-dddf7 high-priority",
"oc get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name -n metallb-system",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: resources: limits: cpu: \"200m\" speakerConfig: resources: limits: cpu: \"300m\"",
"oc apply -f CPULimits.yaml",
"oc describe pod <pod_name>",
"oc get subscription metallb-operator -n metallb-system -o yaml | grep currentCSV",
"currentCSV: metallb-operator.4.10.0-202207051316",
"oc delete subscription metallb-operator -n metallb-system",
"subscription.operators.coreos.com \"metallb-operator\" deleted",
"oc delete clusterserviceversion metallb-operator.4.10.0-202207051316 -n metallb-system",
"clusterserviceversion.operators.coreos.com \"metallb-operator.4.10.0-202207051316\" deleted",
"oc get operatorgroup -n metallb-system",
"NAME AGE metallb-system-7jc66 85m",
"oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: \"\" creationTimestamp: \"2023-10-25T09:42:49Z\" generateName: metallb-system- generation: 1 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: \"25027\" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: targetNamespaces: - metallb-system upgradeStrategy: Default status: lastUpdated: \"2023-10-25T09:42:49Z\" namespaces: - metallb-system",
"oc edit n metallb-system",
"operatorgroup.operators.coreos.com/metallb-system-7jc66 edited",
"oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: \"\" creationTimestamp: \"2023-10-25T09:42:49Z\" generateName: metallb-system- generation: 2 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: \"61658\" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: upgradeStrategy: Default status: lastUpdated: \"2023-10-25T14:31:30Z\" namespaces: - \"\"",
"oc get namespaces | grep metallb-system",
"metallb-system Active 31m",
"oc get metallb -n metallb-system",
"NAME AGE metallb 33m",
"oc get csv -n metallb-system",
"NAME DISPLAY VERSION REPLACES PHASE metallb-operator.4.13.0-202207051316 MetalLB Operator 4.13.0-202207051316 Succeeded",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example labels: 1 zone: east spec: addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75",
"oc apply -f ipaddresspool.yaml",
"oc describe -n metallb-system IPAddressPool doc-example",
"Name: doc-example Namespace: metallb-system Labels: zone=east Annotations: <none> API Version: metallb.io/v1beta1 Kind: IPAddressPool Metadata: Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Events: <none>",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: addresses: - 10.0.100.0/28 autoAssign: false",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-combined namespace: metallb-system spec: addresses: - 10.0.100.0/28 - 2002:2:2::1-2002:2:2::100",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-service-allocation namespace: metallb-system spec: addresses: - 192.168.20.0/24 serviceAllocation: priority: 50 1 namespaces: 2 - namespace-a - namespace-b namespaceSelectors: 3 - matchLabels: zone: east serviceSelectors: 4 - matchExpressions: - key: security operator: In values: - S1",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-basic spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-basic namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-basic",
"oc apply -f bgpadvertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-adv labels: zone: east spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 autoAssign: false",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-1 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 65535:65282 aggregationLength: 32 localPref: 100",
"oc apply -f bgpadvertisement1.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-2 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 8000:800 aggregationLength: 30 aggregationLengthV6: 124",
"oc apply -f bgpadvertisement2.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example spec: ipAddressPools: - pool1 nodeSelector: - matchLabels: kubernetes.io/hostname: NodeA - matchLabels: kubernetes.io/hostname: NodeB",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2",
"oc apply -f l2advertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2-label labels: zone: east spec: addresses: - 172.31.249.87/32",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement-label namespace: metallb-system spec: ipAddressPoolSelectors: - matchExpressions: - key: zone operator: In values: - east",
"oc apply -f l2advertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 interfaces: - interfaceA - interfaceB",
"oc apply -f l2advertisement.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400",
"oc apply -f ipaddresspool1.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool2 spec: addresses: - 5.5.5.100-5.5.5.200 - 2001:100:5::200-2001:100:5::400",
"oc apply -f ipaddresspool2.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer1 spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer1.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer2 spec: peerAddress: 10.0.0.2 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer2.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: - pool1 peers: - peer1 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100",
"oc apply -f bgpadvertisement1.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-2 namespace: metallb-system spec: ipAddressPools: - pool2 peers: - peer2 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100",
"oc apply -f bgpadvertisement2.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-nodesel namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 nodeSelectors: - matchExpressions: - key: kubernetes.io/hostname operator: In values: [compute-1.example.com, compute-2.example.com]",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-peer-bfd namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 holdTime: \"10s\" bfdProfile: doc-example-bfd-profile-full",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv4 namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64500 myASN: 64500 --- apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv6 namespace: metallb-system spec: peerAddress: 2620:52:0:88::104 peerASN: 64500 myASN: 64500",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-community spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: Community metadata: name: community1 namespace: metallb-system spec: communities: - name: NO_ADVERTISE value: '65535:65282'",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-bgp-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgp-community-sample namespace: metallb-system spec: aggregationLength: 32 aggregationLengthV6: 128 communities: - NO_ADVERTISE 1 ipAddressPools: - doc-example-bgp-community peers: - doc-example-peer",
"oc apply -f bgpadvertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: doc-example-bfd-profile-full namespace: metallb-system spec: receiveInterval: 300 transmitInterval: 300 detectMultiplier: 3 echoMode: false passiveMode: true minimumTtl: 254",
"oc apply -f bfdprofile.yaml",
"apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address>",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for \"default/invalid-request\": \"4.3.2.1\" is not allowed in config",
"apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer",
"apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer",
"apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: \"web-server-svc\" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: \"web-server-svc\" 5 spec: ports: - name: https port: 443 6 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 7 type: LoadBalancer loadBalancerIP: 172.31.249.7 8",
"oc apply -f <service_name>.yaml",
"service/<service_name> created",
"oc describe service <service_name>",
"Name: <service_name> Namespace: default Labels: <none> Annotations: metallb.universe.tf/address-pool: doc-example <.> Selector: app=service_name Type: LoadBalancer <.> IP Family Policy: SingleStack IP Families: IPv4 IP: 10.105.237.254 IPs: 10.105.237.254 LoadBalancer Ingress: 192.168.100.5 <.> Port: <unset> 80/TCP TargetPort: 8080/TCP NodePort: <unset> 30550/TCP Endpoints: 10.244.0.50:8080 Session Affinity: None External Traffic Policy: Cluster Events: <.> Type Reason Age From Message ---- ------ ---- ---- ------- Normal nodeAssigned 32m (x2 over 32m) metallb-speaker announcing from node \"<node_name>\"",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc replace -f setdebugloglevel.yaml",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s",
"oc logs -n metallb-system speaker-7m4qw -c speaker",
"{\"branch\":\"main\",\"caller\":\"main.go:92\",\"commit\":\"3d052535\",\"goversion\":\"gc / go1.17.1 / amd64\",\"level\":\"info\",\"msg\":\"MetalLB speaker starting (commit 3d052535, branch main)\",\"ts\":\"2022-05-17T09:55:05Z\",\"version\":\"\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created NDP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created NDP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} I0517 09:55:06.515686 95 request.go:665] Waited for 1.026500832s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/operators.coreos.com/v1alpha1?timeout=32s {\"Starting Manager\":\"(MISSING)\",\"caller\":\"k8s.go:389\",\"level\":\"info\",\"ts\":\"2022-05-17T09:55:08Z\"} {\"caller\":\"speakerlist.go:310\",\"level\":\"info\",\"msg\":\"node event - forcing sync\",\"node addr\":\"10.0.128.4\",\"node event\":\"NodeJoin\",\"node name\":\"ci-ln-qb8t3mb-72292-7s7rh-worker-a-vvznj\",\"ts\":\"2022-05-17T09:55:08Z\"} {\"caller\":\"service_controller.go:113\",\"controller\":\"ServiceReconciler\",\"enqueueing\":\"openshift-kube-controller-manager-operator/metrics\",\"epslice\":\"{\\\"metadata\\\":{\\\"name\\\":\\\"metrics-xtsxr\\\",\\\"generateName\\\":\\\"metrics-\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"uid\\\":\\\"ac6766d7-8504-492c-9d1e-4ae8897990ad\\\",\\\"resourceVersion\\\":\\\"9041\\\",\\\"generation\\\":4,\\\"creationTimestamp\\\":\\\"2022-05-17T07:16:53Z\\\",\\\"labels\\\":{\\\"app\\\":\\\"kube-controller-manager-operator\\\",\\\"endpointslice.kubernetes.io/managed-by\\\":\\\"endpointslice-controller.k8s.io\\\",\\\"kubernetes.io/service-name\\\":\\\"metrics\\\"},\\\"annotations\\\":{\\\"endpoints.kubernetes.io/last-change-trigger-time\\\":\\\"2022-05-17T07:21:34Z\\\"},\\\"ownerReferences\\\":[{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"name\\\":\\\"metrics\\\",\\\"uid\\\":\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\",\\\"controller\\\":true,\\\"blockOwnerDeletion\\\":true}],\\\"managedFields\\\":[{\\\"manager\\\":\\\"kube-controller-manager\\\",\\\"operation\\\":\\\"Update\\\",\\\"apiVersion\\\":\\\"discovery.k8s.io/v1\\\",\\\"time\\\":\\\"2022-05-17T07:20:02Z\\\",\\\"fieldsType\\\":\\\"FieldsV1\\\",\\\"fieldsV1\\\":{\\\"f:addressType\\\":{},\\\"f:endpoints\\\":{},\\\"f:metadata\\\":{\\\"f:annotations\\\":{\\\".\\\":{},\\\"f:endpoints.kubernetes.io/last-change-trigger-time\\\":{}},\\\"f:generateName\\\":{},\\\"f:labels\\\":{\\\".\\\":{},\\\"f:app\\\":{},\\\"f:endpointslice.kubernetes.io/managed-by\\\":{},\\\"f:kubernetes.io/service-name\\\":{}},\\\"f:ownerReferences\\\":{\\\".\\\":{},\\\"k:{\\\\\\\"uid\\\\\\\":\\\\\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\\\\\"}\\\":{}}},\\\"f:ports\\\":{}}}]},\\\"addressType\\\":\\\"IPv4\\\",\\\"endpoints\\\":[{\\\"addresses\\\":[\\\"10.129.0.7\\\"],\\\"conditions\\\":{\\\"ready\\\":true,\\\"serving\\\":true,\\\"terminating\\\":false},\\\"targetRef\\\":{\\\"kind\\\":\\\"Pod\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"name\\\":\\\"kube-controller-manager-operator-6b98b89ddd-8d4nf\\\",\\\"uid\\\":\\\"dd5139b8-e41c-4946-a31b-1a629314e844\\\",\\\"resourceVersion\\\":\\\"9038\\\"},\\\"nodeName\\\":\\\"ci-ln-qb8t3mb-72292-7s7rh-master-0\\\",\\\"zone\\\":\\\"us-central1-a\\\"}],\\\"ports\\\":[{\\\"name\\\":\\\"https\\\",\\\"protocol\\\":\\\"TCP\\\",\\\"port\\\":8443}]}\",\"level\":\"debug\",\"ts\":\"2022-05-17T09:55:08Z\"}",
"oc logs -n metallb-system speaker-7m4qw -c frr",
"Started watchfrr 2022/05/17 09:55:05 ZEBRA: client 16 says hello and bids fair to announce only bgp routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 31 says hello and bids fair to announce only vnc routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 38 says hello and bids fair to announce only static routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 43 says hello and bids fair to announce only bfd routes vrf=0 2022/05/17 09:57:25.089 BGP: Creating Default VRF, AS 64500 2022/05/17 09:57:25.090 BGP: dup addr detect enable max_moves 5 time 180 freeze disable freeze_time 0 2022/05/17 09:57:25.090 BGP: bgp_get: Registering BGP instance (null) to zebra 2022/05/17 09:57:25.090 BGP: Registering VRF 0 2022/05/17 09:57:25.091 BGP: Rx Router Id update VRF 0 Id 10.131.0.1/32 2022/05/17 09:57:25.091 BGP: RID change : vrf VRF default(0), RTR ID 10.131.0.1 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF br0 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ens4 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr 10.0.128.4/32 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr fe80::c9d:84da:4d86:5618/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF lo 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ovs-system 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF tun0 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr 10.131.0.1/23 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr fe80::40f1:d1ff:feb6:5322/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2da49fed 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2da49fed addr fe80::24bd:d1ff:fec1:d88/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2fa08c8c 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2fa08c8c addr fe80::6870:ff:fe96:efc8/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth41e356b7 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth41e356b7 addr fe80::48ff:37ff:fede:eb4b/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth1295c6e2 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth1295c6e2 addr fe80::b827:a2ff:feed:637/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth9733c6dc 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth9733c6dc addr fe80::3cf4:15ff:fe11:e541/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth336680ea 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth336680ea addr fe80::94b1:8bff:fe7e:488c/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vetha0a907b7 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vetha0a907b7 addr fe80::3855:a6ff:fe73:46c3/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf35a4398 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf35a4398 addr fe80::40ef:2fff:fe57:4c4d/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf831b7f4 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf831b7f4 addr fe80::f0d9:89ff:fe7c:1d32/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vxlan_sys_4789 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vxlan_sys_4789 addr fe80::80c1:82ff:fe4b:f078/64 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Timer (start timer expire). 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] BGP_Start (Idle->Connect), fd -1 2022/05/17 09:57:26.094 BGP: Allocated bnc 10.0.0.1/32(0)(VRF default) peer 0x7f807f7631a0 2022/05/17 09:57:26.094 BGP: sendmsg_zebra_rnh: sending cmd ZEBRA_NEXTHOP_REGISTER for 10.0.0.1/32 (vrf VRF default) 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Waiting for NHT 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Connect established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Idle to Connect 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] TCP_connection_open_failed (Connect->Active), fd -1 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Active established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Connect to Active 2022/05/17 09:57:26.094 ZEBRA: rnh_register msg from client bgp: hdr->length=8, type=nexthop vrf=0 2022/05/17 09:57:26.094 ZEBRA: 0: Add RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: Evaluate RNH, type Nexthop (force) 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: NH has become unresolved 2022/05/17 09:57:26.094 ZEBRA: 0: Client bgp registers for RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 BGP: VRF default(0): Rcvd NH update 10.0.0.1/32(0) - metric 0/0 #nhops 0/0 flags 0x6 2022/05/17 09:57:26.094 BGP: NH update for 10.0.0.1/32(0)(VRF default) - flags 0x6 chgflags 0x0 - evaluate paths 2022/05/17 09:57:26.094 BGP: evaluate_paths: Updating peer (10.0.0.1(VRF default)) status with NHT 2022/05/17 09:57:30.081 ZEBRA: Event driven route-map update triggered 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-out 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-in 2022/05/17 09:57:31.104 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.104 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring 2022/05/17 09:57:31.105 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.105 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show running-config\"",
"Building configuration Current configuration: ! frr version 7.5.1_git frr defaults traditional hostname some-hostname log file /etc/frr/frr.log informational log timestamp precision 3 service integrated-vtysh-config ! router bgp 64500 1 bgp router-id 10.0.1.2 no bgp ebgp-requires-policy no bgp default ipv4-unicast no bgp network import-check neighbor 10.0.2.3 remote-as 64500 2 neighbor 10.0.2.3 bfd profile doc-example-bfd-profile-full 3 neighbor 10.0.2.3 timers 5 15 neighbor 10.0.2.4 remote-as 64500 4 neighbor 10.0.2.4 bfd profile doc-example-bfd-profile-full 5 neighbor 10.0.2.4 timers 5 15 ! address-family ipv4 unicast network 203.0.113.200/30 6 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! address-family ipv6 unicast network fc00:f853:ccd:e799::/124 7 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! route-map 10.0.2.3-in deny 20 ! route-map 10.0.2.4-in deny 20 ! ip nht resolve-via-default ! ipv6 nht resolve-via-default ! line vty ! bfd profile doc-example-bfd-profile-full 8 transmit-interval 35 receive-interval 35 passive-mode echo-mode echo-interval 35 minimum-ttl 10 ! ! end",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp summary\"",
"IPv4 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 0 1 1 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 2 Total number of neighbors 2 IPv6 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 NoNeg 3 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 4 Total number of neighbors 2",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp ipv4 unicast 203.0.113.200/30\"",
"BGP routing table entry for 203.0.113.200/30 Paths: (1 available, best #1, table default) Advertised to non peer-group peers: 10.0.2.3 <.> Local 0.0.0.0 from 0.0.0.0 (10.0.1.2) Origin IGP, metric 0, weight 32768, valid, sourced, local, best (First path received) Last update: Mon Jan 10 19:49:07 2022",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bfd peers brief\"",
"Session count: 2 SessionId LocalAddress PeerAddress Status ========= ============ =========== ====== 3909139637 10.0.1.2 10.0.2.3 up <.>",
"pod_network_name_info{interface=\"net0\",namespace=\"namespacename\",network_name=\"nadnamespace/firstNAD\",pod=\"podname\"} 0",
"(container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name)"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/networking/index |
Chapter 4. New features | Chapter 4. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 8.5. 4.1. Installer and image creation RHEL for Edge now supports a Simplified Installer This enhancement enables Image Builder to build the RHEL for Edge Simplified Installer ( edge-simplified-installer ) and RHEL for Edge Raw Images ( edge-raw-image ). RHEL for Edge Simplified Installer enables you to specify a new blueprint option, installation_device and thus, perform an unattended installation to a device. To create the raw image, you must provide an existing OSTree commit. It results in a raw image with the existing commit deployed in it. The installer will use this raw image to the specified installation device. Additionally, you can also use Image Builder to build RHEL for Edge Raw Images. These are compressed raw images that contain a partition layout with an existing deployed OSTree commit in it. You can install the RHEL for Edge Raw Images to flash on a hard drive or booted in a virtual machine. ( BZ#1937854 ) Warnings for deprecated kernel boot arguments Anaconda boot arguments without the inst. prefix (for example, ks , stage2 , repo and so on) are deprecated starting RHEL7. These arguments will be removed in the major RHEL release. With this release, appropriate warning messages are displayed when the boot arguments are used without the inst prefix. The warning messages are displayed in dracut when booting the installation and also when the installation program is started on a terminal. Following is a sample warning message that is displayed on a terminal: Deprecated boot argument ks must be used with the inst. prefix. Please use inst.ks instead. Anaconda boot arguments without inst. prefix have been deprecated and will be removed in a future major release. Following is a sample warning message that is displayed in dracut : ks has been deprecated. All usage of Anaconda boot arguments without the inst. prefix have been deprecated and will be removed in a future major release. Please use inst.ks instead. ( BZ#1897657 ) Red Hat Connector is now fully supported You can connect the system using Red Hat Connector ( rhc ). Red Hat Connector consists of a command-line interface and a daemon that allow users to execute Insights remediation playbook directly on their host within the web user interface of Insights (console.redhat.com). Red Hat Connector was available as a Technology Preview in RHEL 8.4 and as of RHEL 8.5, it is fully supported. ( BZ#1957316 ) Ability to override official repositories available By default, the osbuild-composer backend has its own set of official repositories defined in the /usr/share/osbuild-composer/repositories directory. Consequently, it does not inherit the system repositories located in the /etc/yum.repos.d/ directory. You can now override the official repositories. To do that, define overrides in the /etc/osbuild-composer/repositories and, as a result, the files located there take precedence over those in the /usr directory. ( BZ#1915351 ) Image Builder now supports filesystem configuration With this enhancement, you can specify custom filesystem configuration in your blueprints and you can create images with the desired disk layout. As a result, by having non-default layouts, you can benefit from security benchmarks, consistency with existing setups, performance, and protection against out-of-disk errors. To customize the filesystem configuration in your blueprint, set the following customization: ( BZ#2011448 ) Image Builder now supports creating bootable installer images With this enhancement, you can use Image Builder to create bootable ISO images that consist of a tarball file, which contains a root file system. As a result, you can use the bootable ISO image to install the tarball file system to a bare metal system. ( BZ#2019318 ) 4.2. RHEL for Edge Greenboot services now enabled by default Previously, the greenboot services were not present in the default presets so, when the greenboot package was installed, users had to manually enable these greenboot services. With this update, the greenboot services are now present in the default presets configuration and users are no longer required to manually enable it. ( BZ#1935177 ) 4.3. Software management RPM now has read-only support for the sqlite database backend The ability to query an RPM database based on sqlite may be desired when inspecting other root directories, such as containers.This update adds read-only support for the RPM sqlite database backend. As a result, it is now possible to query packages installed in a UBI 9 or Fedora container from the host RHEL 8. To do that with Podman: Mount the container's file system with the podman mount command. Run the rpm -qa command with the --root option pointing to the mounted location. Note that RPM on RHEL 8 still uses the BerkeleyDB database ( bdb ) backend. ( BZ#1938928 ) libmodulemd rebased to version 2.12.1 The libmodulemd packages have been rebased to version 2.12.1. Notable changes include: Added support for version 1 of the modulemd-obsoletes document type, which provides information about a stream obsoleting another one, or a stream reaching its end of life. Added support for version 3 of the modulemd-packager document type, which provides a packager description of a module stream content for a module build system. Added support for the static_context attribute of the version 2 modulemd document type. With that, a module context is now defined by a packager instead of being generated by a module build system. Now, a module stream value is always serialized as a quoted string. ( BZ#1894573 ) libmodulemd rebased to version 2.13.0 The libmodulemd packages have been rebased to version 2.13.0, which provides the following notable changes over the version: Added support for delisting demodularized packages from a module. Added support for validating modulemd-packager-v3 documents with a new --type option of the modulemd-validator tool. Fortified parsing integers. Fixed various modulemd-validator issues. ( BZ#1984402 ) sslverifystatus has been added to dnf configuration With this update, when sslverifystatus option is enabled, dnf checks each server certificate revocation status using the Certificate Status Request TLS extension (OCSP stapling). As a result, when a revoked certificate is encountered, dnf refuses to download from its server. ( BZ#1814383 ) 4.4. Shells and command-line tools ReaR has been updated to version 2.6 Relax-and-Recover (ReaR) has been updated to version 2.6. Notable bug fixes and enhancements include: Added support for eMMC devices. By default, all kernel modules are included in the rescue system. To include specific modules, set the MODULES array variable in the configuration file as: MODULES=( mod1 mod2 ) On the AMD and Intel 64-bit architectures and on IBM Power Systems, Little Endian, a new configuration variable GRUB2_INSTALL_DEVICES is introduced to control the location of the bootloader installation. See the description in /usr/share/rear/conf/default.conf for more details. Improved backup of multipath devices. Files under /media , /run , /mnt , /tmp are automatically excluded from backups as these directories are known to contain removable media or temporary files. See the description of the AUTOEXCLUDE_PATH variable in /usr/share/rear/conf/default.conf . CLONE_ALL_USERS_GROUPS=true is now the default. See the description in /usr/share/rear/conf/default.conf for more details. ( BZ#1988493 ) The modulemd-tools package is now available With this update, the modulemd-tools package has been introduced which provides tools for parsing and generating modulemd YAML files. To install modulemd-tools , use: (BZ#1924850) opencryptoki rebased to version 3.16.0 opencryptoki has been upgraded to version 3.16.0. Notable bug fixes and enhancements include: Improved the protected-key option and support for the attribute-bound keys in the EP11 core processor. Improved the import and export of secure key objects in the cycle-count-accurate (CCA) processor. (BZ#1919223) lsvpd rebased to version 1.7.12 lsvpd has been upgraded to version 1.7.12. Notable bug fixes and enhancements include: Added the UUID property in sysvpd . Improved the NVMe firmware version. Fixed PCI device manufacturer parsing logic. Added recommends clause to the lsvpd configuration file. (BZ#1844428) ppc64-diag rebased to version 2.7.7 ppc64-diag has been upgraded to version 2.7.7. Notable bug fixes and enhancements include: Improved unit test cases. Added the UUID property in sysvpd . The rtas_errd service does not run in the Linux containers. The obsolete logging options are no longer available in the systemd service files. (BZ#1779206) The ipmi_power and ipmi_boot modules are available in the redhat.rhel_mgmt Collection This update provides support to the Intelligent Platform Management Interface ( IPMI ) Ansible modules. IPMI is a specification for a set of management interfaces to communicate with baseboard management controller (BMC) devices. The IPMI modules - ipmi_power and ipmi_boot - are available in the redhat.rhel_mgmt Collection, which you can obtain by installing the ansible-collection-redhat-rhel_mgmt package. (BZ#1843859) udftools 2.3 are now added to RHEL The udftools packages provide user-space utilities for manipulating Universal Disk Format (UDF) file systems. With this enhancement, udftools provides the following set of tools: cdrwtool - It performs actions like blank, format, quick setup, and write to the DVD-R/CD-R/CD-RW media. mkfs.udf , mkudffs - It creates a Universal Disk Format (UDF) filesystem. pktsetup - It sets up and tears down the packet device. udfinfo - It shows information about the Universal Disk Format (UDF) file system. udflabel - It shows or changes the Universal Disk Format (UDF) file system label. wrudf - It provides an interactive shell with cp , rm , mkdir , rmdir , ls , and cd operations on the existing Universal Disk Format (UDF) file system. (BZ#1882531) Tesseract 4.1.1 is now present in RHEL 8.5 Tesseract is an open-source OCR (optical character reading) engine and has the following features: Starting with tesseract version 4, character recognition is based on Long Short-Term Memory (LSTM) neural networks. Supports UTF-8. Supports plain text, hOCR (HTML), PDF, and TSV output formats. ( BZ#1826085 ) Errors when restoring LVM with thin pools do not happen anymore With this enhancement, ReaR now detects when thin pools and other logical volume types with kernel metadata (for example, RAIDs and caches) are used in a volume group (VG) and switches to a mode where it recreates all the logical volumes (LVs) in the VG using lvcreate commands. Therefore, LVM with thin pools are restored without any errors. Note This new method does not preserve all the LV properties, for example LVM UUIDs. A restore from the backup should be tested before using ReaR in a Production environment in order to determine whether the recreated storage layout matches the requirements. ( BZ#1747468 ) Net-SNMP now detects RSA and ECC certificates Previously, Net-Simple Network Management Protocol (Net-SNMP) detected only Rivest, Shamir, Adleman (RSA) certificates. This enhancement adds support for Elliptic Curve Cryptography (ECC). As a result, Net-SNMP now detects RSA and ECC certificates. ( BZ#1919714 ) FCoE option is changed to rd.fcoe Previously, the man page for dracut.cmdline documented rd.nofcoe=0 as the command to turn off Fibre Channel over Ethernet (FCoE). With this update, the command is changed to rd.fcoe . To disable FCoE, run the command rd.fcoe=0 . For further information on FCoE see, Configuring Fibre Channel over Ethernet ( BZ#1929201 ) 4.5. Infrastructure services linuxptp rebased to version 3.1 The linuxptp package has been updated to version 3.1. Notable bug fixes and enhancements include: Added ts2phc program for synchronization of Precision Time Protocol (PTP) hardware clock to Pulse Per Second (PPS) signal. Added support for the automotive profile. Added support for client event monitoring. ( BZ#1895005 ) chrony rebased to version 4.1 chrony has been updated to version 4.1. Notable bug fixes and enhancements include: Added support for Network Time Security (NTS) authentication. For more information, see Overview of Network Time Security (NTS) in chrony . By default, the Authenticated Network Time Protocol (NTP) sources are trusted over non-authenticated NTP sources. Add the autselectmode ignore argument in the chrony.conf file to restore the original behavior. The support for authentication with RIPEMD keys - RMD128 , RMD160 , RMD256 , RMD320 is no longer available. The support for long non-standard MACs in NTPv4 packets is no longer available. If you are using chrony 2.x , non-MD5/SHA1 keys, you need to configure chrony with the version 3 option. ( BZ#1895003 ) PowerTop rebased to version 2.14 PowerTop has been upgraded to version 2.14. This is an update adding Alder Lake, Sapphire Rapids, and Rocket Lake platforms support. (BZ#1834722) TuneD now moves unnecessary IRQs to housekeeping CPUs Network device drivers like i40e , iavf , mlx5 , evaluate the online CPUs to determine the number of queues and hence the MSIX vectors to be created. In low-latency environments with a large number of isolated and very few housekeeping CPUs, when TuneD tries to move these device IRQs to the housekeeping CPUs it fails due to the per CPU vector limit. With this enhancement, TuneD explicitly adjusts the numbers of network device channels (and hence MSIX vectors) as per the housekeeping CPUs. Therefore, all the device IRQs can now be moved on the housekeeping CPUs to achieve low latency. (BZ#1951992) 4.6. Security libreswan rebased to 4.4 The libreswan packages have been upgraded to upstream version 4.4, which introduces many enhancements and bug fixes. Most notably: The IKEv2 protocol: Introduced fixes for TCP encapsulation in Transport Mode and host-to-host connections. Added the --globalstatus option to the ipsec whack command for displaying redirect statistics. The vhost and vnet values in the ipsec.conf configuration file are no longer allowed for IKEv2 connections. The pluto IKE daemon: Introduced fixes for host-to-host connections that use non-standard IKE ports. Added peer ID ( IKEv2 IDr or IKEv1 Aggr ) to select the best initial connection. Disabled the interface-ip= option because Libreswan does not provide the corresponding functionality yet. Fixed the PLUTO_PEER_CLIENT variable in the ipsec__updown script for NAT in Transport Mode . Set the PLUTO_CONNECTION_TYPE variable to transport or tunnel . Non-templated wildcard ID connections can now match. (BZ#1958968) GnuTLS rebased to 3.6.16 The gnutls packages have been updated to version 3.6.16. Notable bug fixes and enhancements include: The gnutls_x509_crt_export2() function now returns 0 instead of the size of the internal base64 blob in case of success. This aligns with the documentation in the gnutls_x509_crt_export2(3) man page. Certificate verification failures due to the Online Certificate Status Protocol (OCSP) must-stapling not being followed are now correctly marked with the GNUTLS_CERT_INVALID flag. Previously, even when TLS 1.2 was explicitly disabled through the -VERS-TLS1.2 option, the server still offered TLS 1.2 if TLS 1.3 was enabled. The version negotiation has been fixed, and TLS 1.2 can now be correctly disabled. (BZ#1956783) socat rebased to 1.7.4 The socat packages have been upgraded from version 1.7.3 to 1.7.4, which provides many bug fixes and improvements. Most notably: GOPEN and UNIX-CLIENT addresses now support SEQPACKET sockets. The generic setsockopt-int and related options are, in the case of listening or accepting addresses, applied to the connected sockets. To enable setting options on a listening socket, the setsockopt-listen option is now available. Added the -r and -R options for a raw dump of transferred data to a file. Added the ip-transparent option and the IP_TRANSPARENT socket option. OPENSSL-CONNECT now automatically uses the SNI feature and the openssl-no-sni option turns SNI off. The openssl-snihost option overrides the value of the openssl-commonname option or the server name. Added the accept-timeout and listen-timeout options. Added the ip-add-source-membership option. UDP-DATAGRAM address now does not check peer port of replies as it did in 1.7.3. Use the sourceport optioon if your scenario requires the behavior. New proxy-authorization-file option reads PROXY-CONNECT credentials from a file and enables to hide this data from the process table. Added AF_VSOCK support for VSOCK-CONNECT and VSOCK-LISTEN addresses. ( BZ#1947338 ) crypto-policies rebased to 20210617 The crypto-policies packages have been upgraded to upstream version 20210617, which provides a number of enhancements and bug fixes over the version, most notably: You can now use scoped policies to enable different sets of algorithms for different back ends. Each configuration directive can now be limited to specific protocols, libraries, or services. For a complete list of available scopes and details on the new syntax, see the crypto-policies(7) man page. For example, the following directive allows using AES-256-CBC cipher with the SSH protocol, impacting both the libssh library and the OpenSSH suite: Directives can now use asterisks for specifying multiple values using wildcards. For example, the following directive disables all CBC mode ciphers for applications using libssh : Note that future updates can introduce new algorithms matched by the current wildcards. ( BZ#1960266 ) crypto-policies now support AES-192 ciphers in custom policies The system-wide cryptographic policies now support the following values for the cipher option in custom policies and subpolicies: AES-192-GCM , AES-192-CCM , AES-192-CTR , and AES-192-CBC . As a result, you can enable the AES-192-GCM and AES-192-CBC ciphers for the Libreswan application and the AES-192-CTR and AES-192-CBC ciphers for the libssh library and the OpenSSH suite through crypto-policies . (BZ#1876846) CBC ciphers disabled in the FUTURE cryptographic policy This update of the crypto-policies packages disables ciphers that use cipher block chaining (CBC) mode in the FUTURE policy. The settings in FUTURE should withstand near-term future attacks, and this change reflects the current progress. As a result, system components respecting crypto-policies cannot use CBC mode when the FUTURE policy is active. (BZ#1933016) Adding new kernel AVC tracepoint With this enhancement, a new avc:selinux_audited kernel tracepoint is added that triggers when an SELinux denial is to be audited. This feature allows for more convenient low-level debugging of SELinux denials. The new tracepoint is available for tools such as perf . (BZ#1954024) New ACSC ISM profile in the SCAP Security Guide The scap-security-guide packages now provide the Australian Cyber Security Centre (ACSC) Information Security Manual (ISM) compliance profile and a corresponding Kickstart file. With this enhancement, you can install a system that conforms with this security baseline and use the OpenSCAP suite for checking security compliance and remediation using the risk-based approach for security controls defined by ACSC. (BZ#1955373) SCAP Security Guide rebased to 0.1.57 The scap-security-guide packages have been rebased to upstream version 0.1.57, which provides several bug fixes and improvements. Most notably: The Australian Cyber Security Centre ( ACSC ) Information Security Manual ( ISM ) profile has been introduced. The profile extends the Essential Eight profile and adds more security controls defined in the ISM. The Center for Internet Security ( CIS ) profile has been restructured into four different profiles respecting levels of hardening and system type (server and workstation) as defined in the official CIS benchmarks. The Security Technical Implementation Guide ( STIG ) security profile has been updated, and implements rules from the recently-released version V1R3. The Security Technical Implementation Guide with GUI ( STIG with GUI ) security profile has been introduced. The profile derives from the STIG profile and is compatible with RHEL installations that select the Server with GUI package selection. The ANSSI High level profile, which is based on the ANSSI BP-028 recommendations from the French National Security Agency (ANSSI), has been introduced. This contains a profile implementing rules of High hardening levels. ( BZ#1966577 ) OpenSCAP rebased to 1.3.5 The OpenSCAP packages have been rebased to upstream version 1.3.5. Notable fixes and enhancements include: Enabled Schematron-based validation by default for the validate command of oval and xccdf modules. Added SCAP 1.3 source data stream Schematron. Added XML signature validation. Allowed clamping mtime to SOURCE_DATE_EPOCH . Added severity and role attributes. Support for requires and conflicts elements of the Rule and Group (XCCDF). Kubernetes remediation in the HTML report. Handling gpfs , proc and sysfs file systems as non-local. Fixed handling of common options styled as --arg=val . Fixed behavior of the StateType operator. Namespace ignored in XPath expressions ( xmlfilecontent ) to allow for incomplete XPath queries. Fixed a problem that led to a warning about the presence of obtrusive data. Fixed multiple segfaults and a broken test in the --stig-viewer feature. Fixed the TestResult/benchmark/@href attribute. Fixed many memory management issues. Fixed many memory leaks. ( BZ#1953092 ) Validation of digitally signed SCAP source data streams To conform with the Security Content Automation Protocol (SCAP) 1.3 specifications, OpenSCAP now validates digital signatures of digitally signed SCAP source data streams. As a result, OpenSCAP validates the digital signature when evaluating a digitally signed SCAP source data stream. The signature validation is performed automatically while loading the file. Data streams with invalid signatures are rejected, and OpenSCAP does not evaluate their content. OpenSCAP uses the XML Security Library with the OpenSSL cryptography library to validate the digital signature. You can skip the signature validation by adding the --skip-signature-validation option to the oscap xccdf eval command. Important OpenSCAP does not address the trustworthiness of certificates or public keys that are part of the KeyInfo signature element and that are used to verify the signature. You should verify such keys by yourselves to prevent evaluation of data streams that have been modified and signed by bad actors. ( BZ#1966612 ) New DISA STIG profile compatible with Server with GUI installations A new profile, DISA STIG with GUI , has been added to the SCAP Security Guide . This profile is derived from the DISA STIG profile and is compatible with RHEL installations that selected the Server with GUI package group. The previously existing stig profile was not compatible with Server with GUI because DISA STIG demands uninstalling any Graphical User Interface. However, this can be overridden if properly documented by a Security Officer during evaluation. As a result, the new profile helps when installing a RHEL system as a Server with GUI aligned with the DISA STIG profile. ( BZ#1970137 ) STIG security profile updated to version V1R3 The DISA STIG for Red Hat Enterprise Linux 8 profile in the SCAP Security Guide has been updated to align with the latest version V1R3 . The profile is now also more stable and better aligns with the RHEL 8 STIG (Security Technical Implementation Guide) manual benchmark provided by the Defense Information Systems Agency (DISA). This second iteration brings approximately 90% of coverage with regards to the STIG. You should use only the current version of this profile because older versions are no longer valid. Warning Automatic remediation might render the system non-functional. Run the remediation in a test environment first. ( BZ#1993056 ) Three new CIS profiles in SCAP Security Guide Three new compliance profiles aligned with the Center for Internet Security (CIS) Red Hat Enterprise Linux 8 Benchmark have been introduced to the SCAP Security Guide. The CIS RHEL 8 Benchmark provides different configuration recommendations for "Server" and "Workstation" deployments, and defines two levels of configuration, "level 1" and "level 2" for each deployment. The CIS profile previously shipped in RHEL8 represented only the "Server Level 2". The three new profiles complete the scope of the CIS RHEL8 Benchmark profiles, and you can now more easily evaluate your system against CIS recommendations. All currently available CIS RHEL 8 profiles are: Workstation Level 1 xccdf_org.ssgproject.content_profile_cis_workstation_l1 Workstation Level 2 xccdf_org.ssgproject.content_profile_cis_workstation_l2 Server Level 1 xccdf_org.ssgproject.content_profile_cis_server_l1 Server Level 2 xccdf_org.ssgproject.content_profile_cis ( BZ#1993197 ) Performance of remediations for Audit improved by grouping similar system calls Previously, Audit remediations generated an individual rule for each system call audited by the profile. This led to large numbers of audit rules, which degraded performance. With this enhancement, remediations for Audit can group rules for similar system calls with identical fields together into a single rule, which improves performance. Examples of system calls grouped together: ( BZ#1876483 ) Added profile for ANSSI-BP-028 High level The ANSSI High level profile, based on the ANSSI BP-028 recommendations from the French National Security Agency (ANSSI), has been introduced. This completes the availability of profiles for all ANSSI-BP-028 v1.2 hardening levels in the SCAP Security Guide . With the new profile, you can harden the system to the recommendations from ANSSI for GNU/Linux Systems at the High hardening level. As a result, you can configure and automate compliance of your RHEL 8 systems to the strictest hardening level by using the ANSSI Ansible Playbooks and the ANSSI SCAP profiles. ( BZ#1955183 ) OpenSSL added for encrypting Rsyslog TCP and RELP traffic The OpenSSL network stream driver has been added to Rsyslog. This driver implements TLS-protected transport using the OpenSSL library. This provides additional functionality compared to the stream driver using the GnuTLS library. As a result, you can now use either OpenSSL or GnuTLS as an Rsyslog network stream driver. ( BZ#1891458 ) Rsyslog rebased to 8.2102.0-5 The rsyslog packages have been rebased to upstream version 8.2102.0-5, which provides the following notable changes over the version: Added the exists() script function to check whether a variable exists or not, for example USD!path!var . Added support for setting OpenSSL configuration commands with a new configuration parameter tls.tlscfgcmd for the omrelp and imrelp modules. Added new rate-limit options to the omfwd module for rate-limiting syslog messages sent to the remote server: ratelimit.interval specifies the rate-limiting interval in seconds. ratelimit.burst specifies the rate-limiting burst in the number of messages. Rewritten the immark module with various improvements. Added the max sessions config parameter to the imptcp module. The maximum is measured per instance, not globally across all instances. Added the rsyslog-openssl subpackage; this network stream driver implements TLS-protected transport using the OpenSSL library. Added per-minute rate limiting to the imfile module with the MaxBytesPerMinute and MaxLinesPerMinute options. These options accept integer values and limit the number of bytes or lines that may be sent in a minute. Added support to the imtcp and omfwd module to configure a maximum depth for the certificate chain verification with the streamdriver.TlsVerifyDepth option. ( BZ#1932795 ) 4.7. Networking Support for pause parameter of ethtool in NetworkManager Non auto-pause parameters need to be set explicitly on a specific network interface in certain cases. Previously, NetworkManager could not pause the control flow parameters of ethtool in nmstate . To disable the auto negotiation of the pause parameter and enable RX/TX pause support explicitly, use the following command: ( BZ#1899372 ) New property in NetworkManager for setting physical and virtual interfaces in promiscuous mode With this update the 802-3-ethernet.accept-all-mac-addresses property has been added to NetworkManager for setting physical and virtual interfaces in the accept all MAC addresses mode. With this update, the kernel can accept network packages targeting current interfaces' MAC address in the accept all MAC addresses mode. To enable accept all MAC addresses mode on eth1 , use the following command: ( BZ#1942331 ) NetworkManager rebased to version 1.32.10 The NetworkManager packages have been upgraded to upstream version 1.32.10, which provides a number of enhancements and bug fixes over the version. For further information about notable changes, read the upstream release notes for this version. ( BZ#1934465 ) NetworkManager now supports nftables as firewall back end This enhancement adds support for the nftables firewall framework to NetworkManager. To switch the default back end from iptables to nftables : Create the /etc/NetworkManager/conf.d/99-firewall-backend.conf file with the following content: Reload the NetworkManager service. (BZ#1548825) firewalld rebased to version 0.9.3 The firewalld packages have been upgraded to upstream version 0.9.3, which provides a number of enhancements and bug fixes over the version. For further details, see the upstream release notes: firewalld 0.9.3 Release Notes firewalld 0.9.2 Release Notes firewalld 0.8.6 Release Notes firewalld 0.8.5 Release Notes firewalld 0.8.4 Release Notes ( BZ#1872702 ) The firewalld policy objects feature is now available Previously, you could not use firewalld to filter traffic flowing between virtual machines, containers, and zones. With this update, the firewalld policy objects feature has been introduced, which provides forward and output filtering in firewalld . (BZ#1492722) Multipath TCP is now fully supported Starting with RHEL 8.5, Multipath TCP (MPTCP) is fully supported. MPTCP improves resource usage within the network and resilience to network failure. For example, with Multipath TCP on the RHEL server, smartphones with MPTCP v1 enabled can connect to an application running on the server and switch between Wi-Fi and cellular networks without interrupting the connection to the server. RHEL 8.5 introduced additional features, such as: Multiple concurrent active substreams Active-backup support Improved stream performances Better memory usage, with receive and send buffer auto-tuning SYN cookie support Note that either the applications running on the server must natively support MPTCP or administrators must load an eBPF program into the kernel to dynamically change IPPROTO_TCP to IPPROTO_MPTCP . For further details see, Getting started with Multipath TCP . (JIRA:RHELPLAN-57712) Alternative network interface naming is now available in RHEL Alternative interface naming is the RHEL kernel configuration, which provides the following networking benefits: Network interface card (NIC) names can have arbitrary length. One NIC can have multiple names at the same time. Usage of alternative names as handles for commands. (BZ#2164986) 4.8. Kernel Kernel version in RHEL 8.5 Red Hat Enterprise Linux 8.5 is distributed with the kernel version 4.18.0-348. ( BZ#1839151 ) EDAC for Intel Sapphire Rapids processors is now supported This enhancement provides Error Detection And Correction (EDAC) device support for Intel Sapphire Rapids processors. EDAC mainly handles Error Code Correction (ECC) memory and detects and reports PCI bus parity errors. (BZ#1837389) The bpftrace package rebased to version 0.12.1 The bpftrace package has been upgraded to version 0.12.1, which provides multiple bug fixes and enhancements. Notable changes over versions include: Added the new builtin path, which is a new reliable method to display the full path from a path structure. Added wildcard support for kfunc probes and tracepoint categories. ( BZ#1944716 ) vmcore capture works as expected after CPU hot-add or hot-removal operations Previously, on IBM POWER systems, after every CPU or memory hot-plug or removal operation, the CPU data on the device tree became stale unless the kdump.service is reloaded. To reload the latest CPU information, the kdump.service parses through the device nodes to fetch the CPU information. However, some of the CPU nodes are already lost during its hot-removal. Consequently, a race condition between the kdump.service reload and a CPU hot-removal happens at the same time and this may cause the dump to fail. A subsequent crash might then not capture the vmcore file. This update eliminates the need to reload the kdump.service after a CPU hot-plug and the vmcore capture works as expected in the described scenario. Note: This enhancement works as expected for firmware-assisted dumps ( fadump ). In the case of standard kdump , the kdump.service reload takes place during the hot-plug operation. (BZ#1922951) The kdumpctl command now supports the new kdumpctl estimate utility The kdumpctl command now supports the kdumpctl estimate utility. Based on the existing kdump configuration, kdumpctl estimate prints a suitable estimated value for kdump memory allocation. The minimum size of the crash kernel may vary depending on the hardware and machine specifications. Hence, previously, it was difficult to estimate an accurate crashkernel= value. With this update, the kdumpctl estimate utility provides an estimated value. This value is a best effort recommended estimate and can serve as a good reference to configure a feasible crashkernel= value. (BZ#1879558) IBM TSS 2.0 package rebased to 1.6.0 The IBM's Trusted Computing Group (TCG) Software Stack (TSS) 2.0 binary package has been upgraded to 1.6.0. This update adds the IBM TSS 2.0 support on AMD64 and Intel 64 architecture. It is a user space TSS for Trusted Platform Modules (TPM) 2.0 and implements the functionality equivalent to (but not API compatible with) the TCG TSS working group's Enhanced System Application Interface (ESAPI), System Application Interface (SAPI), and TPM Command Transmission Interface (TCTI) API with a simpler interface. It is a security middleware that allows applications and platforms to share and integrate the TPM into secure applications. This rebase provides many bug fixes and enhancements over the version. The most notable changes include the following new attributes: tsscertifyx509 : validates the x509 certificate tssgetcryptolibrary : displays the current cryptographic library tssprintattr : prints the TPM attributes as text tsspublicname : calculates the public name of an entity tsssetcommandcodeauditstatus : clears or sets code via TPM2_SetCommandCodeAuditStatus tsstpmcmd : sends an in-band TPM simulator signal (BZ#1822073) The schedutil CPU frequency governor is now available on RHEL 8 The schedutil CPU governor uses CPU utilization data available on the CPU scheduler. schedutil is a part of the CPU scheduler and it can access the scheduler's internal data structures directly. schedutil controls how the CPU would raise and lower its frequency in response to system load. You must manually select the schedutil frequency governor as it is not enabled as default. There is one policyX directory per CPU. schedutil is available in the policyX/scaling_governors list of the existing CPUFreq governors in the kernel and is attached to /sys/devices/system/cpu/cpufreq/policyx policy. The policy file can be overwritten to change it. Note that when using intel_pstate scaling drivers, it might be necessary to configure the intel_pstate=passive command line argument for intel_pstate to become available and be listed by the governor. intel_pstate is the default on Intel hardware with any modern CPU. (BZ#1938339) The rt-tests suite rebased to rt-tests-2.1 upstream version The rt-tests suite has been rebased to rt-tests-2.1 version, which provides multiple bug fixes and enhancements. The notable changes over the version include: Fixes to various programs in the rt-tests suite. Fixes to make programs more uniform with the common set of options, for example, the oslat program's option -t --runtime option is renamed to -D to specify the run duration to match the rest of the suite. Implements a new feature to output data in json format. ( BZ#1954387 ) Intel(R) QuickAssist Technology Library (QATlib) was rebased to version 21.05 The qatlib package has been rebased to version 21.05, which provides multiple bug fixes and enhancements. Notable changes include: Adding support for several encryption algorithms: AES-CCM 192/256 ChaCha20-Poly1305 PKE 8K (RSA, DH, ModExp, ModInv) Fixing device enumeration on different nodes Fixing pci_vfio_set_command for 32-bit builds For more information about QATlib installation, check Ensuring that Intel(R) QuickAssist Technology stack is working correctly on RHEL 8 . (BZ#1920237) 4.9. File systems and storage xfs_quota state command now outputs all grace times when multiple quota types are specified The xfs_quota state command now outputs grace times for multiple quota types specified on the command line. Previously, only one was shown even if more than one of -g , -p , or -u was specified. (BZ#1949743) -H option added to the rpc.gssd daemon and the set-home option added to the /etc/nfs.conf file This patch adds the -H option to rpc.gssd and the set-home option into /etc/nfs.conf , but does not change the default behavior. By default, rpc.gssd sets USDHOME to / to avoid possible deadlock that may happen when users' home directories are on an NFS share with Kerberos security. If either the -H option is added to rpc.gssd , or set-home=0 is added to /etc/nfs.conf , rpc.gssd does not set USDHOME to / . These options allow you to use Kerberos k5identity files in USDHOME/.k5identity and assumes NFS home directory is not on an NFS share with Kerberos security. These options are provided for use in only specific environments, such as the need for k5identity files. For more information see the k5identity man page. (BZ#1868087) The storage RHEL system role now supports LVM VDO volumes Virtual Data Optimizer (VDO) helps to optimize usage of the storage volumes. With this enhancement, administrators can use the storage system role to manage compression and deduplication on Logical Manager Volumes (LVM) VDO volumes. ( BZ#1882475 ) 4.10. High availability and clusters Local mode version of pcs cluster setup command is now fully supported By default, the pcs cluster setup command automatically synchronizes all configuration files to the cluster nodes. Since RHEL 8.3, the pcs cluster setup command has provided the --corosync-conf option as a Technology Preview. This feature is now fully supported in RHEL 8.5. Specifying this option switches the command to local mode. In this mode, the pcs command-line interface creates a corosync.conf file and saves it to a specified file on the local node only, without communicating with any other node. This allows you to create a corosync.conf file in a script and handle that file by means of the script. ( BZ#1839637 ) Ability to configure watchdog-only SBD for fencing on subset of cluster nodes Previously, to use a watchdog-only SBD configuration, all nodes in the cluster had to use SBD. That prevented using SBD in a cluster where some nodes support it but other nodes (often remote nodes) required some other form of fencing. Users can now configure a watchdog-only SBD setup using the new fence_watchdog agent, which allows cluster configurations where only some nodes use watchdog-only SBD for fencing and other nodes use other fencing types. A cluster may only have a single such device, and it must be named watchdog . ( BZ#1443666 ) New pcs command to update SCSI fencing device without causing restart of all other resources Updating a SCSI fencing device with the pcs stonith update command causes a restart of all resources running on the same node where the stonith resource was running. The new pcs stonith update-scsi-devices command allows you to update SCSI devices without causing a restart of other cluster resources. ( BZ#1872378 ) New reduced output display option for pcs resource safe-disable command The pcs resource safe-disable and pcs resource disable --safe commands print a lengthy simulation result after an error report. You can now specify the --brief option for those commands to print errors only. The error report now always contains resource IDs of affected resources. ( BZ#1909901 ) pcs now accepts Promoted and Unpromoted as role names The pcs command-line interface now accepts Promoted and Unpromoted anywhere roles are specified in Pacemaker configuration. These role names are the functional equivalent of the Master and Slave Pacemaker roles. Master and Slave remain the names for these roles in configuration displays and help text. ( BZ#1885293 ) New pcs resource status display commands The pcs resource status and the pcs stonith status commands now support the following options: You can display the status of resources configured on a specific node with the pcs resource status node= node_id command and the pcs stonith status node= node_id command. You can use these commands to display the status of resources on both cluster and remote nodes. You can display the status of a single resource with the pcs resource status resource_id and the pcs stonith status resource_id commands. You can display the status of all resources with a specified tag with the pcs resource status tag_id and the pcs stonith status tag_id commands. ( BZ#1290830 , BZ#1285269) New LVM volume group flag to control autoactivation LVM volume groups now support a setautoactivation flag which controls whether logical volumes that you create from a volume group will be automatically activated on startup. When creating a volume group that will be managed by Pacemaker in a cluster, set this flag to n with the vgcreate --setautoactivation n command for the volume group to prevent possible data corruption. If you have an existing volume group used in a Pacemaker cluster, set the flag with vgchange --setautoactivation n . ( BZ#1899214 ) 4.11. Dynamic programming languages, web and database servers The nodejs:16 module stream is now fully supported The nodejs:16 module stream, previously available as a Technology preview, is fully supported with the release of the RHSA-2021:5171 advisory. The nodejs:16 module stream now provides Node.js 16.13.1 , which is a Long Term Support (LTS) version. Node.js 16 included in RHEL 8.5 provides numerous new features and bug and security fixes over Node.js 14 available since RHEL 8.3. Notable changes include: The V8 engine has been upgraded to version 9.4. The npm package manager has been upgraded to version 8.1.2. A new Timers Promises API provides an alternative set of timer functions that return Promise objects. Node.js now provides a new experimental Web Streams API. Node.js now includes Corepack , an experimental tool that enables you to use package managers configured in the given project without the need to manually install them. Node.js now provides an experimental ECMAScript modules (ESM) loader hooks API, which consolidates ESM loader hooks. To install the nodejs:16 module stream, use: If you want to upgrade from the nodejs:14 stream, see Switching to a later stream . (BZ#1953991, BZ#2027610) A new module stream: ruby:3.0 RHEL 8.5 introduces Ruby 3.0.2 in a new ruby:3.0 module stream. This version provides a number of performance improvements, bug and security fixes, and new features over Ruby 2.7 distributed with RHEL 8.3. Notable enhancements include: Concurrency and parallelism features: Ractor , an Actor-model abstraction that provides thread-safe parallel execution, is provided as an experimental feature. Fiber Scheduler has been introduced as an experimental feature. Fiber Scheduler intercepts blocking operations, which enables light-weight concurrency without changing existing code. Static analysis features: The RBS language has been introduced, which describes the structure of Ruby programs. The rbs gem has been added to parse type definitions written in RBS . The TypeProf utility has been introduced, which is a type analysis tool for Ruby code. Pattern matching with the case/in expression is no longer experimental. One-line pattern matching, which is an experimental feature, has been redesigned. Find pattern has been added as an experimental feature. The following performance improvements have been implemented: Pasting long code to the Interactive Ruby Shell (IRB) is now significantly faster. The measure command has been added to IRB for time measurement. Other notable changes include: Keyword arguments have been separated from other arguments. The default directory for user-installed gems is now USDHOME/.local/share/gem/ unless the USDHOME/.gem/ directory is already present. To install the ruby:3.0 module stream, use: If you want to upgrade from an earlier ruby module stream, see Switching to a later stream . ( BZ#1938942 ) Changes in the default separator for the Python urllib parsing functions To mitigate the Web Cache Poisoning CVE-2021-23336 in the Python urllib library, the default separator for the urllib.parse.parse_qsl and urllib.parse.parse_qs functions is being changed from both ampersand ( & ) and semicolon ( ; ) to only an ampersand. This change was implemented in Python 3.6 with the release of RHEL 8.4, and now is being backported to Python 3.8 and Python 2.7. The change of the default separator is potentially backwards incompatible, therefore Red Hat provides a way to configure the behavior in Python packages where the default separator has been changed. In addition, the affected urllib parsing functions issue a warning if they detect that a customer's application has been affected by the change. For more information, see the Mitigation of Web Cache Poisoning in the Python urllib library (CVE-2021-23336) Knowledgebase article. Python 3.9 is unaffected and already includes the new default separator ( & ), which can be changed only by passing the separator parameter when calling the urllib.parse.parse_qsl and urllib.parse.parse_qs functions in Python code. (BZ#1935686, BZ#1931555, BZ#1969517) The Python ipaddress module no longer allows zeros in IPv4 addresses To mitigate CVE-2021-29921 , the Python ipaddress module now rejects IPv4 addresses with leading zeros with an AddressValueError: Leading zeros are not permitted error. This change has been introduced in the python38 and python39 modules. Earlier Python versions distributed in RHEL are not affected by CVE-2021-29921. Customers who rely on the behavior can pre-process their IPv4 address inputs to strip the leading zeros off. For example: To strip the leading zeros off with an explicit loop for readability, use: (BZ#1986007, BZ#1970504, BZ#1970505) The php:7.4 module stream rebased to version 7.4.19 The PHP scripting language, provided by the php:7.4 module stream, has been upgraded from version 7.4.6 to version 7.4.19. This update provides multiple security and bug fixes. (BZ#1944110) A new package: pg_repack A new pg_repack package has been added to the postgresql:12 and postgresql:13 module streams. The pg_repack package provides a PostgreSQL extension that lets you remove bloat from tables and indexes, and optionally restore physical order of clustered indexes. (BZ#1967193, BZ#1935889) A new module stream: nginx:1.20 The nginx 1.20 web and proxy server is now available as the nginx:1.20 module stream. This update provides a number of bug fixes, security fixes, new features, and enhancements over the previously released version 1.18. New features: nginx now supports client SSL certificate validation with Online Certificate Status Protocol (OCSP). nginx now supports cache clearing based on the minimum amount of free space. This support is implemented as the min_free parameter of the proxy_cache_path directive. A new ngx_stream_set_module module has been added, which enables you to set a value for a variable. Enhanced directives: Multiple new directives are now available, such as ssl_conf_command and ssl_reject_handshake . The proxy_cookie_flags directive now supports variables. Improved support for HTTP/2: The ngx_http_v2 module now includes the lingering_close , lingering_time , lingering_timeout directives. Handling connections in HTTP/2 has been aligned with HTTP/1.x. From nginx 1.20 , use the keepalive_timeout and keepalive_requests directives instead of the removed http2_recv_timeout , http2_idle_timeout , and http2_max_requests directives. To install the nginx:1.20 stream, use: If you want to upgrade from the nginx:1.20 stream, see Switching to a later stream . (BZ#1945671) The squid:4 module stream rebased to version 4.15 The Squid proxy server, available in the squid:4 module stream, has been upgraded from version 4.11 to version 4.15. This update provides various bug and security fixes. (BZ#1964384) LVM system.devices file feature now available in RHEL 8 RHEL 8.5 introduces the LVM system.devices file feature. By creating a list of devices in the /etc/lvm/devices/system.devices file, you can select specific devices for LVM to recognize and use, and prevent LVM from using unwanted devices. To enable the system.devices file feature, set use_devicesfile=1 in the lvm.conf configuration file and add devices to the system.devices file. LVM ignores any devices filter settings while the system.devices file feature is enabled. To prevent warning messages, remove your filter settings from the lvm.conf file. For more information, see the lvmdevices(8) man page. (BZ#1922312) quota now supports HPE XFS The quota utilities now provide support for the HPE XFS file system. As a result, users of HPE XFS can monitor and and manage user and group disk usage through quota utilities. (BZ#1945408) mutt rebased to version 2.0.7 The Mutt email client has been updated to version 2.0.7, which provides a number of enhancements and bug fixes. Notable changes include: Mutt now provides support for the OAuth 2.0 authorization protocol using the XOAUTH2 mechanism. Mutt now also supports the OAUTHBEARER authentication mechanism for the IMAP, POP, and SMTP protocols. The OAuth-based functionality is provided through external scripts. As a result, you can connect Mutt with various cloud email providers, such as Gmail using authentication tokens. For more information on how to set up Mutt with OAuth support, see How to set up Mutt with Gmail using OAuth2 authentication . Mutt adds support for domain-literal email addresses, for example, user@[IPv6:fcXX:... ] . The new USDssl_use_tlsv1_3 configuration variable allows TLS 1.3 connections if they are supported by the email server. This variable is enabled by default. The new USDimap_deflate variable adds support for the COMPRESS=DEFLATE compression. The variable is disabled by default. The USDssl_starttls variable no longer controls aborting an unencrypted IMAP PREAUTH connection. Use the USDssl_force_tls variable instead if you rely on the STARTTLS process. Note that even after an update to the new Mutt version, the ssl_force_tls configuration variable still defaults to no to prevent RHEL users from encountering problems in their existing environments. In the upstream version of Mutt , ssl_force_tls is now enabled by default. ( BZ#1912614 , BZ#1890084 ) 4.12. Compilers and development tools Go Toolset rebased to version 1.16.7 Go Toolset has been upgraded to version 1.16.7. Notable changes include: The GO111MODULE environment variable is now set to on by default. To revert this setting, change GO111MODULE to auto . The Go linker now uses less resources and improves code robustness and maintainability. This applies to all supported architectures and operating systems. With the new embed package you can access embedded files while compiling programs. All functions of the io/ioutil package have been moved to the io and os packages. While you can still use io/ioutil , the io and os packages provide better definitions. The Delve debugger has been rebased to 1.6.0 and now supports Go 1.16.7 Toolset. For more information, see Using Go Toolset . (BZ#1938071) Rust Toolset rebased to version 1.54.0 Rust Toolset has been updated to version 1.54.0. Notable changes include: The Rust standard library is now available for the wasm32-unknown-unknown target. With this enhancement, you can generate WebAssembly binaries, including newly stabilized intrinsics. Rust now includes the IntoIterator implementation for arrays. With this enhancement, you can use the IntoIterator trait to iterate over arrays by value and pass arrays to methods. However, array.into_iter() still iterates values by reference until the 2021 edition of Rust. The syntax for or patterns now allows nesting anywhere in the pattern. For example: Pattern(1|2) instead of Pattern(1)|Pattern(2) . Unicode identifiers can now contain all valid identifier characters as defined in the Unicode Standard Annex #31. Methods and trait implementations have been stabilized. Incremental compilation is re-enabled by default. For more information, see Using Rust Toolset . (BZ#1945805) LLVM Toolset rebased to version 12.0.1 LLVM Toolset has been upgraded to version 12.0.1. Notable changes include: The new compiler flag -march=x86-64-v[234] has been added. The compiler flag -fasynchronous-unwind-tables of the Clang compiler is now the default on Linux AArch64/PowerPC. The Clang compiler now supports the C++20 likelihood attributes [[likely]] and [[unlikely]]. The new function attribute tune-cpu has been added. It allows microarchitectural optimizations to be applied independently from the target-cpu attribute or TargetMachine CPU. The new sanitizer -fsanitize=unsigned-shift-base has been added to the integer sanitizer -fsanitize=integer to improve security. Code generation on PowerPC targets has been optimized. The WebAssembly backend is now enabled in LLVM. With this enhancement, you can generate WebAssembly binaries with LLVM and Clang. For more information, see Using LLVM Toolset . (BZ#1927937) CMake rebased to version 3.20.2 CMake has been rebased from 3.18.2 to 3.20.2. To use CMake on a project that requires the version 3.20.2 or less, use the command cmake_minimum_required(version 3.20.2). Notable changes include: C++23 compiler modes can now be specified by using the target properties CXX_STANDARD , CUDA_STANDARD , OBJCXX_STANDARD , or by using the cxx_std_23 meta-feature of the compile features function. CUDA language support now allows the NVIDIA CUDA compiler to be a symbolic link. The Intel oneAPI NextGen LLVM compilers are now supported with the IntelLLVM compiler ID . CMake now facilitates cross compiling for Android by merging with the Android NDK's toolchain file. When running cmake(1) to generate a project build system, unknown command-line arguments starting with a hyphen are now rejected. For further information on new features and deprecated functionalities, see the CMake Release Notes . (BZ#1957947) New GCC Toolset 11 GCC Toolset 11 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository. The following components have been rebased since GCC Toolset 10: GCC to version 11.2 GDB to version 10.2 Valgrind to version 3.17.0 SystemTap to version 4.5 binutils to version 2.36 elfutils to version 0.185 dwz to version 0.14 Annobin to version 9.85 For a complete list of components, see GCC Toolset 11 . To install GCC Toolset 11, run the following command as root: To run a tool from GCC Toolset 11: To run a shell session where tool versions from GCC Toolset 11 override system versions of these tools: For more information, see Using GCC Toolset . The GCC Toolset 11 components are also available in the two container images: rhel8/gcc-toolset-11-toolchain , which includes the GCC compiler, the GDB debugger, and the make automation tool. rhel8/gcc-toolset-11-perftools , which includes the performance monitoring tools, such as SystemTap and Valgrind. To pull a container image, run the following command as root: Note that only the GCC Toolset 11 container images are now supported. Container images of earlier GCC Toolset versions are deprecated. (BZ#1953094) .NET updated to version 6.0 Red Hat Enterprise Linux 8.5 is distributed with .NET version 6.0. Notable improvements include: Support for 64-bit Arm (aarch64) Support for IBM Z and LinuxONE (s390x) For more information, see Release Notes for .NET 6.0 RPM packages and Release Notes for .NET 6.0 containers . ( BZ#2022794 ) GCC Toolset 11: GCC rebased to version 11.2 In GCC Toolset 11, the GCC package has been updated to version 11.2. Notable bug fixes and enhancements include: General improvements GCC now defaults to the DWARF Version 5 debugging format. Column numbers shown in diagnostics represent real column numbers by default and respect multicolumn characters. The straight-line code vectorizer considers the whole function when vectorizing. A series of conditional expressions that compare the same variable can be transformed into a switch statement if each of them contains a comparison expression. Interprocedural optimization improvements: A new IPA-modref pass, controlled by the -fipa-modref option, tracks side effects of function calls and improves the precision of points-to analysis. The identical code folding pass, controlled by the -fipa-icf option, was significantly improved to increase the number of unified functions and reduce compile-time memory use. Link-time optimization improvements: Memory allocation during linking was improved to reduce peak memory use. Using a new GCC_EXTRA_DIAGNOSTIC_OUTPUT environment variable in IDEs, you can request machine-readable "fix-it hints" without adjusting build flags. The static analyzer, run by the -fanalyzer option, is improved significantly with numerous bug fixes and enhancements provided. Language-specific improvements C family C and C++ compilers support non-rectangular loop nests in OpenMP constructs and the allocator routines of the OpenMP 5.0 specification. Attributes: The new no_stack_protector attribute marks functions that should not be instrumented with stack protection ( -fstack-protector ). The improved malloc attribute can be used to identify allocator and deallocator API pairs. New warnings: -Wsizeof-array-div , enabled by the -Wall option, warns about divisions of two sizeof operators when the first one is applied to an array and the divisor does not equal the size of the array element. -Wstringop-overread , enabled by default, warns about calls to string functions that try to read past the end of the arrays passed to them as arguments. Enhanced warnings: -Wfree-nonheap-object detects more instances of calls to deallocation functions with pointers not returned from a dynamic memory allocation function. -Wmaybe-uninitialized diagnoses the passing of pointers and references to uninitialized memory to functions that take const -qualified arguments. -Wuninitialized detects reads from uninitialized dynamically allocated memory. C Several new features from the upcoming C2X revision of the ISO C standard are supported with the -std=c2x and -std=gnu2x options. For example: The standard attribute is supported. The __has_c_attribute preprocessor operator is supported. Labels may appear before declarations and at the end of a compound statement. C++ The default mode is changed to -std=gnu++17 . The C++ library libstdc++ has improved C++17 support now. Several new C++20 features are implemented. Note that C++20 support is experimental. For more information about the features, see C++20 Language Features . The C++ front end has experimental support for some of the upcoming C++23 draft features. New warnings: -Wctad-maybe-unsupported , disabled by default, warns about performing class template argument deduction on a type with no deduction guides. -Wrange-loop-construct , enabled by -Wall , warns when a range-based for loop is creating unnecessary and resource inefficient copies. -Wmismatched-new-delete , enabled by -Wall , warns about calls to operator delete with pointers returned from mismatched forms of operator new or from other mismatched allocation functions. -Wvexing-parse , enabled by default, warns about the most vexing parse rule: the cases when a declaration looks like a variable definition, but the C++ language requires it to be interpreted as a function declaration. Architecture-specific improvements The 64-bit ARM architecture The Armv8-R architecture is supported through the -march=armv8-r option. GCC can autovectorize operations performing addition, subtraction, multiplication, and the accumulate and subtract variants on complex numbers. AMD and Intel 64-bit architectures The following Intel CPUs are supported: Sapphire Rapids, Alder Lake, and Rocket Lake. New ISA extension support for Intel AVX-VNNI is added. The -mavxvnni compiler switch controls the AVX-VNNI intrinsics. AMD CPUs based on the znver3 core are supported with the new -march=znver3 option. Three microarchitecture levels defined in the x86-64 psABI supplement are supported with the new -march=x86-64-v2 , -march=x86-64-v3 , and -march=x86-64-v4 options. (BZ#1946782) GCC Toolset 11: dwz now supports DWARF 5 In GCC Toolset 11, the dwz tool now supports the DWARF Version 5 debugging format. (BZ#1948709) GCC Toolset 11: GCC now supports the AIA user interrupts In GCC Toolset 11, GCC now supports the Accelerator Interfacing Architecture (AIA) user interrupts. (BZ#1927516) GCC Toolset 11: Generic SVE tuning defaults improved In GCC Toolset 11, generic SVE tuning defaults have been improved on the 64-bit ARM architecture. (BZ#1979715) SystemTap rebased to version 4.5 The SystemTap package has been updated to version 4.5. Notable bug fixes and enhancements include: 32-bit floating-point variables are automatically widened to double variables and, as a result, can be accessed directly as USDcontext variables. enum values can be accessed as USDcontext variables. The BPF uconversions tapset has been extended and includes more tapset functions to access values in user space, for example user_long_error() . Concurrency control has been significantly improved to provide stable operation on large servers. For further information, see the upstream SystemTap 4.5 release notes . ( BZ#1933889 ) elfutils rebased to version 0.185 The elfutils package has been updated to version 0.185. Notable bug fixes and enhancements include: The eu-elflint and eu-readelf tools now recognize and show the SHF_GNU_RETAIN and SHT_X86_64_UNWIND flags on ELF sections. The DEBUGINFOD_SONAME macro has been added to debuginfod.h . This macro can be used with the dlopen function to load the libdebuginfod.so library dynamically from an application. A new function debuginfod_set_verbose_fd has been added to the debuginfod-client library. This function enhances the debuginfod_find_* queries functionality by redirecting the verbose output to a separate file. Setting the DEBUGINFOD_VERBOSE environment variable now shows more information about which servers the debuginfod client connects to and the HTTP responses of those servers. The debuginfod server provides a new thread-busy metric and more detailed error metrics to make it easier to inspect processes that run on the debuginfod server. The libdw library now transparently handles the DW_FORM_indirect location value so that the dwarf_whatform function returns the actual FORM of an attribute. To reduce network traffic, the debuginfod-client library stores negative results in a cache, and client objects can reuse an existing connection. ( BZ#1933890 ) Valgrind rebased to version 3.17.0 The Valgrind package has been updated to version 3.17.0. Notable bug fixes and enhancements include: Valgrind can read the DWARF Version 5 debugging format. Valgrind supports debugging queries to the debuginfod server. The ARMv8.2 processor instructions are partially supported. The Power ISA v.3.1 instructions on POWER10 processors are partially supported. The IBM z14 processor instructions are supported. Most IBM z15 instructions are supported. The Valgrind tool suite supports the miscellaneous-instruction-extensions facility 3 and the vector-enhancements facility 2 for the IBM z15 processor. As a result, Valgrind runs programs compiled with GCC -march=z15 correctly and provides improved performance and debugging experience. The --track-fds=yes option respects -q ( --quiet ) and ignores the standard file descriptors stdin , stdout , and stderr by default. To track the standard file descriptors, use the --track-fds=all option. The DHAT tool has two new modes of operation: --mode=copy and --mode=ad-hoc . ( BZ#1933891 ) Dyninst rebased to version 11.0.0 The Dyninst package has been updated to version 11.0.0. Notable bug fixes and enhancements include: Support for the debuginfod server and for fetching separate debuginfo files. Improved detection of indirect calls to procedure linkage table (PLT) stubs. Improved C++ name demangling. Fixed memory leaks during code emitting. ( BZ#1933893 ) DAWR functionality improved in GDB on IBM POWER10 With this enhancement, new hardware watchpoint capabilities are now enabled for GDB on the IBM POWER10 processors. For example, a new set of DAWR/DAWRX registers has been added. (BZ#1854784) GCC Toolset 11: GDB rebased to version 10.2 In GCC Toolset 11, the GDB package has been updated to version 10.2. Notable bug fixes and enhancements include: New features Multithreaded symbol loading is enabled by default on architectures that support this feature. This change provides better performance for programs with many symbols. Text User Interface (TUI) windows can be arranged horizontally. GDB supports debugging multiple target connections simultaneously but this support is experimental and limited. For example, you can connect each inferior to a different remote server that runs on a different machine, or you can use one inferior to debug a local native process or a core dump or some other process. New and improved commands A new tui new-layout name window weight [ window weight... ] command creates a new text user interface (TUI) layout, you can also specify a layout name and displayed windows. The improved alias [-a] [--] alias = command [ default-args ] command can specify default arguments when creating a new alias. The set exec-file-mismatch and show exec-file-mismatch commands set and show a new exec-file-mismatch option. When GDB attaches to a running process, this option controls how GDB reacts when it detects a mismatch between the current executable file loaded by GDB and the executable file used to start the process. Python API The gdb.register_window_type function implements new TUI windows in Python. You can now query dynamic types. Instances of the gdb.Type class can have a new boolean attribute dynamic and the gdb.Type.sizeof attribute can have value None for dynamic types. If Type.fields() returns a field of a dynamic type, the value of its bitpos attribute can be None . A new gdb.COMMAND_TUI constant registers Python commands as members of the TUI help class of commands. A new gdb.PendingFrame.architecture() method retrieves the architecture of the pending frame. A new gdb.Architecture.registers method returns a gdb.RegisterDescriptorIterator object, an iterator that returns gdb.RegisterDescriptor objects. Such objects do not provide the value of a register but help understand which registers are available for an architecture. A new gdb.Architecture.register_groups method returns a gdb.RegisterGroupIterator object, an iterator that returns gdb.RegisterGroup objects. Such objects help understand which register groups are available for an architecture. (BZ#1954332) GCC Toolset 11: SystemTap rebased to version 4.5 In GCC Toolset 11, the SystemTap package has been updated to version 4.5. Notable bug fixes and enhancements include: 32-bit floating-point variables are now automatically widened to double variables and, as a result, can be accessed directly as USDcontext variables. enum values can now be accessed as USDcontext variables. The BPF uconversions tapset has been extended and now includes more tapset functions to access values in user space, for example user_long_error() . Concurrency control has been significantly improved to provide stable operation on large servers. For further information, see the upstream SystemTap 4.5 release notes . ( BZ#1957944 ) GCC Toolset 11: elfutils rebased to version 0.185 In GCC Toolset 11, the elfutils package has been updated to version 0.185. Notable bug fixes and enhancements include: The eu-elflint and eu-readelf tools now recognize and show the SHF_GNU_RETAIN and SHT_X86_64_UNWIND flags on ELF sections. The DEBUGINFOD_SONAME macro has been added to debuginfod.h . This macro can be used with the dlopen function to load the libdebuginfod.so library dynamically from an application. A new function debuginfod_set_verbose_fd has been added to the debuginfod-client library. This function enhances the debuginfod_find_* queries functionality by redirecting the verbose output to a separate file. Setting the DEBUGINFOD_VERBOSE environment variable now shows more information about which servers the debuginfod client connects to and the HTTP responses of those servers. The debuginfod server provides a new thread-busy metric and more detailed error metrics to make it easier to inspect processes that run on the debuginfod server. The libdw library now transparently handles the DW_FORM_indirect location value so that the dwarf_whatform function returns the actual FORM of an attribute. The debuginfod-client library now stores negative results in a cache and client objects can reuse an existing connection. This way unnecessary network traffic when using the library is prevented. ( BZ#1957225 ) GCC Toolset 11: Valgrind rebased to version 3.17.0 In GCC Toolset 11, the Valgrind package has been updated to version 3.17.0. Notable bug fixes and enhancements include: Valgrind can now read the DWARF Version 5 debugging format. Valgrind now supports debugging queries to the debuginfod server. Valgrind now partially supports the ARMv8.2 processor instructions. Valgrind now supports the IBM z14 processor instructions. Valgrind now partially supports the Power ISA v.3.1 instructions on POWER10 processors. The --track-fds=yes option now respects -q ( --quiet ) and ignores the standard file descriptors stdin , stdout , and stderr by default. To track the standard file descriptors, use the --track-fds=all option. The DHAT tool now has two new modes of operation: --mode=copy and --mode=ad-hoc . ( BZ#1957226 ) GCC Toolset 11: Dyninst rebased to version 11.0.0 In GCC Toolset 11, the Dyninst package has been updated to version 11.0.0. Notable bug fixes and enhancements include: Support for the debuginfod server and for fetching separate debuginfo files. Improved detection of indirect calls to procedure linkage table (PLT) stubs. Improved C++ name demangling. Fixed memory leaks during code emitting. ( BZ#1957942 ) PAPI library support for Fujitsu A64FX added PAPI library support for Fujitsu A64FX has been added. With this feature, developers can collect hardware statistics. (BZ#1908126) The PCP package was rebased to 5.3.1 The Performance Co-Pilot (PCP) package has been rebased to version 5.3.1. This release includes bug fixes, enhancements, and new features. Notable changes include: Scalability improvements, which now support centrally logged performance metrics for hundreds of hosts ( pmlogger farms) and automatic monitoring with performance rules ( pmie farms). Resolved memory leaks in the pmproxy service and the libpcp_web API library, and added instrumentation and new metrics to pmproxy . A new pcp-ss tool for historical socket statistics. Improvements to the pcp-htop tool. Extensions to the over-the-wire PCP protocol which now support higher resolution timestamps. ( BZ#1922040 ) The grafana package was rebased to version 7.5.9 The grafana package has been rebased to version 7.5.9. Notable changes include: New time series panel (beta) New pie chart panel (beta) Alerting support for Loki Multiple new query transformations For more information, see What's New in Grafana v7.4 , What's New in Grafana v7.5 . ( BZ#1921191 ) The grafana-pcp package was rebased to 3.1.0 The grafana-pcp package has been rebased to version 3.1.0. Notable changes include: Performance Co-Pilot (PCP) Vector Checklist dashboards use a new time series panel, show units in graphs, and contain updated help texts. Adding pmproxy URL and hostspec variables to PCP Vector Host Overview and PCP Checklist dashboards. All dashboards display datasource selection. Marking all included dashboards as readonly. Adding compatibility with Grafana 8. ( BZ#1921190 ) grafana-container rebased to version 7.5.9 The rhel8/grafana container image provides Grafana. Notable changes include: The grafana package is now updated to version 7.5.9. The grafana-pcp package is now updated to version 3.1.0. The container now supports the GF_INSTALL_PLUGINS environment variable to install custom Grafana plugins at container startup The rebase updates the rhel8/grafana image in the Red Hat Container Registry. To pull this container image, execute the following command: ( BZ#1971557 ) pcp-container rebased to version 5.3.1 The rhel8/pcp container image provides Performance Co-Pilot. The pcp-container package has been upgraded to version 5.3.1. Notable changes include: The pcp package is now updated to version 5.3.1. The rebase updates the rhel8/pcp image in the Red Hat Container Registry. To pull this container image, execute the following command: ( BZ#1974912 ) The new pcp-ss PCP utility is now available The pcp-ss PCP utility reports socket statistics collected by the pmdasockets(1) PMDA. The command is compatible with many of the ss command line options and reporting formats. It also offers the advantages of local or remote monitoring in live mode and historical replay from a previously recorded PCP archive. ( BZ#1879350 ) Power consumption metrics now available in PCP The new pmda-denki Performance Metrics Domain Agent (PMDA) reports metrics related to power consumption. Specifically, it reports: Consumption metrics based on Running Average Power Limit (RAPL) readings, available on recent Intel CPUs Consumption metrics based on battery discharge, available on systems which have a battery (BZ#1629455) 4.13. Identity Management IdM now supports new password policy options With this update, Identity Management (IdM) supports additional libpwquality library options: --maxrepeat Specifies the maximum number of the same character in sequence. --maxsequence Specifies the maximum length of monotonic character sequences ( abcd ). --dictcheck Checks if the password is a dictionary word. --usercheck Checks if the password contains the username. Use the ipa pwpolicy-mod command to apply these options. For example, to apply the user name check to all new passwords suggested by the users in the managers group: If any of the new password policy options are set, then the minimum length of passwords is 6 characters regardless of the value of the --minlength option. The new password policy settings are applied only to new passwords. In a mixed environment with RHEL 7 and RHEL 8 servers, the new password policy settings are enforced only on servers running on RHEL 8.4 and later. If a user is logged in to an IdM client and the IdM client is communicating with an IdM server running on RHEL 8.3 or earlier, then the new password policy requirements set by the system administrator will not be applied. To ensure consistent behavior, upgrade or update all servers to RHEL 8.4 and later. (JIRA:RHELPLAN-89566) Improved the SSSD debug logging by adding a unique identifier tag for each request As SSSD processes requests asynchronously, it is not easy to follow log entries for individual requests in the backend logs, as messages from different requests are added to the same log file. To improve the readability of debug logs, a unique request identifier is now added to log messages in the form of RID#<integer> . This allows you to isolate logs pertaining to an individual request, and you can track requests from start to finish across log files from multiple SSSD components. For example, the following sample output from an SSSD log file shows the unique identifiers RID#3 and RID#4 for two different requests: (JIRA:RHELPLAN-92473) IdM now supports the automember and server Ansible modules With this update, the ansible-freeipa package contains the ipaautomember and ipaserver modules: Using the ipaautomember module, you can add, remove, and modify automember rules and conditions. As a result, future IdM users and hosts that meet the conditions will be assigned to IdM groups automatically. Using the ipaserver module, you can ensure various parameters of the presence or absence of a server in the IdM topology. You can also ensure that a replica is hidden or visible. (JIRA:RHELPLAN-96640) IdM performance baseline With this update, a RHEL 8.5 IdM server with 4 CPUs and 8GB of RAM has been tested to successfully enroll 130 IdM clients simultaneously. (JIRA:RHELPLAN-97145) SSSD Kerberos cache performance has been improved The System Security Services Daemon (SSSD) Kerberos Cache Manager (KCM) service now includes the new operation KCM_GET_CRED_LIST . This enhancement improves KCM performance by reducing the number of input and output operations while iterating through a credentials cache. ( BZ#1956388 ) SSSD now logs backtraces by default With this enhancement, SSSD now stores detailed debug logs in an in-memory buffer and appends them to log files when a failure occurs. By default, the following error levels trigger a backtrace: Level 0: fatal failures Level 1: critical failures Level 2: serious failures You can modify this behavior for each SSSD process by setting the debug_level option in the corresponding section of the sssd.conf configuration file: If you set the debugging level to 0, only level 0 events trigger a backtrace. If you set the debugging level to 1, levels 0 and 1 trigger a backtrace. If you set the debugging level to 2 or higher, events at level 0 through 2 trigger a backtrace. You can disable this feature per SSSD process by setting the debug_backtrace_enabled option to false in the corresponding section of sssd.conf : ( BZ#1949149 ) SSSD KCM now supports the auto-renewal of ticket granting tickets With this enhancement, you can now configure the System Security Services Daemon (SSSD) Kerberos Cache Manager (KCM) service to auto-renew ticket granting tickets (TGTs) stored in the KCM credential cache on an Identity Management (IdM) server. Renewals are only attempted when half of the ticket lifetime has been reached. To use auto-renewal, the key distribution center (KDC) on the IdM server must be configured to support renewable Kerberos tickets. You can enable TGT auto-renewal by modifying the [kcm] section of the /etc/sssd/sssd.conf file. For example, you can configure SSSD to check for renewable KCM-stored TGTs every 60 minutes and attempt auto-renewal if half of the ticket lifetime has been reached by adding the following options to the file: Alternatively, you can configure SSSD to inherit krb5 options for renewals from an existing domain: For more information, see the Renewals section of the sssd-kcm man page. ( BZ#1627112 ) samba rebased to version 4.14.4 Publishing printers in Active Directory (AD) has increased reliability, and additional printer features have been added to the published information in AD. Also, Samba now supports Windows drivers for the ARM64 architecture. The ctdb isnotrecmaster command has been removed. As an alternative, use ctdb pnn or the ctdb recmaster commands. The clustered trivial database (CTDB) ctdb natgw master and slave-only parameters have been renamed to ctdb natgw leader and follower-only . Back up the database files before starting Samba. When the smbd , nmbd , or winbind services start Samba automatically updates its tdb database files. Note that Red Hat does not support downgrading tdb database files. After updating Samba, verify the /etc/samba/smb.conf file using the testparm utility. For further information about notable changes, read the upstream release notes before updating. ( BZ#1944657 ) The dnaInterval configuration attribute is now supported With this update, Red Hat Directory Server supports setting the dnaInterval attribute of the Distributed Numeric Assignment (DNA) plug-in in the cn= <DNA_config_entry> ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config entry. The DNA plug-in generates unique values for specified attributes. In a replication environment, servers can share the same range. To avoid overlaps on different servers, you can set the dnaInterval attribute to skip some values. For example, if the interval is 3 and the first number in the range is 1 , the number used in the range is 4 , then 7 , then 10 . For further details, see the dnaInterval parameter description. ( BZ#1938239 ) Directory Server rebased to version 1.4.3.27 The 389-ds-base packages have been upgraded to upstream version 1.4.3.27, which provides a number of bug fixes and enhancements over the version. For a complete list of notable changes, read the upstream release notes before updating: https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-24.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-23.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-22.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-21.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-20.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-19.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-18.html https://directory.fedoraproject.org/docs/389ds/releases/release-1-4-3-17.html ( BZ#1947044 ) Directory Server now supports temporary passwords This enhancement enables administrators to configure temporary password rules in global and local password policies. With these rules, you can configure that, when an administrator resets the password of a user, the password is temporary and only valid for a specific time and for a defined number of attempts. Additionally, you can configure that the expiration time does not start directly when the administrator changes the password. As a result, Directory Server allows the user only to authenticate using the temporary password for a finite period of time or attempts. Once the user authenticates successfully, Directory Server allows this user only to change its password. (BZ#1626633) IdM KDC now issues Kerberos tickets with PAC information to increase security With this update, to increase security, RHEL Identity Management (IdM) now issues Kerberos tickets with Privilege Attribute Certificate (PAC) information by default in new deployments. A PAC has rich information about a Kerberos principal, including its Security Identifier (SID), group memberships, and home directory information. As a result, Kerberos tickets are less susceptible to manipulation by malicious servers. SIDs, which Microsoft Active Directory (AD) uses by default, are globally unique identifiers that are never reused. SIDs express multiple namespaces: each domain has a SID, which is a prefix in the SID of each object. Starting with RHEL 8.5, when you install an IdM server or replica, the installation script generates SIDs for users and groups by default. This allows IdM to work with PAC data. If you installed IdM before RHEL 8.5, and you have not configured a trust with an AD domain, you may not have generated SIDs for your IdM objects. For more information about generating SIDs for your IdM objects, see Enabling Security Identifiers (SIDs) in IdM . By evaluating PAC information in Kerberos tickets, you can control resource access with much greater detail. For example, the Administrator account in one domain has a uniquely different SID than the Administrator account in any other domain. In an IdM environment with a trust to an AD domain, you can set access controls based on globally unique SIDs rather than simple user names or UIDs that might repeat in different locations, such as every Linux root account having a UID of 0. (Jira:RHELPLAN-159143) Directory Server provides monitoring settings that can prevent database corruption caused by lock exhaustion This update adds the nsslapd-db-locks-monitoring-enable parameter to the cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config entry. If it is enabled, which is the default, Directory Server aborts all of the searches if the number of active database locks is higher than the percentage threshold configured in nsslapd-db-locks-monitoring-threshold . If an issue is encountered, the administrator can increase the number of database locks in the nsslapd-db-locks parameter in the cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config entry. This can prevent data corruption. Additionally, the administrator now can set a time interval in milliseconds that the thread sleeps between the checks. For further details, see the parameter descriptions in the Red Hat Directory Server Configuration, Command, and File Reference . ( BZ#1812286 ) Directory Server can exclude attributes and suffixes from the retro changelog database This enhancement adds the nsslapd-exclude-attrs and nsslapd-exclude-suffix parameters to Directory Server. You can set these parameters in the cn=Retro Changelog Plugin,cn=plugins,cn=config entry to exclude certain attributes or suffixes from the retro changelog database. ( BZ#1850664 ) Directory Server supports the entryUUID attribute With this enhancement, Directory Server supports the entryUUID attribute to be compliant with RFC 4530 . For example, with support for entryUUID , migrations from OpenLDAP are easier. By default, Directory Server adds the entryUUID attribute only to new entries. To manually add it to existing entries, use the dsconf <instance_name> plugin entryuuid fixup command. (BZ#1944494) Added a new message to help set up nsSSLPersonalitySSL Previously, many times happened that RHDS instance failed to start if the TLS certificate nickname didn't match the value of the configuration parameter nsSSLPersonalitySSL . This mismatch happened when customer copy the NSS DB from a instance or export the certificate's data but forget to set the nsSSLPersonalitySSL value accordingly. With this update, you can see log an additional message which should help a user to set up nsSSLPersonalitySSL correctly. ( BZ#1895460 ) 4.14. Desktop You can now connect to network at the login screen With this update, you can now connect to your network and configure certain network options at the GNOME Display Manager (GDM) login screen. As a result, you can log in as an enterprise user whose home directory is stored on a remote server. The login screen supports the following network options: Wired network Wireless network, including networks protected by a password Virtual Private Network (VPN) The login screen cannot open windows for additional network configuration. As a consequence, you cannot use the following network options at the login screen: Networks that open a captive portal Modem connections Wireless networks with enterprise WPA or WPA2 encryption that have not been preconfigured The network options at the login screen are disabled by default. To enable the network settings, use the following procedure: Create the /etc/polkit-1/rules.d/org.gnome.gdm.rules file with the following content: Restart GDM: Warning Restarting GDM terminates all your graphical user sessions. At the login screen, access the network settings in the menu on the right side of the top panel. ( BZ#1935261 ) Displaying the system security classification at login You can now configure the GNOME Display Manager (GDM) login screen to display an overlay banner that contains a predefined message. This is useful for deployments where the user is required to read the security classification of the system before logging in. To enable the overlay banner and configure a security classification message, use the following procedure: Install the gnome-shell-extension-heads-up-display package: Create the /etc/dconf/db/gdm.d/99-hud-message file with the following content: Replace the following values with text that describes the security classification of your system: Security classification title A short heading that identifies the security classification. Security classification description A longer message that provides additional details, such as references to various guidelines. Update the dconf database: Reboot the system. ( BZ#1651378 ) Flicker free boot is available You can now enable flicker free boot on your system. When flicker free boot is enabled, it eliminates abrupt graphical transitions during the system boot process, and the display does not briefly turn off during boot. To enable flicker free boot, use the following procedure: Configure the boot loader menu to hide by default: Update the boot loader configuration: On UEFI systems: On legacy BIOS systems: Reboot the system. As a result, the boot loader menu does not display during system boot, and the boot process is graphically smooth. To access the boot loader menu, repeatedly press Esc after turning on the system. (JIRA:RHELPLAN-99148) Updated support for emoji This release updates support for Unicode emoji characters from version 11 to version 13 of the emoji standard. As a result, you can now use more emoji characters on RHEL. The following packages that provide emoji functionality have been rebased: Package version Rebased to version cldr-emoji-annotation 33.1.0 38 google-noto-emoji-fonts 20180508 20200723 unicode-emoji 10.90.20180207 13.0 (JIRA:RHELPLAN-61867) You can set a default desktop session for all users With this update, you can now configure a default desktop session that is preselected for all users that have not logged in yet. If a user logs in using a different session than the default, their selection persists to their login. To configure the default session, use the following procedure: Copy the configuration file template: Edit the new /etc/accountsservice/user-templates/standard file. On the Session= gnome line, replace gnome with the session that you want to set as the default. Optional: To configure an exception to the default session for a certain user, follow these steps: Copy the template file to /var/lib/AccountsService/users/ user-name : In the new file, replace variables such as USD{USER} and USD{ID} with the user values. Edit the Session value. (BZ#1812788) 4.15. Graphics infrastructures Support for new GPUs The following new GPUs are now supported. Intel graphics: Alder Lake-S (ADL-S) Support for Alder Lake-S graphics is disabled by default. To enable it, add the following option to the kernel command line: Replace PCI_ID with either the PCI device ID of your Intel GPU, or with the * character to enable support for all alpha-quality hardware that uses the i915 driver. Elkhart Lake (EHL) Comet Lake Refresh (CML-R) with the TGP Platform Controller Hub (PCH) AMD graphics: Cezzane and Barcelo Sienna Cichlid Dimgrey Cavefish (JIRA:RHELPLAN-99040, BZ#1784132, BZ#1784136, BZ#1838558) The Wayland session is available with the proprietary NVIDIA driver The proprietary NVIDIA driver now supports hardware accelerated OpenGL and Vulkan rendering in Xwayland. As a result, you can now enable the GNOME Wayland session with the proprietary NVIDIA driver. Previously, only the legacy X11 session was available with the driver. X11 remains as the default session to avoid a possible disruption when updating from a version of RHEL. To enable Wayland with the NVIDIA proprietary driver, use the following procedure: Enable Direct Rendering Manager (DRM) kernel modesetting by adding the following option to the kernel command line: For details on enabling kernel options, see Configuring kernel command-line parameters . Reboot the system. The Wayland session is now available at the login screen. Optional: To avoid the loss of video allocations when suspending or hibernating the system, enable the power management option with the driver. For details, see Configuring Power Management Support . For the limitations related to the use of DRM kernel modesetting in the proprietary NVIDIA driver, see Direct Rendering Manager Kernel Modesetting (DRM KMS) . (JIRA:RHELPLAN-99049) Improvements to GPU support The following new GPU features are now enabled: Panel Self Refresh (PSR) is now enabled for Intel Tiger Lake and later graphics, which improves power consumption. Intel Tiger Lake, Ice Lake, and later graphics can now use High Bit Rate 3 (HBR3) mode with the DisplayPort Multi-Stream Transport (DP-MST) transmission method. This enables support for certain display capabilities with docks. Modesetting is now enabled on NVIDIA Ampere GPUs. This includes the following models: GA102, GA104, and GA107, including hybrid graphics systems. Most laptops with Intel integrated graphics and an NVIDIA Ampere GPU can now output to external displays using either GPU. (JIRA:RHELPLAN-99043) Updated graphics drivers The following graphics drivers have been updated: amdgpu ast i915 mgag2000 nouveau vmwgfx vmwgfx The Mesa library Vulkan packages (JIRA:RHELPLAN-99044) Intel Tiger Lake graphics are fully supported Intel Tiger Lake UP3 and UP4 Xe graphics, which were previously available as a Technology Preview, are now fully supported. Hardware acceleration is enabled by default on these GPUs. (BZ#1783396) 4.16. Red Hat Enterprise Linux system roles Users can configure the maximum root distance using the timesync_max_distance parameter With this update, the timesync RHEL system role is able to configure the tos maxdist of ntpd and the maxdistance parameter of the chronyd service using the new timesync_max_distance parameter. The timesync_max_distance parameter configures the maximum root distance to accept measurements from Network Time Protocol (NTP) servers. The default value is 0, which keeps the provider-specific defaults. ( BZ#1938016 ) Elasticsearch can now accept lists of servers Previously, the server_host parameter in Elasticsearch output for the Logging RHEL system role accepted only a string value for a single host. With this enhancement, it also accepts a list of strings to support multiple hosts. As a result, you can now configure multiple Elasticsearch hosts in one Elasticsearch output dictionary. ( BZ#1986463 ) Network Time Security (NTS) option added to the timesync RHEL system role The nts option was added to the timesync RHEL system role to enable NTS on client servers. NTS is a new security mechanism specified for Network Time Protocol (NTP), which can secure synchronization of NTP clients without client-specific configuration and can scale to large numbers of clients. The NTS option is supported only with the chrony NTP provider in version 4.0 and later. ( BZ#1970664 ) The SSHD RHEL system role now supports non-exclusive configuration snippets With this feature, you can configure SSHD through different roles and playbooks without rewriting the configurations by using namespaces. Namespaces are similar to a drop-in directory, and define non-exclusive configuration snippets for SSHD. As a result, you can use the SSHD RHEL system role from a different role, if you need to configure only a small part of the configuration and not the entire configuration file. ( BZ#1970642 ) The SELinux role can now manage SELinux modules The SElinux RHEL system role has the ability to manage SELinux modules. With this update, users can provide their own custom modules from .pp or .cil files, which allows for a more flexible SELinux policy management. ( BZ#1848683 ) Users can manage the chrony interleaved mode, NTP filtering, and hardware timestamping With this update, the timesync RHEL system role enables you to configure the Network Time Protocol (NTP) interleaved mode, additional filtering of NTP measurements, and hardware timestamping. The chrony package of version 4.0 adds support for these functionalities to achieve a highly accurate and stable synchronization of clocks in local networks. To enable the NTP interleaved mode, make sure the server supports this feature, and set the xleave option to yes for the server in the timesync_ntp_servers list. The default value is no . To set the number of NTP measurements per clock update, set the filter option for the NTP server you are configuring. The default value is 1 . To set the list of interfaces which should have hardware timestamping enabled for NTP, use the timesync_ntp_hwts_interfaces parameter. The special value ["*"] enables timestamping on all interfaces that support it. The default is [] . ( BZ#1938020 ) timesync role enables customization settings for chrony Previously, there was no way to provide customized chrony configuration using the timesync role. This update adds the timesync_chrony_custom_settings parameter, which enables users to to provide customized settings for chrony, such as: ( BZ#1938023 ) timesync role supports hybrid end-to-end delay mechanisms With this enhancement, you can use the new hybrid_e2e option in timesync_ptp_domains to enable hybrid end-to-end delay mechanisms in the timesync role. The hybrid end-to-end delay mechanism uses unicast delay requests, which are useful to reduce multicast traffic in large networks. ( BZ#1957849 ) ethtool now supports reducing the packet loss rate and latency Tx or Rx buffers are memory spaces allocated by a network adapter to handle traffic bursts. Properly managing the size of these buffers is critical to reduce the packet loss rate and achieve acceptable network latency. The ethtool utility now reduces the packet loss rate or latency by configuring the ring option of the specified network device. The list of supported ring parameters is: rx - Changes the number of ring entries for the Rx ring. rx-jumbo - Changes the number of ring entries for the Rx Jumbo ring. rx-mini - Changes the number of ring entries for the Rx Mini ring. tx - Changes the number of ring entries for the Tx ring. ( BZ#1959649 ) New ipv6_disabled parameter is now available With this update, you can now use the ipv6_disabled parameter to disable ipv6 when configuring addresses. ( BZ#1939711 ) RHEL system roles now support VPN management Previously, it was difficult to set up secure and properly configured IPsec tunneling and virtual private networking (VPN) solutions on Linux. With this enhancement, you can use the VPN RHEL system role to set up and configure VPN tunnels for host-to-host and mesh connections more easily across large numbers of hosts. As a result, you have a consistent and stable configuration interface for VPN and IPsec tunneling configuration within the RHEL system roles project. ( BZ#1943679 ) The storage RHEL system role now supports filesystem relabel Previously, the storage role did not support relabelling. This update fixes the issue, providing support to relabel the filesystem label. To do this, set a new label string to the fs_label parameter in storage_volumes . ( BZ#1876315 ) Support for volume sizes expressed as a percentage is available in the storage system role This enhancement adds support to the storage RHEL system role to express LVM volume sizes as a percentage of the pool's total size. You can specify the size of LVM volumes as a percentage of the pool/VG size, for example: 50% in addition to the human-readable size of the file system, for example, 10g , 50 GiB . ( BZ#1894642 ) New Ansible Role for Microsoft SQL Server Management The new microsoft.sql.server role is designed to help IT and database administrators automate processes involved with setup, configuration, and performance tuning of SQL Server on Red Hat Enterprise Linux. ( BZ#2013853 ) RHEL system roles do not support Ansible 2.8 With this update, support for Ansible 2.8 is no longer supported because the version is past the end of the product life cycle. The RHEL system roles support Ansible 2.9. ( BZ#1989199 ) The postfix role of RHEL system roles is fully supported Red Hat Enterprise Linux system roles provides a configuration interface for Red Hat Enterprise Linux subsystems, which makes system configuration easier through the inclusion of Ansible Roles. This interface enables managing system configurations across multiple versions of Red Hat Enterprise Linux, as well as adopting new major releases. The rhel-system-roles packages are distributed through the AppStream repository. As of RHEL 8.5, the postfix role is fully supported. For more information, see the Knowledgebase article about RHEL system roles . ( BZ#1812552 ) 4.17. Virtualization Enhancements to managing virtual machines in the web console The Virtual Machines (VM) section of the RHEL 8 web console has been redesigned for a better user experience. In addition, the following changes and features have also been introduced: A single page now includes all the relevant VM information, such as VM status, disks, networks, or console information. You can now live migrate a VM using the web console The web console now allows editing the MAC address of a VM's network interface You can use the web console to view a list of host devices attached to a VM (JIRA:RHELPLAN-79074) zPCI device assignment It is now possible to attach zPCI devices as mediated devices to virtual machines (VMs) hosted on RHEL 8 running on IBM Z hardware. For example, this enables the use of NVMe flash drives in VMs. (JIRA:RHELPLAN-59528) 4.18. Supportability sos rebased to version 4.1 The sos package has been upgraded to version 4.1, which provides multiple bug fixes and enhancements. Notable enhancements include: Red Hat Update Infrastructure ( RHUI ) plugin is now natively implemented in the sos package. With the rhui-debug.py python binary, sos can collect reports from RHUI including, for example, the main configuration file, the rhui-manager log file, or the installation configuration. sos introduces the --cmd-timeout global option that sets manually a timeout for a command execution. The default value (-1) defers to the general command timeout, which is 300 seconds. ( BZ#1928679 ) 4.19. Containers Default container image signature verification is now available Previously, the policy YAML files for the Red Hat Container Registries had to be manually created in the /etc/containers/registries.d/ directory. Now, the registry.access.redhat.com.yaml and registry.redhat.io.yaml files are included in the containers-common package. You can now use the podman image trust command to verify the container image signatures on RHEL. (JIRA:RHELPLAN-75166) The container-tools:rhel8 module has been updated The container-tools:rhel8 module, which contains the Podman, Buildah, Skopeo, and runc tools is now available. This update provides a list of bug fixes and enhancements over the version. (JIRA:RHELPLAN-76515) The containers-common package is now available The containers-common package has been added to the container-tools:rhel8 module. The containers-common package contains common configuration files and documentation for container tools ecosystem, such as Podman, Buildah and Skopeo. (JIRA:RHELPLAN-77542) Native overlay file system support in the kernel is now available The overlay file system support is now available from kernel 5.11. The non-root users will have native overlay performance even when running rootless (as a user). Thus, this enhancement provides better performance to non-root users who wish to use overlayfs without the need for bind mounting. (JIRA:RHELPLAN-77241) A podman container image is now available The registry.redhat.io/rhel8/podman container image, previously available as a Technology Preview, is now fully supported. The registry.redhat.io/rhel8/podman container image is a containerized implementation of the podman package. The podman tool manages containers and images, volumes mounted into those containers, and pods made of groups of containers. (JIRA:RHELPLAN-57941) Universal Base Images are now available on Docker Hub Previously, Universal Base Images were only available from the Red Hat container catalog. Now, Universal Base Images are also available from Docker Hub. For more information, see Red Hat Brings Red Hat Universal Base Image to Docker Hub . (JIRA:RHELPLAN-85064) CNI plugins in Podman are now available CNI plugins are now available to use in Podman rootless mode. The rootless networking commands now work without any other requirement on the system. ( BZ#1934480 ) Podman has been updated to version 3.3.1 The Podman utility has been updated to version 3.3.1. Notable enhancements include: Podman now supports restarting containers created with the --restart option after the system is rebooted. The podman container checkpoint and podman container restore commands now support checkpointing and restoring containers that are in pods and restoring those containers into pods. Further, the podman container restore command now supports the --publish option to change ports forwarded to a container restored from an exported checkpoint. (JIRA:RHELPLAN-87877) The crun OCI runtime is now available The crun OCI runtime is now available for the container-tools:rhel8 module. The crun container runtime supports an annotation that enables the container to access the rootless user's additional groups. This is useful for container operations when volume mounting in a directory where setgid is set, or where the user only has group access. (JIRA:RHELPLAN-75164) The podman UBI image is now available The registry.access.redhat.com/ubi8/podman is now available as a part of UBI. (JIRA:RHELPLAN-77489) The container-tools:rhel8 module has been updated The container-tools:rhel8 module, which contains the Podman, Buildah, Skopeo, and runc tools is now available. This update provides a list of bug fixes and enhancements over the version. For more details, see the RHEA-2022:0352 . ( BZ#2009153 ) The ubi8/nodejs-16 and ubi8/nodejs-16-minimal container images are now fully supported The ubi8/nodejs-16 and ubi8/nodejs-16-minimal container images, previously available as a Technology Preview, are fully supported with the release of the RHBA-2021:5260 advisory. These container images include Node.js 16.13 , which is a Long Term Support (LTS) version. ( BZ#2001020 ) | [
"[[customizations.filesystem]] mountpoint = \"MOUNTPOINT\" size = MINIMUM-PARTITION-SIZE",
"yum install modulemd-tools",
"cipher@SSH = AES-256-CBC+",
"cipher@libssh = -*-CBC",
"-a always, exit -F arch=b32 -S chown, fchown, fchownat, lchown -F auid>=1000 -F auid!=unset -F key=perm_mod",
"-a always, exit -F arch=b32 -S unlink, unlinkat, rename, renameat, rmdir -F auid>=1000 -F auid!=unset -F key=delete",
"-a always, exit -F arch=b32 -S chown, fchown, fchownat, lchown -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccesful-perm-change",
"-a always, exit -F arch=b32 -S unlink, unlinkat, rename, renameat -F auid>=1000 -F auid!=unset -F exit=-EACCES -F auid>=1000 -F auid!=unset -F key=unsuccessful-delete",
"nmcli connection modify enp1s0 ethtool.pause-autoneg no ethtool.pause-rx true ethtool.pause-tx true",
"sudo nmcli c add type ethernet ifname eth1 connection.id eth1 802-3-ethernet.accept-all-mac-addresses true",
"[main] firewall-backend=nftables",
"systemctl reload NetworkManager",
"yum module install nodejs:16",
"yum module install ruby:3.0",
">>> def reformat_ip(address): return '.'.join(part.lstrip('0') if part != '0' else part for part in address.split('.')) >>> reformat_ip('0127.0.0.1') '127.0.0.1'",
"def reformat_ip(address): parts = [] for part in address.split('.'): if part != \"0\": part = part.lstrip('0') parts.append(part) return '.'.join(parts)",
"yum module install nginx:1.20",
"yum install gcc-toolset-11",
"scl enable gcc-toolset-11 tool",
"scl enable gcc-toolset-11 bash",
"podman pull registry.redhat.io/<image_name>",
"podman pull registry.redhat.io/rhel8/grafana",
"podman pull registry.redhat.io/rhel8/pcp",
"*USD ipa pwpolicy-mod --usercheck=True managers*",
"(2021-07-26 18:26:37): [be[testidm.com]] [dp_req_destructor] (0x0400): RID#3 Number of active DP request: 0 (2021-07-26 18:26:37): [be[testidm.com]] [dp_req_reply_std] (0x1000): RID#3 DP Request AccountDomain #3: Returning [Internal Error]: 3,1432158301,GetAccountDomain() not supported (2021-07-26 18:26:37): [be[testidm.com]] [dp_attach_req] (0x0400): RID#4 DP Request Account #4: REQ_TRACE: New request. sssd.nss CID #1 Flags [0x0001]. (2021-07-26 18:26:37): [be[testidm.com]] [dp_attach_req] (0x0400): RID#4 Number of active DP request: 1",
"[sssd] debug_backtrace_enabled = true debug_level=0 [nss] debug_backtrace_enabled = false [domain/idm.example.com] debug_backtrace_enabled = true debug_level=2",
"[kcm] tgt_renewal = true krb5_renew_interval = 60m",
"[kcm] tgt_renewal = true tgt_renewal_inherit = domain-name",
"The _samba_ packages have been upgraded to upstream version 4.14.4, which provides bug fixes and enhancements over the previous version:",
"polkit.addRule(function(action, subject) { if (action.id == \"org.freedesktop.NetworkManager.network-control\" && subject.user == \"gdm\") { return polkit.Result.YES; } return polkit.Result.NOT_HANDLED; });",
"systemctl restart gdm",
"yum install gnome-shell-extension-heads-up-display",
"[org/gnome/shell] enabled-extensions=['[email protected]'] [org/gnome/shell/extensions/heads-up-display] message-heading=\" Security classification title \" message-body=\" Security classification description \"",
"dconf update",
"grub2-editenv - set menu_auto_hide=1",
"grub2-mkconfig -o /etc/grub2-efi.cfg",
"grub2-mkconfig -o /etc/grub2.cfg",
"cp /usr/share/accountsservice/user-templates/standard /etc/accountsservice/user-templates/standard",
"cp /usr/share/accountsservice/user-templates/standard /var/lib/AccountsService/users/ user-name",
"i915.force_probe= PCI_ID",
"nvidia-drm.modeset=1",
"timesync_chrony_custom_settings: - \"logdir /var/log/chrony\" - \"log measurements statistics tracking\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.5_release_notes/new-features |
8.2. Moving Resources Due to Failure | 8.2. Moving Resources Due to Failure When you create a resource, you can configure the resource so that it will move to a new node after a defined number of failures by setting the migration-threshold option for that resource. Once the threshold has been reached, this node will no longer be allowed to run the failed resource until: The administrator manually resets the resource's failcount using the pcs resource failcount command. The resource's failure-timeout value is reached. The value of migration-threshold is set to INFINITY by default. INFINITY is defined internally as a very large but finite number. A value of 0 disables the migration-threshold feature. Note Setting a migration-threshold for a resource is not the same as configuring a resource for migration, in which the resource moves to another location without loss of state. The following example adds a migration threshold of 10 to the resource named dummy_resource , which indicates that the resource will move to a new node after 10 failures. You can add a migration threshold to the defaults for the whole cluster with the following command. To determine the resource's current failure status and limits, use the pcs resource failcount command. There are two exceptions to the migration threshold concept; they occur when a resource either fails to start or fails to stop. If the cluster property start-failure-is-fatal is set to true (which is the default), start failures cause the failcount to be set to INFINITY and thus always cause the resource to move immediately. For information on the start-failure-is-fatal option, see Table 12.1, "Cluster Properties" . Stop failures are slightly different and crucial. If a resource fails to stop and STONITH is enabled, then the cluster will fence the node in order to be able to start the resource elsewhere. If STONITH is not enabled, then the cluster has no way to continue and will not try to start the resource elsewhere, but will try to stop it again after the failure timeout. | [
"pcs resource meta dummy_resource migration-threshold=10",
"pcs resource defaults migration-threshold=10"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-failure_migration-HAAR |
Chapter 2. The Cargo build tool | Chapter 2. The Cargo build tool Cargo is a build tool and front end for the Rust compiler rustc as well as a package and dependency manager. It allows Rust projects to declare dependencies with specific version requirements, resolves the full dependency graph, downloads packages, and builds as well as tests your entire project. Rust Toolset is distributed with Cargo 1.71.1. 2.1. The Cargo directory structure and file placements The Cargo build tool uses set conventions for defining the directory structure and file placement within a Cargo package. Running the cargo new command generates the package directory structure and templates for both a manifest and a project file. By default, it also initializes a new Git repository in the package root directory. For a binary program, Cargo creates a directory project_name containing a text file named Cargo.toml and a subdirectory src containing a text file named main.rs . Additional resources For more information on the Cargo directory structure, see The Cargo Book - Package Layout . For in-depth information about Rust code organization, see The Rust Programming Language - Managing Growing Projects with Packages, Crates, and Modules . 2.2. Creating a Rust project Create a new Rust project that is set up according to the Cargo conventions. For more information on Cargo conventions, see Cargo directory structure and file placements . Procedure Create a Rust project by running the following command: On Red Hat Enterprise Linux 8: Replace < project_name > with your project name. On Red Hat Enterprise Linux 9: Replace < project_name > with your project name. Note To edit the project code, edit the main executable file main.rs and add new source files to the src subdirectory. Additional resources For information on configuring your project and adding dependencies, see Configuring Rust project dependencies . 2.3. Creating a Rust library project Complete the following steps to create a Rust library project using the Cargo build tool. Procedure To create a Rust library project, run the following command: On Red Hat Enterprise Linux 8: Replace < project_name > with the name of your Rust project. On Red Hat Enterprise Linux 9: Replace < project_name > with the name of your Rust project. Note To edit the project code, edit the source file, lib.rs , in the src subdirectory. Additional resources Managing Growing Projects with Packages, Crates, and Modules 2.4. Building a Rust project Build your Rust project using the Cargo build tool. Cargo resolves all dependencies of your project, downloads missing dependencies, and compiles it using the rustc compiler. By default, projects are built and compiled in debug mode. For information on compiling your project in release mode, see Building a Rust project in release mode . Prerequisites An existing Rust project. For information on how to create a Rust project, see Creating a Rust project . Procedure To build a Rust project managed by Cargo, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: To verify that your Rust program can be built when you do not need to build an executable file, run: 2.5. Building a Rust project in release mode Build your Rust project in release mode using the Cargo build tool. Release mode is optimizing your source code and can therefore increase compilation time while ensuring that the compiled binary will run faster. Use this mode to produce optimized artifacts suitable for release and production. Cargo resolves all dependencies of your project, downloads missing dependencies, and compiles it using the rustc compiler. For information on compiling your project in debug mode, see Building a Rust project . Prerequisites An existing Rust project. For information on how to create a Rust project, see Creating a Rust project . Procedure To build the project in release mode, run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: To verify that your Rust program can be build when you do not need to build an executable file, run: 2.6. Running a Rust program Run your Rust project using the Cargo build tool. Cargo first rebuilds your project and then runs the resulting executable file. If used during development, the cargo run command correctly resolves the output path independently of the build mode. Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Procedure To run a Rust program managed as a project by Cargo, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Note If your program has not been built yet, Cargo builds your program before running it. 2.7. Testing a Rust project Test your Rust program using the Cargo build tool. Cargo first rebuilds your project and then runs the tests found in the project. Note that you can only test functions that are free, monomorphic, and take no arguments. The function return type must be either () or Result<(), E> where E: Error . By default, Rust projects are tested in debug mode. For information on testing your project in release mode, see Testing a Rust project in release mode . Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Procedure Add the test attribute #[test] in front of your function. To run tests for a Rust project managed by Cargo, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Additional resources For more information on performing tests in your Rust project, see The Rust Reference - Testing attributes . 2.8. Testing a Rust project in release mode Test your Rust program in release mode using the Cargo build tool. Release mode is optimizing your source code and can therefore increase compilation time while ensuring that the compiled binary will run faster. Use this mode to produce optimized artifacts suitable for release and production. Cargo first rebuilds your project and then runs the tests found in the project. Note that you can only test functions that are free, monomorphic, and take no arguments. The function return type must be either () or Result<(), E> where E: Error . For information on testing your project in debug mode, see Testing a Rust project . Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Procedure Add the test attribute #[test] in front of your function. To run tests for a Rust project managed by Cargo in release mode, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Additional resources For more information on performing tests in your Rust project, see The Rust Reference - Testing attributes . 2.9. Configuring Rust project dependencies Configure the dependencies of your Rust project using the Cargo build tool. To specify dependencies for a project managed by Cargo, edit the file Cargo.toml in the project directory and rebuild your project. Cargo downloads the Rust code packages and their dependencies, stores them locally, builds all of the project source code including the dependency code packages, and runs the resulting executable. Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Procedure In your project directory, open the file Cargo.toml . Move to the section labelled [dependencies] . Each dependency is listed on a new line in the following format: Rust code packages are called crates. Edit your dependencies. Rebuild your project by running: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Run your project by using the following command: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: Additional resources For more information on configuring Rust dependencies, see The Cargo Book - Specifying Dependencies . 2.10. Building documentation for a Rust project Use the Cargo tool to generate documentation from comments in your source code that are marked for extraction. Note that documentation comments are extracted only for public functions, variables, and members. Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Configured dependencies. For more information on configuring dependencies, see Configuring Rust project dependencies . Procedure To mark comments for extraction, use three slashes /// and place your comment in the beginning of the line it is documenting. Cargo supports the Markdown language for your comments. To build project documentation using Cargo, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: The generated documentation is located in the .target/doc directory. Additional resources For more information on building documentation using Cargo, see The Rust Programming Language - Making Useful Documentation Comments . 2.11. Compiling code into a WebAssembly binary with Rust on Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9 Beta Complete the following steps to install the WebAssembly standard library. Prerequisites Rust Toolset is installed. For more information, see Installing Rust Toolset . Procedure To install the WebAssembly standard library, run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: To use WebAssembly with Cargo, run: On Red Hat Enterprise Linux 8: Replace < command > with the Cargo command you want to run. On Red Hat Enterprise Linux 9: Replace < command > with the Cargo command you want to run. Additional resources For more information on WebAssembly, see the official Rust and WebAssembly documentation or the Rust and WebAssembly book. 2.12. Vendoring Rust project dependencies Create a local copy of the dependencies of your Rust project for offline redistribution and reuse using the Cargo build tool. This procedure is called vendoring project dependencies. The vendored dependencies including Rust code packages for building your project on a Windows operating system are located in the vendor directory. Vendored dependencies can be used by Cargo without any connection to the internet. Prerequisites A built Rust project. For information on how to build a Rust project, see Building a Rust project . Configured dependencies. For more information on configuring dependencies, see Configuring Rust project dependencies . Procedure To vendor your Rust project with dependencies using Cargo, run in the project directory: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: 2.13. Additional resources For more information on Cargo, see the Official Cargo Guide . To display the manual page included in Rust Toolset, run: For Red Hat Enterprise Linux 8: For Red Hat Enterprise Linux 9: | [
"cargo new --bin < project_name >",
"cargo new --bin < project_name >",
"cargo new --lib < project_name >",
"cargo new --lib < project_name >",
"cargo build",
"cargo build",
"cargo check",
"cargo build --release",
"cargo build --release",
"cargo check",
"cargo run",
"cargo run",
"cargo test",
"cargo test",
"cargo test --release",
"cargo test --release",
"crate_name = version",
"cargo build",
"cargo build",
"cargo run",
"cargo run",
"cargo doc --no-deps",
"cargo doc --no-deps",
"yum install rust-std-static-wasm32-unknown-unknown",
"dnf install rust-std-static-wasm32-unknown-unknown",
"cargo < command > --target wasm32-unknown-unknown",
"cargo < command > --target wasm32-unknown-unknown",
"cargo vendor",
"cargo vendor",
"man cargo",
"man cargo"
] | https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_rust_1.71.1_toolset/assembly_the-cargo-build-tool |
Chapter 9. Postinstallation storage configuration | Chapter 9. Postinstallation storage configuration After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including storage configuration. By default, containers operate by using the ephemeral storage or transient local storage. The ephemeral storage has a lifetime limitation. To store the data for a long time, you must configure persistent storage. You can configure storage by using one of the following methods: Dynamic provisioning You can dynamically provision storage on-demand by defining and creating storage classes that control different levels of storage, including storage access. Static provisioning You can use Kubernetes persistent volumes to make existing storage available to a cluster. Static provisioning can support various device configurations and mount options. 9.1. Dynamic provisioning Dynamic Provisioning allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage. See Dynamic provisioning . 9.2. Recommended configurable storage technology The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application. Table 9.1. Recommended and configurable storage technology Storage type Block File Object 1 ReadOnlyMany 2 ReadWriteMany 3 Prometheus is the underlying technology used for metrics. 4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk. 5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics. 6 For logging, review the recommended storage solution in Configuring persistent storage for the log store section. Using NFS storage as a persistent volume or through NAS, such as Gluster, can corrupt the data. Hence, NFS is not supported for Elasticsearch storage and LokiStack log store in OpenShift Container Platform Logging. You must use one persistent volume type per log store. 7 Object storage is not consumed through OpenShift Container Platform's PVs or PVCs. Apps must integrate with the object storage REST API. ROX 1 Yes 4 Yes 4 Yes RWX 2 No Yes Yes Registry Configurable Configurable Recommended Scaled registry Not configurable Configurable Recommended Metrics 3 Recommended Configurable 5 Not configurable Elasticsearch Logging Recommended Configurable 6 Not supported 6 Loki Logging Not configurable Not configurable Recommended Apps Recommended Recommended Not configurable 7 Note A scaled registry is an OpenShift image registry where two or more pod replicas are running. 9.2.1. Specific application storage recommendations Important Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 9.2.1.1. Registry In a non-scaled/high-availability (HA) OpenShift image registry cluster deployment: The storage technology does not have to support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage followed by block storage. File storage is not recommended for OpenShift image registry cluster deployment with production workloads. 9.2.1.2. Scaled registry In a scaled/HA OpenShift image registry cluster deployment: The storage technology must support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage. Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported. Object storage should be S3 or Swift compliant. For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage. Block storage is not configurable. The use of Network File System (NFS) storage with OpenShift Container Platform is supported. However, the use of NFS storage with a scaled registry can cause known issues. For more information, see the Red Hat Knowledgebase solution, Is NFS supported for OpenShift cluster internal components in Production? . 9.2.1.3. Metrics In an OpenShift Container Platform hosted metrics cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. Important It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads. 9.2.1.4. Logging In an OpenShift Container Platform hosted logging cluster deployment: Loki Operator: The preferred storage technology is S3 compatible Object storage. Block storage is not configurable. OpenShift Elasticsearch Operator: The preferred storage technology is block storage. Object storage is not supported. Note As of logging version 5.4.3 the OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. 9.2.1.5. Applications Application use cases vary from application to application, as described in the following examples: Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster. Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer. 9.2.2. Other specific application storage recommendations Important It is not recommended to use RAID configurations on Write intensive workloads, such as etcd . If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads. Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases. Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage. The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices . 9.3. Deploy Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation . Important OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide . If you are looking for Red Hat OpenShift Data Foundation information about... See the following Red Hat OpenShift Data Foundation documentation: What's new, known issues, notable bug fixes, and Technology Previews OpenShift Data Foundation 4.12 Release Notes Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations Planning your OpenShift Data Foundation 4.12 deployment Instructions on deploying OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster Deploying OpenShift Data Foundation 4.12 in external mode Instructions on deploying OpenShift Data Foundation to local storage on bare metal infrastructure Deploying OpenShift Data Foundation 4.12 using bare metal infrastructure Instructions on deploying OpenShift Data Foundation on Red Hat OpenShift Container Platform VMware vSphere clusters Deploying OpenShift Data Foundation 4.12 on VMware vSphere Instructions on deploying OpenShift Data Foundation using Amazon Web Services for local or cloud storage Deploying OpenShift Data Foundation 4.12 using Amazon Web Services Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Google Cloud clusters Deploying and managing OpenShift Data Foundation 4.12 using Google Cloud Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Azure clusters Deploying and managing OpenShift Data Foundation 4.12 using Microsoft Azure Instructions on deploying OpenShift Data Foundation to use local storage on IBM Power(R) infrastructure Deploying OpenShift Data Foundation on IBM Power(R) Instructions on deploying OpenShift Data Foundation to use local storage on IBM Z(R) infrastructure Deploying OpenShift Data Foundation on IBM Z(R) infrastructure Allocating storage to core services and hosted applications in Red Hat OpenShift Data Foundation, including snapshot and clone Managing and allocating resources Managing storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa) Managing hybrid and multicloud resources Safely replacing storage devices for Red Hat OpenShift Data Foundation Replacing devices Safely replacing a node in a Red Hat OpenShift Data Foundation cluster Replacing nodes Scaling operations in Red Hat OpenShift Data Foundation Scaling storage Monitoring a Red Hat OpenShift Data Foundation 4.12 cluster Monitoring Red Hat OpenShift Data Foundation 4.12 Resolve issues encountered during operations Troubleshooting OpenShift Data Foundation 4.12 Migrating your OpenShift Container Platform cluster from version 3 to version 4 Migration | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/postinstallation_configuration/post-install-storage-configuration |
Chapter 37. MetadataTemplate schema reference | Chapter 37. MetadataTemplate schema reference Used in: BuildConfigTemplate , DeploymentTemplate , InternalServiceTemplate , PodDisruptionBudgetTemplate , PodTemplate , ResourceTemplate , StatefulSetTemplate Full list of MetadataTemplate schema properties Labels and Annotations are used to identify and organize resources, and are configured in the metadata property. For example: # ... template: pod: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2 # ... The labels and annotations fields can contain any labels or annotations that do not contain the reserved string strimzi.io . Labels and annotations containing strimzi.io are used internally by AMQ Streams and cannot be configured. 37.1. MetadataTemplate schema properties Property Description labels Labels added to the OpenShift resource. map annotations Annotations added to the OpenShift resource. map | [
"template: pod: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-metadatatemplate-reference |
About | About Red Hat Advanced Cluster Security for Kubernetes 4.5 Welcome to Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/about/index |
Chapter 5. Configuration options | Chapter 5. Configuration options This chapter lists the available configuration options for AMQ Core Protocol JMS. JMS configuration options are set as query parameters on the connection URI. For more information, see Section 4.3, "Connection URIs" . 5.1. General options user The user name the client uses to authenticate the connection. password The password the client uses to authenticate the connection. clientID The client ID that the client applies to the connection. groupID The group ID that the client applies to all produced messages. autoGroup If enabled, generate a random group ID and apply it to all produced messages. preAcknowledge If enabled, acknowledge messages as soon as they are sent and before delivery is complete. This provides "at most once" delivery. It is disabled by default. blockOnDurableSend If enabled, when sending non-transacted durable messages, block until the remote peer acknowledges receipt. It is enabled by default. blockOnNonDurableSend If enabled, when sending non-transacted non-durable messages, block until the remote peer acknowledges receipt. It is disabled by default. blockOnAcknowledge If enabled, when acknowledging non-transacted received messages, block until the remote peer confirms acknowledgment. It is disabled by default. callTimeout The time in milliseconds to wait for a blocking call to complete. The default is 30000 (30 seconds). callFailoverTimeout When the client is in the process of failing over, the time in millisconds to wait before starting a blocking call. The default is 30000 (30 seconds). ackBatchSize The number of bytes a client can receive and acknowledge before the acknowledgement is sent to the broker. The default is 1048576 (1 MiB). dupsOKBatchSize When using the DUPS_OK_ACKNOWLEDGE acknowledgment mode, the size in bytes of acknowledgment batches. The default is 1048576 (1 MiB). transactionBatchSize When receiving messsages in a transaction, the size in bytes of acknowledgment batches. The default is 1048576 (1 MiB). cacheDestinations If enabled, cache destination lookups. It is disabled by default. 5.2. TCP options tcpNoDelay If enabled, do not delay and buffer TCP sends. It is enabled by default. tcpSendBufferSize The send buffer size in bytes. The default is 32768 (32 KiB). tcpReceiveBufferSize The receive buffer size in bytes. The default is 32768 (32 KiB). writeBufferLowWaterMark The limit in bytes below which the write buffer becomes writable. The default is 32768 (32 KiB). writeBufferHighWaterMark The limit in bytes above which the write buffer becomes non-writable. The default is 131072 (128 KiB). 5.3. SSL/TLS options sslEnabled If enabled, use SSL/TLS to authenticate and encrypt connections. It is disabled by default. keyStorePath The path to the SSL/TLS key store. A key store is required for mutual SSL/TLS authentication. If unset, the value of the javax.net.ssl.keyStore system property is used. keyStorePassword The password for the SSL/TLS key store. If unset, the value of the javax.net.ssl.keyStorePassword system property is used. trustStorePath The path to the SSL/TLS trust store. If unset, the value of the javax.net.ssl.trustStore system property is used. trustStorePassword The password for the SSL/TLS trust store. If unset, the value of the javax.net.ssl.trustStorePassword system property is used. trustAll If enabled, trust the provided server certificate implicitly, regardless of any configured trust store. It is disabled by default. verifyHost If enabled, verify that the connection hostname matches the provided server certificate. It is disabled by default. enabledCipherSuites A comma-separated list of cipher suites to enable. If unset, the JVM default ciphers are used. enabledProtocols A comma-separated list of SSL/TLS protocols to enable. If unset, the JVM default protocols are used. 5.4. Failover options initialConnectAttempts The number of reconnect attempts allowed before the first successful connection and before the client discovers the broker topology. The default is 0, meaning only one attempt is allowed. failoverOnInitialConnection If enabled, attempt to connect to the backup server if the initial connection fails. It is disabled by default. reconnnectAttempts The number of reconnect attempts allowed before reporting the connection as failed. The default is -1, meaning no limit. retryInterval The time in milliseconds between reconnect attempts. The default is 2000 (2 seconds). retryIntervalMultiplier The multiplier used to grow the retry interval. The default is 1.0, meaning equal intervals. maxRetryInterval The maximum time in milliseconds between reconnect attempts. The default is 2000 (2 seconds). ha If enabled, track changes in the topology of HA brokers. The host and port from the URI is used only for the initial connection. After initial connection, the client receives the current failover endpoints and any updates resulting from topology changes. It is disabled by default. connectionTTL The time in milliseconds after which the connection is failed if the server sends no ping packets. The default is 60000 (1 minute). -1 disables the timeout. confirmationWindowSize The size in bytes of the command replay buffer. This is used for automatic session re-attachment on reconnect. The default is -1, meaning no automatic re-attachment. clientFailureCheckPeriod The time in milliseconds between checks for dead connections. The default is 30000 (30 seconds). -1 disables checking. 5.5. Flow control options consumerWindowSize The size in bytes of the per-consumer message prefetch buffer. The default is 1048576 (1 MiB). -1 means no limit. 0 disables prefetching. consumerMaxRate The maximum number of messages to consume per second. The default is -1, meaning no limit. producerWindowSize The requested size in bytes for credit to produce more messages. This limits the total amount of data in flight at one time. The default is 1048576 (1 MiB). -1 means no limit. producerMaxRate The maximum number of messages to produce per second. The default is -1, meaning no limit. 5.6. Load balancing options useTopologyForLoadBalancing If enabled, use the cluster topology for connection load balancing. It is enabled by default. connectionLoadBalancingPolicyClassName The class name of the connection load balancing policy. The default is org.apache.activemq.artemis.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy . 5.7. Large message options The client can enable large message support by setting a value for the property minLargeMessageSize . Any message larger than minLargeMessageSize is considered a large message. minLargeMessageSize The minimum size in bytes at which a message is treated as a large message. The default is 102400 (100 KiB). compressLargeMessages If enabled, compress large messages, as defined by minLargeMessageSize . It is disabled by default. Note If the compressed size of a large message is less than the value of minLargeMessageSize , the message is sent as a regular message. Therefore, it is not written to the broker's large-message data directory. 5.8. Threading options useGlobalPools If enabled, use one pool of threads for all ConnectionFactory instances. Otherwise, use a separate pool for each instance. It is enabled by default. threadPoolMaxSize The maximum number of threads in the general thread pool. The default is -1, meaning no limit. scheduledThreadPoolMaxSize The maximum number of threads in the thread pool for scheduled operations. The default is 5. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_core_protocol_jms_client/configuration_options |
C.2. Sibling Start Ordering and Resource Child Ordering | C.2. Sibling Start Ordering and Resource Child Ordering The Service resource determines the start order and the stop order of a child resource according to whether it designates a child-type attribute for a child resource as follows: Designates child-type attribute ( typed child resource) - If the Service resource designates a child-type attribute for a child resource, the child resource is typed . The child-type attribute explicitly determines the start and the stop order of the child resource. Does not designate child-type attribute ( non-typed child resource) - If the Service resource does not designate a child-type attribute for a child resource, the child resource is non-typed . The Service resource does not explicitly control the starting order and stopping order of a non-typed child resource. However, a non-typed child resource is started and stopped according to its order in /etc/cluster/cluster.conf . In addition, non-typed child resources are started after all typed child resources have started and are stopped before any typed child resources have stopped. Note The only resource to implement defined child resource type ordering is the Service resource. For more information about typed child resource start and stop ordering, see Section C.2.1, "Typed Child Resource Start and Stop Ordering" . For more information about non-typed child resource start and stop ordering, see Section C.2.2, "Non-typed Child Resource Start and Stop Ordering" . C.2.1. Typed Child Resource Start and Stop Ordering For a typed child resource, the type attribute for the child resource defines the start order and the stop order of each resource type with a number that can range from 1 to 100; one value for start, and one value for stop. The lower the number, the earlier a resource type starts or stops. For example, Table C.1, "Child Resource Type Start and Stop Order" shows the start and stop values for each resource type; Example C.2, "Resource Start and Stop Values: Excerpt from Service Resource Agent, service.sh " shows the start and stop values as they appear in the Service resource agent, service.sh . For the Service resource, all LVM children are started first, followed by all File System children, followed by all Script children, and so forth. Table C.1. Child Resource Type Start and Stop Order Resource Child Type Start-order Value Stop-order Value LVM lvm 1 9 File System fs 2 8 GFS2 File System clusterfs 3 7 NFS Mount netfs 4 6 NFS Export nfsexport 5 5 NFS Client nfsclient 6 4 IP Address ip 7 2 Samba smb 8 3 Script script 9 1 Example C.2. Resource Start and Stop Values: Excerpt from Service Resource Agent, service.sh Ordering within a resource type is preserved as it exists in the cluster configuration file, /etc/cluster/cluster.conf . For example, consider the starting order and stopping order of the typed child resources in Example C.3, "Ordering Within a Resource Type" . Example C.3. Ordering Within a Resource Type Typed Child Resource Starting Order In Example C.3, "Ordering Within a Resource Type" , the resources are started in the following order: lvm:1 - This is an LVM resource. All LVM resources are started first. lvm:1 ( <lvm name="1" .../> ) is the first LVM resource started among LVM resources because it is the first LVM resource listed in the Service foo portion of /etc/cluster/cluster.conf . lvm:2 - This is an LVM resource. All LVM resources are started first. lvm:2 ( <lvm name="2" .../> ) is started after lvm:1 because it is listed after lvm:1 in the Service foo portion of /etc/cluster/cluster.conf . fs:1 - This is a File System resource. If there were other File System resources in Service foo , they would start in the order listed in the Service foo portion of /etc/cluster/cluster.conf . ip:10.1.1.1 - This is an IP Address resource. If there were other IP Address resources in Service foo , they would start in the order listed in the Service foo portion of /etc/cluster/cluster.conf . script:1 - This is a Script resource. If there were other Script resources in Service foo , they would start in the order listed in the Service foo portion of /etc/cluster/cluster.conf . Typed Child Resource Stopping Order In Example C.3, "Ordering Within a Resource Type" , the resources are stopped in the following order: script:1 - This is a Script resource. If there were other Script resources in Service foo , they would stop in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf . ip:10.1.1.1 - This is an IP Address resource. If there were other IP Address resources in Service foo , they would stop in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf . fs:1 - This is a File System resource. If there were other File System resources in Service foo , they would stop in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf . lvm:2 - This is an LVM resource. All LVM resources are stopped last. lvm:2 ( <lvm name="2" .../> ) is stopped before lvm:1 ; resources within a group of a resource type are stopped in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf . lvm:1 - This is an LVM resource. All LVM resources are stopped last. lvm:1 ( <lvm name="1" .../> ) is stopped after lvm:2 ; resources within a group of a resource type are stopped in the reverse order listed in the Service foo portion of /etc/cluster/cluster.conf . | [
"<special tag=\"rgmanager\"> <attributes root=\"1\" maxinstances=\"1\"/> <child type=\"lvm\" start=\"1\" stop=\"9\"/> <child type=\"fs\" start=\"2\" stop=\"8\"/> <child type=\"clusterfs\" start=\"3\" stop=\"7\"/> <child type=\"netfs\" start=\"4\" stop=\"6\"/> <child type=\"nfsexport\" start=\"5\" stop=\"5\"/> <child type=\"nfsclient\" start=\"6\" stop=\"4\"/> <child type=\"ip\" start=\"7\" stop=\"2\"/> <child type=\"smb\" start=\"8\" stop=\"3\"/> <child type=\"script\" start=\"9\" stop=\"1\"/> </special>",
"<service name=\"foo\"> <script name=\"1\" .../> <lvm name=\"1\" .../> <ip address=\"10.1.1.1\" .../> <fs name=\"1\" .../> <lvm name=\"2\" .../> </service>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-clust-rsc-sibling-starting-order-ca |
5.6.2. Removing a Failover Domain | 5.6.2. Removing a Failover Domain To remove a failover domain, follow these steps: At the left frame of the the Cluster Configuration Tool , click the failover domain that you want to delete (listed under Failover Domains ). At the bottom of the right frame (labeled Properties ), click the Delete Failover Domain button. Clicking the Delete Failover Domain button causes a warning dialog box do be displayed asking if you want to remove the failover domain. Confirm that the failover domain identified in the warning dialog box is the one you want to delete and click Yes . Clicking Yes causes the failover domain to be removed from the list of failover domains under Failover Domains in the left frame of the Cluster Configuration Tool . At the Cluster Configuration Tool , perform one of the following actions depending on whether the configuration is for a new cluster or for one that is operational and running: New cluster - If this is a new cluster, choose File => Save to save the changes to the cluster configuration. Running cluster - If this cluster is operational and running, and you want to propagate the change immediately, click the Send to Cluster button. Clicking Send to Cluster automatically saves the configuration change. If you do not want to propagate the change immediately, choose File => Save to save the changes to the cluster configuration. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s2-config-remove-failoverdm-CA |
Chapter 19. Configuring security groups | Chapter 19. Configuring security groups Security groups are sets of IP filter rules that control network and protocol access to and from instances, such as ICMP to allow you to ping an instance, and SSH to allow you to connect to an instance. The security group rules are applied to all instances within a project. All projects have a default security group called default , which is used when you do not specify a security group for your instances. By default, the default security group allows all outgoing traffic and denies all incoming traffic from any source other than instances in the same security group. You can either add rules to the default security group or create a new security group for your project. You can apply one or more security groups to an instance during instance creation. To apply a security group to a running instance, apply the security group to a port attached to the instance. When you create a security group, you can choose stateful or stateless in ML2/OVN deployments. Note Stateless security groups are not supported in ML2/OVS deployments. Security groups are stateful by default and in most cases stateful security groups provide better control with less administrative overhead. A stateless security group can provide significant performance benefits, because it bypasses connection tracking in the underlying firewall. But stateless security groups require more security group rules than stateful security groups. Stateless security groups also offer less granularity in some cases. Stateless security group advantages Stateless security groups can be faster than stateful security groups Stateless security groups are the only viable security group option in applications that offload OpenFlow actions to hardware. Stateless security group disadvantages Stateless security group rules do not automatically allow returning traffic. For example, if you create a rule to allow outgoing TCP traffic from a port that is in a stateless security group, you must also create a rule that allows incoming replies. Stateful security groups automatically allow the incoming replies. Control over those incoming replies may not be as granular as the control provided by stateful security groups. In general, use the default stateful security group type unless your application is highly sensitive to performance or uses hardware offloading of OpenFlow actions. Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port . 19.1. Creating a security group You can create a new security group to apply to instances and ports within a project. Procedure Optional: To ensure the security group you need does not already exist, review the available security groups and their rules: Replace <sec_group> with the name or ID of the security group that you retrieved from the list of available security groups. . Create your security group: Optional: Include the --stateless option to create a stateless security group. Security groups are stateful by default. Note Only ML2/OVN deployments support stateless security groups. Add rules to your security group: Replace <protocol> with the name of the protocol you want to allow to communicate with your instances. Optional: Replace <port-range> with the destination port or port range to open for the protocol. Required for IP protocols TCP, UDP, and SCTP. Set to -1 to allow all ports for the specified protocol. Separate port range values with a colon. Optional: You can allow access only from specified IP addresses by using --remote-ip to specify the remote IP address block, or --remote-group to specify that the rule only applies to packets from interfaces that are a member of the remote group. If using --remote-ip , replace <ip-address> with the remote IP address block. You can use CIDR notation. If using --remote-group , replace <group> with the name or ID of the existing security group. If neither option is specified, then access is allowed to all addresses, as the remote IP access range defaults (IPv4 default: 0.0.0.0/0 ; IPv6 default: ::/0 ). Specify the direction of network traffic the protocol rule applies to, either incoming ( ingress ) or outgoing ( egress ). If not specified, defaults to ingress . Note If you created a stateless security group, and you created a rule to allow outgoing TCP traffic from a port that is in the stateless security group, you must also create a rule that allows incoming replies. Repeat step 3 until you have created rules for all the protocols that you want to allow to access your instances. The following example creates a rule to allow SSH connections to instances in the security group mySecGroup : 19.2. Updating security group rules You can update the rules of any security group that you have access to. Procedure Retrieve the name or ID of the security group that you want to update the rules for: Determine the rules that you need to apply to the security group. Add rules to your security group: Replace <protocol> with the name of the protocol you want to allow to communicate with your instances. Optional: Replace <port-range> with the destination port or port range to open for the protocol. Required for IP protocols TCP, UDP, and SCTP. Set to -1 to allow all ports for the specified protocol.Separate port range values with a colon. Optional: You can allow access only from specified IP addresses by using --remote-ip to specify the remote IP address block, or --remote-group to specify that the rule only applies to packets from interfaces that are a member of the remote group. If using --remote-ip , replace <ip-address> with the remote IP address block. You can use CIDR notation. If using --remote-group , replace <group> with the name or ID of the existing security group. If neither option is specified, then access is allowed to all addresses, as the remote IP access range defaults (IPv4 default: 0.0.0.0/0 ; IPv6 default: ::/0 ). Specify the direction of network traffic the protocol rule applies to, either incoming ( ingress ) or outgoing ( egress ). If not specified, defaults to ingress . Replace <group_name> with the name or ID of the security group that you want to apply the rule to. Repeat step 3 until you have created rules for all the protocols that you want to allow to access your instances. The following example creates a rule to allow SSH connections to instances in the security group mySecGroup : 19.3. Deleting security group rules You can delete rules from a security group. Procedure Identify the security group that the rules are applied to: Retrieve IDs of the rules associated with the security group: Delete the rule or rules: Replace <rule> with the ID of the rule to delete. You can delete more than one rule at a time by specifying a space-delimited list of the IDs of the rules to delete. 19.4. Deleting a security group You can delete security groups that are not associated with any ports. Procedure Retrieve the name or ID of the security group that you want to delete: Retrieve a list of the available ports: Check each port for an associated security group: If the security group you want to delete is associated with any of the ports, then you must first remove the security group from the port. For more information, see Removing a security group from a port . Delete the security group: Replace <group> with the ID of the group that you want to delete. You can delete more than one group at a time by specifying a space-delimited list of the IDs of the groups to delete. 19.5. Configuring shared security groups When you want one or more Red Hat OpenStack Platform (RHOSP) projects to be able to share data, you can use the RHOSP Networking service (neutron) RBAC policy feature to share a security group. You create security groups and Networking service role-based access control (RBAC) policies using the OpenStack Client. You can apply a security group directly to an instance during instance creation, or to a port on the running instance. Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port . Prerequisites You have at least two RHOSP projects that you want to share. In one of the projects, the current project , you have created a security group that you want to share with another project, the target project . In this example, the ping_ssh security group is created: Example Procedure Log in to the overcloud for the current project that contains the security group. Obtain the name or ID of the target project. Obtain the name or ID of the security group that you want to share between RHOSP projects. Using the identifiers from the steps, create an RBAC policy using the openstack network rbac create command. In this example, the ID of the target project is 32016615de5d43bb88de99e7f2e26a1e . The ID of the security group is 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24 : Example --target-project specifies the project that requires access to the security group. Tip You can share data between all projects by using the --target-all-projects argument instead of --target-project <target-project> . By default, only the admin user has this privilege. --action access_as_shared specifies what the project is allowed to do. --type indicates that the target object is a security group. 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24 is the ID of the particular security group which is being granted access to. The target project is able to access the security group when running the OpenStack Client security group commands, in addition to being able to bind to its ports. No other users (other than administrators and the owner) are able to access the security group. Tip To remove access for the target project, delete the RBAC policy that allows it using the openstack network rbac delete command. Additional resources Creating a security group in the Creating and managing instances guide security group create in the Command line interface reference network rbac create in the Command line interface reference | [
"openstack security group list openstack security group rule list <sec_group>",
"openstack security group create [--stateless] mySecGroup",
"openstack security group rule create --protocol <protocol> [--dst-port <port-range>] [--remote-ip <ip-address> | --remote-group <group>] [--ingress | --egress] mySecGroup",
"openstack security group rule create --protocol tcp --dst-port 22 mySecGroup",
"openstack security group list",
"openstack security group rule create --protocol <protocol> [--dst-port <port-range>] [--remote-ip <ip-address> | --remote-group <group>] [--ingress | --egress] <group_name>",
"openstack security group rule create --protocol tcp --dst-port 22 mySecGroup",
"openstack security group list",
"openstack security group show <sec-group>",
"openstack security group rule delete <rule> [<rule> ...]",
"openstack security group list",
"openstack port list",
"openstack port show <port-uuid> -c security_group_ids",
"openstack security group delete <group> [<group> ...]",
"openstack security group create ping_ssh",
"openstack project list",
"openstack security group list",
"openstack network rbac create --target-project 32016615de5d43bb88de99e7f2e26a1e --action access_as_shared --type security_group 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_red_hat_openstack_platform_networking/security-groups-configuring_rhosp-network |
Security APIs | Security APIs OpenShift Container Platform 4.14 Reference guide for security APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/security_apis/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/using_red_hat_software_collections_container_images/making-open-source-more-inclusive |
Chapter 11. Expanding the cluster | Chapter 11. Expanding the cluster You can expand a cluster installed with the Assisted Installer by adding hosts using the user interface or the API. Additional resources API connectivity failure when adding nodes to a cluster Configuring multi-architecture compute machines on an OpenShift cluster 11.1. Checking for multi-architecture support You must check that your cluster can support multiple architectures before you add a node with a different architecture. Procedure Log in to the cluster using the CLI. Check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o json | jq .metadata.metadata Verification If you see the following output, your cluster supports multiple architectures: { "release.openshift.io/architecture": "multi" } 11.2. Installing a multi-architecture cluster A cluster with an x86_64 control plane can support worker nodes that have two different CPU architectures. Mixed-architecture clusters combine the strengths of each architecture and support a variety of workloads. For example, you can add arm64, IBM Power(R), or IBM Z(R) worker nodes to an existing OpenShift Container Platform cluster with an x86_64. The main steps of the installation are as follows: Create and register a multi-architecture cluster. Create an x86_64 infrastructure environment, download the ISO discovery image for x86_64, and add the control plane. The control plane must have the x86_64 architecture. Create an arm64, IBM Power(R), or IBM Z(R) infrastructure environment, download the ISO discovery images for arm64, IBM Power(R), or IBM Z(R), and add the worker nodes. Supported platforms The following table lists the platforms that support a mixed-architecture cluster for each OpenShift Container Platform version. Use the appropriate platforms for the version you are installing. OpenShift Container Platform version Supported platforms Day 1 control plane architecture Day 2 node architecture 4.12.0 Microsoft Azure (TP) x86_64 arm64 4.13.0 Microsoft Azure Amazon Web Services Bare metal (TP) x86_64 x86_64 x86_64 arm64 arm64 arm64 4.14.0 Microsoft Azure Amazon Web Services Bare metal Google Cloud Platform IBM Power(R) IBM Z(R) x86_64 x86_64 x86_64 x86_64 x86_64 x86_64 arm64 arm64 arm64 arm64 ppc64le s390x Important Technology Preview (TP) features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Main steps Start the procedure for installing OpenShift Container Platform using the API. For details, see Installing with the Assisted Installer API in the Additional Resources section. When you reach the "Registering a new cluster" step of the installation, register the cluster as a multi-architecture cluster: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "<version-number>-multi", 1 "cpu_architecture" : "multi" 2 "control_plane_count": "<number>" 3 "base_dns_domain": "example.com", "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Note 1 Use the multi- option for the OpenShift Container Platform version number; for example, "4.12-multi" . 2 Set the CPU architecture to "multi" . 3 Set the number of control plane nodes to "3", "4", or "5". {sno-full} is not supported for a multi-cluster architecture. When you reach the "Registering a new infrastructure environment" step of the installation, set cpu_architecture to x86_64: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt \ --arg cluster_id USD{CLUSTER_ID} ' { "name": "testcluster-infra-env", "image_type":"full-iso", "cluster_id": USDcluster_id, "cpu_architecture" : "x86_64" "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' When you reach the "Adding hosts" step of the installation, set host_role to master : Note For more information, see Assigning Roles to Hosts in Additional Resources . USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "host_role":"master" } ' | jq Download the discovery image for the x86_64 architecture. Boot the x86_64 architecture hosts using the generated discovery image. Start the installation and wait for the cluster to be fully installed. Repeat the "Registering a new infrastructure environment" step of the installation. This time, set cpu_architecture to one of the following: ppc64le (for IBM Power(R)), s390x (for IBM Z(R)), or arm64 . For example: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "4.12", "cpu_architecture" : "arm64" "control_plane_count": "3" "base_dns_domain": "example.com", "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Repeat the "Adding hosts" step of the installation. This time, set host_role to worker : Note For more details, see Assigning Roles to Hosts in Additional Resources . USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "host_role":"worker" } ' | jq Download the discovery image for the arm64, ppc64le or s390x architecture. Boot the architecture hosts using the generated discovery image. Start the installation and wait for the cluster to be fully installed. Verification View the arm64, ppc64le, or s390x worker nodes in the cluster by running the following command: USD oc get nodes -o wide 11.3. Adding hosts with the web console You can add hosts to clusters that were created using the Assisted Installer . Important Adding hosts to Assisted Installer clusters is only supported for clusters running OpenShift Container Platform version 4.11 and later. When adding a control plane node during Day 2 operations, ensure that the new node shares the same subnet as the Day 1 network. The subnet is specified in the machineNetwork field of the install-config.yaml file. This requirement applies to cluster-managed networks such as bare metal or vSphere, and not to user-managed networks. Procedure Log in to OpenShift Cluster Manager and click the cluster that you want to expand. Click Add hosts and download the discovery ISO for the new host, adding an SSH public key and configuring cluster-wide proxy settings as needed. Optional: Modify ignition files as needed. Boot the target host using the discovery ISO, and wait for the host to be discovered in the console. Select the host role. It can be either a worker or a control plane host. Start the installation. As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host. When prompted, approve the pending CSRs to complete the installation. When the host is successfully installed, it is listed as a host in the cluster web console. Important New hosts will be encrypted using the same method as the original cluster. 11.4. Adding hosts with the API You can add hosts to clusters using the Assisted Installer REST API. Prerequisites Install the Red Hat OpenShift Cluster Manager CLI ( ocm ). Log in to Red Hat OpenShift Cluster Manager as a user with cluster creation privileges. Install jq . Ensure that all the required DNS records exist for the cluster that you want to expand. Important When adding a control plane node during Day 2 operations, ensure that the new node shares the same subnet as the Day 1 network. The subnet is specified in the machineNetwork field of the install-config.yaml file. This requirement applies to cluster-managed networks such as bare metal or vSphere, and not to user-managed networks. Procedure Authenticate against the Assisted Installer REST API and generate an API token for your session. The generated token is valid for 15 minutes only. Set the USDAPI_URL variable by running the following command: USD export API_URL=<api_url> 1 1 Replace <api_url> with the Assisted Installer API URL, for example, https://api.openshift.com Import the cluster by running the following commands: Set the USDCLUSTER_ID variable: Log in to the cluster and run the following command: USD export CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}') Display the USDCLUSTER_ID variable output: USD echo USD{CLUSTER_ID} Set the USDCLUSTER_REQUEST variable that is used to import the cluster: USD export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id "USDCLUSTER_ID" \ '{ "api_vip_dnsname": "<api_vip>", 1 "openshift_cluster_id": "<cluster_id>", 2 "name": "<openshift_cluster_name>" 3 }') 1 Replace <api_vip> with the hostname for the cluster's API server. This can be the DNS domain for the API server or the IP address of the single node which the host can reach. For example, api.compute-1.example.com . 2 Replace <cluster_id> with the USDCLUSTER_ID output from the substep. 3 Replace <openshift_cluster_name> with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation. Import the cluster and set the USDCLUSTER_ID variable. Run the following command: USD CLUSTER_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/clusters/import" -H "Authorization: Bearer USD{API_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' \ -d "USDCLUSTER_REQUEST" | tee /dev/stderr | jq -r '.id') Generate the InfraEnv resource for the cluster and set the USDINFRA_ENV_ID variable by running the following commands: Download the pull secret file from Red Hat OpenShift Cluster Manager at console.redhat.com . Set the USDINFRA_ENV_REQUEST variable: export INFRA_ENV_REQUEST=USD(jq --null-input \ --slurpfile pull_secret <path_to_pull_secret_file> \ 1 --arg ssh_pub_key "USD(cat <path_to_ssh_pub_key>)" \ 2 --arg cluster_id "USDCLUSTER_ID" '{ "name": "<infraenv_name>", 3 "pull_secret": USDpull_secret[0] | tojson, "cluster_id": USDcluster_id, "ssh_authorized_key": USDssh_pub_key, "image_type": "<iso_image_type>" 4 }') 1 Replace <path_to_pull_secret_file> with the path to the local file containing the downloaded pull secret from Red Hat OpenShift Cluster Manager at console.redhat.com . 2 Replace <path_to_ssh_pub_key> with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode. 3 Replace <infraenv_name> with the plain text name for the InfraEnv resource. 4 Replace <iso_image_type> with the ISO image type, either full-iso or minimal-iso . Post the USDINFRA_ENV_REQUEST to the /v2/infra-envs API and set the USDINFRA_ENV_ID variable: USD INFRA_ENV_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/infra-envs" -H "Authorization: Bearer USD{API_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' -d "USDINFRA_ENV_REQUEST" | tee /dev/stderr | jq -r '.id') Get the URL of the discovery ISO for the cluster host by running the following command: USD curl -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq -r '.download_url' Example output https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.12 Download the ISO: USD curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1 1 Replace <iso_url> with the URL for the ISO from the step. Boot the new worker host from the downloaded rhcos-live-minimal.iso . Get the list of hosts in the cluster that are not installed. Keep running the following command until the new host shows up: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq -r '.hosts[] | select(.status != "installed").id' Example output 2294ba03-c264-4f11-ac08-2f1bb2f8c296 Set the USDHOST_ID variable for the new host, for example: USD HOST_ID=<host_id> 1 1 Replace <host_id> with the host ID from the step. Check that the host is ready to install by running the following command: Note Ensure that you copy the entire command including the complete jq expression. USD curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H "Authorization: Bearer USD{API_TOKEN}" | jq ' def host_name(USDhost): if (.suggested_hostname // "") == "" then if (.inventory // "") == "" then "Unknown hostname, please wait" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): ["failure", "pending", "error"] | any(. == USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // "{}" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { "Hosts validations": { "Hosts": [ .hosts[] | select(.status != "installed") | { "id": .id, "name": host_name(.), "status": .status, "notable_validations": notable_validations(.validations_info) } ] }, "Cluster validations info": { "notable_validations": notable_validations(.validations_info) } } ' -r Example output { "Hosts validations": { "Hosts": [ { "id": "97ec378c-3568-460c-bc22-df54534ff08f", "name": "localhost.localdomain", "status": "insufficient", "notable_validations": [ { "id": "ntp-synced", "status": "failure", "message": "Host couldn't synchronize with any NTP server" }, { "id": "api-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "api-int-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "apps-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" } ] } ] }, "Cluster validations info": { "notable_validations": [] } } When the command shows that the host is ready, start the installation using the /v2/infra-envs/{infra_env_id}/hosts/{host_id}/actions/install API by running the following command: USD curl -X POST -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install" -H "Authorization: Bearer USD{API_TOKEN}" As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host. Important You must approve the CSRs to complete the installation. Keep running the following API call to monitor the cluster installation: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq '{ "Cluster day-2 hosts": [ .hosts[] | select(.status != "installed") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }' Example output { "Cluster day-2 hosts": [ { "id": "a1c52dde-3432-4f59-b2ae-0a530c851480", "requested_hostname": "control-plane-1", "status": "added-to-existing-cluster", "status_info": "Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs", "progress": { "current_stage": "Done", "installation_percentage": 100, "stage_started_at": "2022-07-08T10:56:20.476Z", "stage_updated_at": "2022-07-08T10:56:20.476Z" }, "status_updated_at": "2022-07-08T10:56:20.476Z", "updated_at": "2022-07-08T10:57:15.306369Z", "infra_env_id": "b74ec0c3-d5b5-4717-a866-5b6854791bd3", "cluster_id": "8f721322-419d-4eed-aa5b-61b50ea586ae", "created_at": "2022-07-06T22:54:57.161614Z" } ] } Optional: Run the following command to see all the events for the cluster: USD curl -s "USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq -c '.[] | {severity, message, event_time, host_id}' Example output {"severity":"info","message":"Host compute-0: updated status from insufficient to known (Host is ready to be installed)","event_time":"2022-07-08T11:21:46.346Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from known to installing (Installation is in progress)","event_time":"2022-07-08T11:28:28.647Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing to installing-in-progress (Starting installation)","event_time":"2022-07-08T11:28:52.068Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae","event_time":"2022-07-08T11:29:47.802Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)","event_time":"2022-07-08T11:29:48.259Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host: compute-0, reached installation stage Rebooting","event_time":"2022-07-08T11:29:48.261Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} Log in to the cluster and approve the pending CSRs to complete the installation. Verification Check that the new host was successfully added to the cluster with a status of Ready : USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.25.0 compute-1.example.com Ready worker 11m v1.25.0 11.5. Replacing a control plane node in a healthy cluster You can replace a control plane (master) node in a healthy OpenShift Container Platform cluster that has three to five control plane nodes, by adding a new control plane node and removing an existing control plane node. If the cluster is unhealthy, you must peform additional operations before you can manage the control plane nodes. See Replacing a control plane node in an unhealthy cluster for more information. 11.5.1. Adding a new control plane node Add the new control plane node, and verify that it is healthy. In the example below, the new node is node-5 . Prerequisites You are using OpenShift Container Platform 4.11 or later. You have installed a healthy cluster with at least three control plane nodes. You have created a single control plane node to be added to the cluster for Day 2. Procedure Retrieve pending Certificate Signing Requests (CSRs) for the new Day 2 control plane node: USD oc get csr | grep Pending Example output csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:node-5 <none> Pending Approve all pending CSRs for the new node ( node-5 in this example): USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Important You must approve the CSRs to complete the installation. Confirm that the new control plane node is in Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION node-0 Ready master 4h42m v1.24.0+3882f8f node-1 Ready master 4h27m v1.24.0+3882f8f node-2 Ready master 4h43m v1.24.0+3882f8f node-3 Ready worker 4h29m v1.24.0+3882f8f node-4 Ready worker 4h30m v1.24.0+3882f8f node-5 Ready master 105s v1.24.0+3882f8f Note The etcd operator requires a Machine custom resource (CR) that references the new node when the cluster runs with a Machine API. The machine API is automatically activated when the cluster has three or more control plane nodes. Create the BareMetalHost and Machine CRs and link them to the new control plane's Node CR. Create the BareMetalHost CR with a unique .metadata.name value: apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: node-5 namespace: openshift-machine-api spec: automatedCleaningMode: metadata bootMACAddress: 00:00:00:00:00:02 bootMode: UEFI customDeploy: method: install_coreos externallyProvisioned: true online: true userData: name: master-user-data-managed namespace: openshift-machine-api Apply the BareMetalHost CR: USD oc apply -f <filename> 1 1 Replace <filename> with the name of the BareMetalHost CR. Create the Machine CR using the unique .metadata.name value: apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: externally provisioned metal3.io/BareMetalHost: openshift-machine-api/node-5 finalizers: - machine.machine.openshift.io labels: machine.openshift.io/cluster-api-cluster: <cluster_name> 1 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: node-5 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: "" url: "" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed 1 Replace <cluster_name> with the name of the specific cluster, for example, test-day2-1-6qv96 . To get the cluster name, run the following command: USD oc get infrastructure cluster -o=jsonpath='{.status.infrastructureName}{"\n"}' Apply the Machine CR: USD oc apply -f <filename> 1 1 Replace <filename> with the name of the Machine CR. Link BareMetalHost , Machine , and Node by running the link-machine-and-node.sh script: Copy the link-machine-and-node.sh script below to a local machine: #!/bin/bash # Credit goes to # https://bugzilla.redhat.com/show_bug.cgi?id=1801238. # This script will link Machine object # and Node object. This is needed # in order to have IP address of # the Node present in the status of the Machine. set -e machine="USD1" node="USD2" if [ -z "USDmachine" ] || [ -z "USDnode" ]; then echo "Usage: USD0 MACHINE NODE" exit 1 fi node_name=USD(echo "USD{node}" | cut -f2 -d':') oc proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH="http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts" function print_nics() { local ips local eob declare -a ips readarray -t ips < <(echo "USD{1}" \ | jq '.[] | select(. | .type == "InternalIP") | .address' \ | sed 's/"//g') eob=',' for (( i=0; i<USD{#ips[@]}; i++ )); do if [ USD((i+1)) -eq USD{#ips[@]} ]; then eob="" fi cat <<- EOF { "ip": "USD{ips[USDi]}", "mac": "00:00:00:00:00:00", "model": "unknown", "speedGbps": 10, "vlanId": 0, "pxe": true, "name": "eth1" }USD{eob} EOF done } function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name="USD1" url="USD2" timeout="USD3" shift 3 curl_opts="USD@" echo -n "Waiting for USDname to respond" start_time=USD(date +%s) until curl -g -X GET "USDurl" "USD{curl_opts[@]}" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n "." curr_time=USD(date +%s) time_diff=USD((curr_time - start_time)) if [[ USDtime_diff -gt USDtimeout ]]; then printf '\nTimed out waiting for %s' "USD{name}" return 1 fi sleep 5 done echo " Success!" return 0 } wait_for_json oc_proxy "USD{HOST_PROXY_API_PATH}" 10 -H "Accept: application/json" -H "Content-Type: application/json" addresses=USD(oc get node -n openshift-machine-api "USD{node_name}" -o json | jq -c '.status.addresses') machine_data=USD(oc get machines.machine.openshift.io -n openshift-machine-api -o json "USD{machine}") host=USD(echo "USDmachine_data" | jq '.metadata.annotations["metal3.io/BareMetalHost"]' | cut -f2 -d/ | sed 's/"//g') if [ -z "USDhost" ]; then echo "Machine USDmachine is not linked to a host yet." 1>&2 exit 1 fi # The address structure on the host doesn't match the node, so extract # the values we want into separate variables so we can build the patch # we need. hostname=USD(echo "USD{addresses}" | jq '.[] | select(. | .type == "Hostname") | .address' | sed 's/"//g') set +e read -r -d '' host_patch << EOF { "status": { "hardware": { "hostname": "USD{hostname}", "nics": [ USD(print_nics "USD{addresses}") ], "systemVendor": { "manufacturer": "Red Hat", "productName": "product name", "serialNumber": "" }, "firmware": { "bios": { "date": "04/01/2014", "vendor": "SeaBIOS", "version": "1.11.0-2.el7" } }, "ramMebibytes": 0, "storage": [], "cpu": { "arch": "x86_64", "model": "Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz", "clockMegahertz": 2199.998, "count": 4, "flags": [] } } } } EOF set -e echo "PATCHING HOST" echo "USD{host_patch}" | jq . curl -s \ -X PATCH \ "USD{HOST_PROXY_API_PATH}/USD{host}/status" \ -H "Content-type: application/merge-patch+json" \ -d "USD{host_patch}" oc get baremetalhost -n openshift-machine-api -o yaml "USD{host}" Make the script executable: USD chmod +x link-machine-and-node.sh Run the script: USD bash link-machine-and-node.sh node-5 node-5 Note The first node-5 instance represents the machine, and the second represents the node. Confirm members of etcd by executing into one of the pre-existing control plane nodes: Open a remote shell session to the control plane node: USD oc rsh -n openshift-etcd etcd-node-0 List etcd members: # etcdctl member list -w table Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |76ae1d00| started |node-0 |192.168.111.24|192.168.111.24| false | |2c18942f| started |node-1 |192.168.111.26|192.168.111.26| false | |61e2a860| started |node-2 |192.168.111.25|192.168.111.25| false | |ead5f280| started |node-5 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ Monitor the etcd operator configuration process until completion: USD oc get clusteroperator etcd Example output (upon completion) NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE etcd 4.11.5 True False False 5h54m Confirm etcd health by running the following commands: Open a remote shell session to the control plane node: USD oc rsh -n openshift-etcd etcd-node-0 Check endpoint health: # etcdctl endpoint health Example output 192.168.111.24 is healthy: committed proposal: took = 10.383651ms 192.168.111.26 is healthy: committed proposal: took = 11.297561ms 192.168.111.25 is healthy: committed proposal: took = 13.892416ms 192.168.111.28 is healthy: committed proposal: took = 11.870755ms Verify that all nodes are ready: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION node-0 Ready master 6h20m v1.24.0+3882f8f node-1 Ready master 6h20m v1.24.0+3882f8f node-2 Ready master 6h4m v1.24.0+3882f8f node-3 Ready worker 6h7m v1.24.0+3882f8f node-4 Ready worker 6h7m v1.24.0+3882f8f node-5 Ready master 99m v1.24.0+3882f8f Verify that the cluster Operators are all available: USD oc get ClusterOperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MSG authentication 4.11.5 True False False 5h57m baremetal 4.11.5 True False False 6h19m cloud-controller-manager 4.11.5 True False False 6h20m cloud-credential 4.11.5 True False False 6h23m cluster-autoscaler 4.11.5 True False False 6h18m config-operator 4.11.5 True False False 6h19m console 4.11.5 True False False 6h4m csi-snapshot-controller 4.11.5 True False False 6h19m dns 4.11.5 True False False 6h18m etcd 4.11.5 True False False 6h17m image-registry 4.11.5 True False False 6h7m ingress 4.11.5 True False False 6h6m insights 4.11.5 True False False 6h12m kube-apiserver 4.11.5 True False False 6h16m kube-controller-manager 4.11.5 True False False 6h16m kube-scheduler 4.11.5 True False False 6h16m kube-storage-version-migrator 4.11.5 True False False 6h19m machine-api 4.11.5 True False False 6h15m machine-approver 4.11.5 True False False 6h19m machine-config 4.11.5 True False False 6h18m marketplace 4.11.5 True False False 6h18m monitoring 4.11.5 True False False 6h4m network 4.11.5 True False False 6h20m node-tuning 4.11.5 True False False 6h18m openshift-apiserver 4.11.5 True False False 6h8m openshift-controller-manager 4.11.5 True False False 6h7m openshift-samples 4.11.5 True False False 6h12m operator-lifecycle-manager 4.11.5 True False False 6h18m operator-lifecycle-manager-catalog 4.11.5 True False False 6h19m operator-lifecycle-manager-pkgsvr 4.11.5 True False False 6h12m service-ca 4.11.5 True False False 6h19m storage 4.11.5 True False False 6h19m Verify that the cluster version is correct: USD oc get ClusterVersion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 5h57m Cluster version is 4.11.5 11.5.2. Removing the existing control plane node Remove the control plane node that you are replacing. This is node-0 in the example below. Prerequisites You have added a new healthy control plane node. Procedure Delete the BareMetalHost CR of the pre-existing control plane node: USD oc delete bmh -n openshift-machine-api node-0 Confirm that the machine is unhealthy: USD oc get machine -A Example output NAMESPACE NAME PHASE AGE openshift-machine-api node-0 Failed 20h openshift-machine-api node-1 Running 20h openshift-machine-api node-2 Running 20h openshift-machine-api node-3 Running 19h openshift-machine-api node-4 Running 19h openshift-machine-api node-5 Running 14h Delete the Machine CR: USD oc delete machine -n openshift-machine-api node-0 machine.machine.openshift.io "node-0" deleted Confirm removal of the Node CR: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 19h v1.24.0+3882f8f node-3 Ready worker 19h v1.24.0+3882f8f node-4 Ready worker 19h v1.24.0+3882f8f node-5 Ready master 15h v1.24.0+3882f8f Check etcd-operator logs to confirm status of the etcd cluster: USD oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf Example output E0927 07:53:10.597523 1 base_controller.go:272] ClusterMemberRemovalController reconciliation failed: cannot remove member: 192.168.111.23 because it is reported as healthy but it doesn't have a machine nor a node resource Remove the physical machine to allow the etcd operator to reconcile the cluster members: Open a remote shell session to the control plane node: USD oc rsh -n openshift-etcd etcd-node-1 Monitor the progress of etcd operator reconciliation by checking members and endpoint health: # etcdctl member list -w table; etcdctl endpoint health Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started | node-1 |192.168.111.26|192.168.111.26| false | |61e2a860| started | node-2 |192.168.111.25|192.168.111.25| false | |ead4f280| started | node-5 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ 192.168.111.26 is healthy: committed proposal: took = 10.458132ms 192.168.111.25 is healthy: committed proposal: took = 11.047349ms 192.168.111.28 is healthy: committed proposal: took = 11.414402ms 11.6. Replacing a control plane node in an unhealthy cluster You can replace an unhealthy control plane (master) node in an OpenShift Container Platform cluster that has three to five control plane nodes, by removing the unhealthy control plane node and adding a new one. For details on replacing a control plane node in a healthy cluster, see Replacing a control plane node in a healthy cluster . 11.6.1. Removing an unhealthy control plane node Remove the unhealthy control plane node from the cluster. This is node-0 in the example below. Prerequisites You have installed a cluster with at least three control plane nodes. At least one of the control plane nodes is not ready. Procedure Check the node status to confirm that a control plane node is not ready: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION node-0 NotReady master 20h v1.24.0+3882f8f node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 20h v1.24.0+3882f8f node-3 Ready worker 20h v1.24.0+3882f8f node-4 Ready worker 20h v1.24.0+3882f8f Confirm in the etcd-operator logs that the cluster is unhealthy: USD oc logs -n openshift-etcd-operator etcd-operator deployment/etcd-operator Example output E0927 08:24:23.983733 1 base_controller.go:272] DefragController reconciliation failed: cluster is unhealthy: 2 of 3 members are available, node-0 is unhealthy Confirm the etcd members by running the following commands: Open a remote shell session to the control plane node: USD oc rsh -n openshift-etcd node-1 List the etcdctl members: # etcdctl member list -w table Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |61e2a860| started | node-0 |192.168.111.25|192.168.111.25| false | |2c18942f| started | node-1 |192.168.111.26|192.168.111.26| false | |ead4f280| started | node-2 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ Confirm that etcdctl endpoint health reports an unhealthy member of the cluster: # etcdctl endpoint health Example output {"level":"warn","ts":"2022-09-27T08:25:35.953Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000680380/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""} 192.168.111.28 is healthy: committed proposal: took = 12.465641ms 192.168.111.26 is healthy: committed proposal: took = 12.297059ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster Remove the unhealthy control plane by deleting the Machine custom resource (CR): USD oc delete machine -n openshift-machine-api node-0 Note The Machine and Node CRs might not be deleted because they are protected by finalizers. If this occurs, you must delete the Machine CR manually by removing all finalizers. Verify in the etcd-operator logs whether the unhealthy machine has been removed: USD oc logs -n openshift-etcd-operator etcd-operator deployment/ettcd-operator Example output I0927 08:58:41.249222 1 machinedeletionhooks.go:135] skip removing the deletion hook from machine node-0 since its member is still present with any of: [{InternalIP } {InternalIP 192.168.111.25}] If you see that removal has been skipped, as in the above log example, manually remove the unhealthy etcdctl member: Open a remote shell session to the control plane node: USD oc rsh -n openshift-etcd node-1 List the etcdctl members: # etcdctl member list -w table Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |61e2a860| started | node-0 |192.168.111.25|192.168.111.25| false | |2c18942f| started | node-1 |192.168.111.26|192.168.111.26| false | |ead4f280| started | node-2 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ Confirm that etcdctl endpoint health reports an unhealthy member of the cluster: # etcdctl endpoint health Example output {"level":"warn","ts":"2022-09-27T10:31:07.227Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0000d6e00/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""} 192.168.111.28 is healthy: committed proposal: took = 13.038278ms 192.168.111.26 is healthy: committed proposal: took = 12.950355ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster Remove the unhealthy etcdctl member from the cluster: # etcdctl member remove 61e2a86084aafa62 Example output Member 61e2a86084aafa62 removed from cluster 6881c977b97990d7 Verify that the unhealthy etcdctl member was removed by running the following command: # etcdctl member list -w table Example output +----------+---------+--------+--------------+--------------+-------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +----------+---------+--------+--------------+--------------+-------+ | 2c18942f | started | node-1 |192.168.111.26|192.168.111.26| false | | ead4f280 | started | node-2 |192.168.111.28|192.168.111.28| false | +----------+---------+--------+--------------+--------------+-------+ 11.6.2. Adding a new control plane node Add a new control plane node to replace the unhealthy node that you removed. In the example below, the new node is node-5 . Prerequisites You have installed a control plane node for Day 2. For more information, see Adding hosts with the web console or Adding hosts with the API . Procedure Retrieve pending Certificate Signing Requests (CSRs) for the new Day 2 control plane node: USD oc get csr | grep Pending Example output csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:node-5 <none> Pending Approve all pending CSRs for the new node ( node-5 in this example): USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note You must approve the CSRs to complete the installation. Confirm that the control plane node is in Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 20h v1.24.0+3882f8f node-3 Ready worker 20h v1.24.0+3882f8f node-4 Ready worker 20h v1.24.0+3882f8f node-5 Ready master 2m52s v1.24.0+3882f8f The etcd operator requires a Machine CR referencing the new node when the cluster runs with a Machine API. The machine API is automatically activated when the cluster has three control plane nodes. Create the BareMetalHost and Machine CRs and link them to the new control plane's Node CR. Important Boot-it-yourself will not create BareMetalHost and Machine CRs, so you must create them. Failure to create the BareMetalHost and Machine CRs will generate errors in the etcd operator. Create the BareMetalHost CR with a unique .metadata.name value: apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: node-5 namespace: openshift-machine-api spec: automatedCleaningMode: metadata bootMACAddress: 00:00:00:00:00:02 bootMode: UEFI customDeploy: method: install_coreos externallyProvisioned: true online: true userData: name: master-user-data-managed namespace: openshift-machine-api Apply the BareMetalHost CR: USD oc apply -f <filename> 1 1 Replace <filename> with the name of the BareMetalHost CR. Create the Machine CR using the unique .metadata.name value: apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: externally provisioned metal3.io/BareMetalHost: openshift-machine-api/node-5 finalizers: - machine.machine.openshift.io labels: machine.openshift.io/cluster-api-cluster: test-day2-1-6qv96 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: node-5 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: "" url: "" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed Apply the Machine CR: USD oc apply -f <filename> 1 1 Replace <filename> with the name of the Machine CR. Link BareMetalHost , Machine , and Node by running the link-machine-and-node.sh script: Copy the link-machine-and-node.sh script below to a local machine: #!/bin/bash # Credit goes to # https://bugzilla.redhat.com/show_bug.cgi?id=1801238. # This script will link Machine object # and Node object. This is needed # in order to have IP address of # the Node present in the status of the Machine. set -e machine="USD1" node="USD2" if [ -z "USDmachine" ] || [ -z "USDnode" ]; then echo "Usage: USD0 MACHINE NODE" exit 1 fi node_name=USD(echo "USD{node}" | cut -f2 -d':') oc proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH="http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts" function print_nics() { local ips local eob declare -a ips readarray -t ips < <(echo "USD{1}" \ | jq '.[] | select(. | .type == "InternalIP") | .address' \ | sed 's/"//g') eob=',' for (( i=0; i<USD{#ips[@]}; i++ )); do if [ USD((i+1)) -eq USD{#ips[@]} ]; then eob="" fi cat <<- EOF { "ip": "USD{ips[USDi]}", "mac": "00:00:00:00:00:00", "model": "unknown", "speedGbps": 10, "vlanId": 0, "pxe": true, "name": "eth1" }USD{eob} EOF done } function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name="USD1" url="USD2" timeout="USD3" shift 3 curl_opts="USD@" echo -n "Waiting for USDname to respond" start_time=USD(date +%s) until curl -g -X GET "USDurl" "USD{curl_opts[@]}" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n "." curr_time=USD(date +%s) time_diff=USD((curr_time - start_time)) if [[ USDtime_diff -gt USDtimeout ]]; then printf '\nTimed out waiting for %s' "USD{name}" return 1 fi sleep 5 done echo " Success!" return 0 } wait_for_json oc_proxy "USD{HOST_PROXY_API_PATH}" 10 -H "Accept: application/json" -H "Content-Type: application/json" addresses=USD(oc get node -n openshift-machine-api "USD{node_name}" -o json | jq -c '.status.addresses') machine_data=USD(oc get machines.machine.openshift.io -n openshift-machine-api -o json "USD{machine}") host=USD(echo "USDmachine_data" | jq '.metadata.annotations["metal3.io/BareMetalHost"]' | cut -f2 -d/ | sed 's/"//g') if [ -z "USDhost" ]; then echo "Machine USDmachine is not linked to a host yet." 1>&2 exit 1 fi # The address structure on the host doesn't match the node, so extract # the values we want into separate variables so we can build the patch # we need. hostname=USD(echo "USD{addresses}" | jq '.[] | select(. | .type == "Hostname") | .address' | sed 's/"//g') set +e read -r -d '' host_patch << EOF { "status": { "hardware": { "hostname": "USD{hostname}", "nics": [ USD(print_nics "USD{addresses}") ], "systemVendor": { "manufacturer": "Red Hat", "productName": "product name", "serialNumber": "" }, "firmware": { "bios": { "date": "04/01/2014", "vendor": "SeaBIOS", "version": "1.11.0-2.el7" } }, "ramMebibytes": 0, "storage": [], "cpu": { "arch": "x86_64", "model": "Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz", "clockMegahertz": 2199.998, "count": 4, "flags": [] } } } } EOF set -e echo "PATCHING HOST" echo "USD{host_patch}" | jq . curl -s \ -X PATCH \ "USD{HOST_PROXY_API_PATH}/USD{host}/status" \ -H "Content-type: application/merge-patch+json" \ -d "USD{host_patch}" oc get baremetalhost -n openshift-machine-api -o yaml "USD{host}" Make the script executable: USD chmod +x link-machine-and-node.sh Run the script: USD bash link-machine-and-node.sh node-5 node-5 Note The first node-5 instance represents the machine, and the second represents the node. Confirm members of etcd by running the following commands: Open a remote shell session to the control plane node: USD oc rsh -n openshift-etcd node-1 List the etcdctl members: # etcdctl member list -w table Example output +---------+-------+--------+--------------+--------------+-------+ | ID | STATUS| NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +---------+-------+--------+--------------+--------------+-------+ | 2c18942f|started| node-1 |192.168.111.26|192.168.111.26| false | | ead4f280|started| node-2 |192.168.111.28|192.168.111.28| false | | 79153c5a|started| node-5 |192.168.111.29|192.168.111.29| false | +---------+-------+--------+--------------+--------------+-------+ Monitor the etcd operator configuration process until completion: USD oc get clusteroperator etcd Example output (upon completion) NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE etcd 4.11.5 True False False 22h Confirm etcdctl health by running the following commands: Open a remote shell session to the control plane node: USD oc rsh -n openshift-etcd node-1 Check endpoint health: # etcdctl endpoint health Example output 192.168.111.26 is healthy: committed proposal: took = 9.105375ms 192.168.111.28 is healthy: committed proposal: took = 9.15205ms 192.168.111.29 is healthy: committed proposal: took = 10.277577ms Confirm the health of the nodes: USD oc get Nodes Example output NAME STATUS ROLES AGE VERSION node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 20h v1.24.0+3882f8f node-3 Ready worker 20h v1.24.0+3882f8f node-4 Ready worker 20h v1.24.0+3882f8f node-5 Ready master 40m v1.24.0+3882f8f Verify that the cluster Operators are all available: USD oc get ClusterOperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.5 True False False 150m baremetal 4.11.5 True False False 22h cloud-controller-manager 4.11.5 True False False 22h cloud-credential 4.11.5 True False False 22h cluster-autoscaler 4.11.5 True False False 22h config-operator 4.11.5 True False False 22h console 4.11.5 True False False 145m csi-snapshot-controller 4.11.5 True False False 22h dns 4.11.5 True False False 22h etcd 4.11.5 True False False 22h image-registry 4.11.5 True False False 22h ingress 4.11.5 True False False 22h insights 4.11.5 True False False 22h kube-apiserver 4.11.5 True False False 22h kube-controller-manager 4.11.5 True False False 22h kube-scheduler 4.11.5 True False False 22h kube-storage-version-migrator 4.11.5 True False False 148m machine-api 4.11.5 True False False 22h machine-approver 4.11.5 True False False 22h machine-config 4.11.5 True False False 110m marketplace 4.11.5 True False False 22h monitoring 4.11.5 True False False 22h network 4.11.5 True False False 22h node-tuning 4.11.5 True False False 22h openshift-apiserver 4.11.5 True False False 163m openshift-controller-manager 4.11.5 True False False 22h openshift-samples 4.11.5 True False False 22h operator-lifecycle-manager 4.11.5 True False False 22h operator-lifecycle-manager-catalog 4.11.5 True False False 22h operator-lifecycle-manager-pkgsvr 4.11.5 True False False 22h service-ca 4.11.5 True False False 22h storage 4.11.5 True False False 22h Verify that the cluster version is correct: USD oc get ClusterVersion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 22h Cluster version is 4.11.5 11.7. Additional resources Authenticating with the REST API | [
"oc adm release info -o json | jq .metadata.metadata",
"{ \"release.openshift.io/architecture\": \"multi\" }",
"curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"<version-number>-multi\", 1 \"cpu_architecture\" : \"multi\" 2 \"control_plane_count\": \"<number>\" 3 \"base_dns_domain\": \"example.com\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt --arg cluster_id USD{CLUSTER_ID} ' { \"name\": \"testcluster-infra-env\", \"image_type\":\"full-iso\", \"cluster_id\": USDcluster_id, \"cpu_architecture\" : \"x86_64\" \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"master\" } ' | jq",
"curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.12\", \"cpu_architecture\" : \"arm64\" \"control_plane_count\": \"3\" \"base_dns_domain\": \"example.com\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"worker\" } ' | jq",
"oc get nodes -o wide",
"export API_URL=<api_url> 1",
"export CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')",
"echo USD{CLUSTER_ID}",
"export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id \"USDCLUSTER_ID\" '{ \"api_vip_dnsname\": \"<api_vip>\", 1 \"openshift_cluster_id\": \"<cluster_id>\", 2 \"name\": \"<openshift_cluster_name>\" 3 }')",
"CLUSTER_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/clusters/import\" -H \"Authorization: Bearer USD{API_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDCLUSTER_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"export INFRA_ENV_REQUEST=USD(jq --null-input --slurpfile pull_secret <path_to_pull_secret_file> \\ 1 --arg ssh_pub_key \"USD(cat <path_to_ssh_pub_key>)\" \\ 2 --arg cluster_id \"USDCLUSTER_ID\" '{ \"name\": \"<infraenv_name>\", 3 \"pull_secret\": USDpull_secret[0] | tojson, \"cluster_id\": USDcluster_id, \"ssh_authorized_key\": USDssh_pub_key, \"image_type\": \"<iso_image_type>\" 4 }')",
"INFRA_ENV_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/infra-envs\" -H \"Authorization: Bearer USD{API_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDINFRA_ENV_REQUEST\" | tee /dev/stderr | jq -r '.id')",
"curl -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -r '.download_url'",
"https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.12",
"curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -r '.hosts[] | select(.status != \"installed\").id'",
"2294ba03-c264-4f11-ac08-2f1bb2f8c296",
"HOST_ID=<host_id> 1",
"curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H \"Authorization: Bearer USD{API_TOKEN}\" | jq ' def host_name(USDhost): if (.suggested_hostname // \"\") == \"\" then if (.inventory // \"\") == \"\" then \"Unknown hostname, please wait\" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): [\"failure\", \"pending\", \"error\"] | any(. == USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // \"{}\" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { \"Hosts validations\": { \"Hosts\": [ .hosts[] | select(.status != \"installed\") | { \"id\": .id, \"name\": host_name(.), \"status\": .status, \"notable_validations\": notable_validations(.validations_info) } ] }, \"Cluster validations info\": { \"notable_validations\": notable_validations(.validations_info) } } ' -r",
"{ \"Hosts validations\": { \"Hosts\": [ { \"id\": \"97ec378c-3568-460c-bc22-df54534ff08f\", \"name\": \"localhost.localdomain\", \"status\": \"insufficient\", \"notable_validations\": [ { \"id\": \"ntp-synced\", \"status\": \"failure\", \"message\": \"Host couldn't synchronize with any NTP server\" }, { \"id\": \"api-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"api-int-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"apps-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" } ] } ] }, \"Cluster validations info\": { \"notable_validations\": [] } }",
"curl -X POST -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install\" -H \"Authorization: Bearer USD{API_TOKEN}\"",
"curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq '{ \"Cluster day-2 hosts\": [ .hosts[] | select(.status != \"installed\") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }'",
"{ \"Cluster day-2 hosts\": [ { \"id\": \"a1c52dde-3432-4f59-b2ae-0a530c851480\", \"requested_hostname\": \"control-plane-1\", \"status\": \"added-to-existing-cluster\", \"status_info\": \"Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs\", \"progress\": { \"current_stage\": \"Done\", \"installation_percentage\": 100, \"stage_started_at\": \"2022-07-08T10:56:20.476Z\", \"stage_updated_at\": \"2022-07-08T10:56:20.476Z\" }, \"status_updated_at\": \"2022-07-08T10:56:20.476Z\", \"updated_at\": \"2022-07-08T10:57:15.306369Z\", \"infra_env_id\": \"b74ec0c3-d5b5-4717-a866-5b6854791bd3\", \"cluster_id\": \"8f721322-419d-4eed-aa5b-61b50ea586ae\", \"created_at\": \"2022-07-06T22:54:57.161614Z\" } ] }",
"curl -s \"USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -c '.[] | {severity, message, event_time, host_id}'",
"{\"severity\":\"info\",\"message\":\"Host compute-0: updated status from insufficient to known (Host is ready to be installed)\",\"event_time\":\"2022-07-08T11:21:46.346Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from known to installing (Installation is in progress)\",\"event_time\":\"2022-07-08T11:28:28.647Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing to installing-in-progress (Starting installation)\",\"event_time\":\"2022-07-08T11:28:52.068Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae\",\"event_time\":\"2022-07-08T11:29:47.802Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)\",\"event_time\":\"2022-07-08T11:29:48.259Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host: compute-0, reached installation stage Rebooting\",\"event_time\":\"2022-07-08T11:29:48.261Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"}",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.25.0 compute-1.example.com Ready worker 11m v1.25.0",
"oc get csr | grep Pending",
"csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:node-5 <none> Pending",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION node-0 Ready master 4h42m v1.24.0+3882f8f node-1 Ready master 4h27m v1.24.0+3882f8f node-2 Ready master 4h43m v1.24.0+3882f8f node-3 Ready worker 4h29m v1.24.0+3882f8f node-4 Ready worker 4h30m v1.24.0+3882f8f node-5 Ready master 105s v1.24.0+3882f8f",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: node-5 namespace: openshift-machine-api spec: automatedCleaningMode: metadata bootMACAddress: 00:00:00:00:00:02 bootMode: UEFI customDeploy: method: install_coreos externallyProvisioned: true online: true userData: name: master-user-data-managed namespace: openshift-machine-api",
"oc apply -f <filename> 1",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: externally provisioned metal3.io/BareMetalHost: openshift-machine-api/node-5 finalizers: - machine.machine.openshift.io labels: machine.openshift.io/cluster-api-cluster: <cluster_name> 1 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: node-5 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: \"\" url: \"\" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed",
"oc get infrastructure cluster -o=jsonpath='{.status.infrastructureName}{\"\\n\"}'",
"oc apply -f <filename> 1",
"#!/bin/bash Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. This script will link Machine object and Node object. This is needed in order to have IP address of the Node present in the status of the Machine. set -e machine=\"USD1\" node=\"USD2\" if [ -z \"USDmachine\" ] || [ -z \"USDnode\" ]; then echo \"Usage: USD0 MACHINE NODE\" exit 1 fi node_name=USD(echo \"USD{node}\" | cut -f2 -d':') proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH=\"http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts\" function print_nics() { local ips local eob declare -a ips readarray -t ips < <(echo \"USD{1}\" | jq '.[] | select(. | .type == \"InternalIP\") | .address' | sed 's/\"//g') eob=',' for (( i=0; i<USD{#ips[@]}; i++ )); do if [ USD((i+1)) -eq USD{#ips[@]} ]; then eob=\"\" fi cat <<- EOF { \"ip\": \"USD{ips[USDi]}\", \"mac\": \"00:00:00:00:00:00\", \"model\": \"unknown\", \"speedGbps\": 10, \"vlanId\": 0, \"pxe\": true, \"name\": \"eth1\" }USD{eob} EOF done } function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name=\"USD1\" url=\"USD2\" timeout=\"USD3\" shift 3 curl_opts=\"USD@\" echo -n \"Waiting for USDname to respond\" start_time=USD(date +%s) until curl -g -X GET \"USDurl\" \"USD{curl_opts[@]}\" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n \".\" curr_time=USD(date +%s) time_diff=USD((curr_time - start_time)) if [[ USDtime_diff -gt USDtimeout ]]; then printf '\\nTimed out waiting for %s' \"USD{name}\" return 1 fi sleep 5 done echo \" Success!\" return 0 } wait_for_json oc_proxy \"USD{HOST_PROXY_API_PATH}\" 10 -H \"Accept: application/json\" -H \"Content-Type: application/json\" addresses=USD(oc get node -n openshift-machine-api \"USD{node_name}\" -o json | jq -c '.status.addresses') machine_data=USD(oc get machines.machine.openshift.io -n openshift-machine-api -o json \"USD{machine}\") host=USD(echo \"USDmachine_data\" | jq '.metadata.annotations[\"metal3.io/BareMetalHost\"]' | cut -f2 -d/ | sed 's/\"//g') if [ -z \"USDhost\" ]; then echo \"Machine USDmachine is not linked to a host yet.\" 1>&2 exit 1 fi The address structure on the host doesn't match the node, so extract the values we want into separate variables so we can build the patch we need. hostname=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"Hostname\") | .address' | sed 's/\"//g') set +e read -r -d '' host_patch << EOF { \"status\": { \"hardware\": { \"hostname\": \"USD{hostname}\", \"nics\": [ USD(print_nics \"USD{addresses}\") ], \"systemVendor\": { \"manufacturer\": \"Red Hat\", \"productName\": \"product name\", \"serialNumber\": \"\" }, \"firmware\": { \"bios\": { \"date\": \"04/01/2014\", \"vendor\": \"SeaBIOS\", \"version\": \"1.11.0-2.el7\" } }, \"ramMebibytes\": 0, \"storage\": [], \"cpu\": { \"arch\": \"x86_64\", \"model\": \"Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz\", \"clockMegahertz\": 2199.998, \"count\": 4, \"flags\": [] } } } } EOF set -e echo \"PATCHING HOST\" echo \"USD{host_patch}\" | jq . curl -s -X PATCH \"USD{HOST_PROXY_API_PATH}/USD{host}/status\" -H \"Content-type: application/merge-patch+json\" -d \"USD{host_patch}\" get baremetalhost -n openshift-machine-api -o yaml \"USD{host}\"",
"chmod +x link-machine-and-node.sh",
"bash link-machine-and-node.sh node-5 node-5",
"oc rsh -n openshift-etcd etcd-node-0",
"etcdctl member list -w table",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |76ae1d00| started |node-0 |192.168.111.24|192.168.111.24| false | |2c18942f| started |node-1 |192.168.111.26|192.168.111.26| false | |61e2a860| started |node-2 |192.168.111.25|192.168.111.25| false | |ead5f280| started |node-5 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+",
"oc get clusteroperator etcd",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE etcd 4.11.5 True False False 5h54m",
"oc rsh -n openshift-etcd etcd-node-0",
"etcdctl endpoint health",
"192.168.111.24 is healthy: committed proposal: took = 10.383651ms 192.168.111.26 is healthy: committed proposal: took = 11.297561ms 192.168.111.25 is healthy: committed proposal: took = 13.892416ms 192.168.111.28 is healthy: committed proposal: took = 11.870755ms",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION node-0 Ready master 6h20m v1.24.0+3882f8f node-1 Ready master 6h20m v1.24.0+3882f8f node-2 Ready master 6h4m v1.24.0+3882f8f node-3 Ready worker 6h7m v1.24.0+3882f8f node-4 Ready worker 6h7m v1.24.0+3882f8f node-5 Ready master 99m v1.24.0+3882f8f",
"oc get ClusterOperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MSG authentication 4.11.5 True False False 5h57m baremetal 4.11.5 True False False 6h19m cloud-controller-manager 4.11.5 True False False 6h20m cloud-credential 4.11.5 True False False 6h23m cluster-autoscaler 4.11.5 True False False 6h18m config-operator 4.11.5 True False False 6h19m console 4.11.5 True False False 6h4m csi-snapshot-controller 4.11.5 True False False 6h19m dns 4.11.5 True False False 6h18m etcd 4.11.5 True False False 6h17m image-registry 4.11.5 True False False 6h7m ingress 4.11.5 True False False 6h6m insights 4.11.5 True False False 6h12m kube-apiserver 4.11.5 True False False 6h16m kube-controller-manager 4.11.5 True False False 6h16m kube-scheduler 4.11.5 True False False 6h16m kube-storage-version-migrator 4.11.5 True False False 6h19m machine-api 4.11.5 True False False 6h15m machine-approver 4.11.5 True False False 6h19m machine-config 4.11.5 True False False 6h18m marketplace 4.11.5 True False False 6h18m monitoring 4.11.5 True False False 6h4m network 4.11.5 True False False 6h20m node-tuning 4.11.5 True False False 6h18m openshift-apiserver 4.11.5 True False False 6h8m openshift-controller-manager 4.11.5 True False False 6h7m openshift-samples 4.11.5 True False False 6h12m operator-lifecycle-manager 4.11.5 True False False 6h18m operator-lifecycle-manager-catalog 4.11.5 True False False 6h19m operator-lifecycle-manager-pkgsvr 4.11.5 True False False 6h12m service-ca 4.11.5 True False False 6h19m storage 4.11.5 True False False 6h19m",
"oc get ClusterVersion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 5h57m Cluster version is 4.11.5",
"oc delete bmh -n openshift-machine-api node-0",
"oc get machine -A",
"NAMESPACE NAME PHASE AGE openshift-machine-api node-0 Failed 20h openshift-machine-api node-1 Running 20h openshift-machine-api node-2 Running 20h openshift-machine-api node-3 Running 19h openshift-machine-api node-4 Running 19h openshift-machine-api node-5 Running 14h",
"oc delete machine -n openshift-machine-api node-0 machine.machine.openshift.io \"node-0\" deleted",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 19h v1.24.0+3882f8f node-3 Ready worker 19h v1.24.0+3882f8f node-4 Ready worker 19h v1.24.0+3882f8f node-5 Ready master 15h v1.24.0+3882f8f",
"oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf",
"E0927 07:53:10.597523 1 base_controller.go:272] ClusterMemberRemovalController reconciliation failed: cannot remove member: 192.168.111.23 because it is reported as healthy but it doesn't have a machine nor a node resource",
"oc rsh -n openshift-etcd etcd-node-1",
"etcdctl member list -w table; etcdctl endpoint health",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started | node-1 |192.168.111.26|192.168.111.26| false | |61e2a860| started | node-2 |192.168.111.25|192.168.111.25| false | |ead4f280| started | node-5 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ 192.168.111.26 is healthy: committed proposal: took = 10.458132ms 192.168.111.25 is healthy: committed proposal: took = 11.047349ms 192.168.111.28 is healthy: committed proposal: took = 11.414402ms",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION node-0 NotReady master 20h v1.24.0+3882f8f node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 20h v1.24.0+3882f8f node-3 Ready worker 20h v1.24.0+3882f8f node-4 Ready worker 20h v1.24.0+3882f8f",
"oc logs -n openshift-etcd-operator etcd-operator deployment/etcd-operator",
"E0927 08:24:23.983733 1 base_controller.go:272] DefragController reconciliation failed: cluster is unhealthy: 2 of 3 members are available, node-0 is unhealthy",
"oc rsh -n openshift-etcd node-1",
"etcdctl member list -w table",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |61e2a860| started | node-0 |192.168.111.25|192.168.111.25| false | |2c18942f| started | node-1 |192.168.111.26|192.168.111.26| false | |ead4f280| started | node-2 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+",
"etcdctl endpoint health",
"{\"level\":\"warn\",\"ts\":\"2022-09-27T08:25:35.953Z\",\"logger\":\"client\",\"caller\":\"v3/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc000680380/192.168.111.25\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\\\"\"} 192.168.111.28 is healthy: committed proposal: took = 12.465641ms 192.168.111.26 is healthy: committed proposal: took = 12.297059ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster",
"oc delete machine -n openshift-machine-api node-0",
"oc logs -n openshift-etcd-operator etcd-operator deployment/ettcd-operator",
"I0927 08:58:41.249222 1 machinedeletionhooks.go:135] skip removing the deletion hook from machine node-0 since its member is still present with any of: [{InternalIP } {InternalIP 192.168.111.25}]",
"oc rsh -n openshift-etcd node-1",
"etcdctl member list -w table",
"+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |61e2a860| started | node-0 |192.168.111.25|192.168.111.25| false | |2c18942f| started | node-1 |192.168.111.26|192.168.111.26| false | |ead4f280| started | node-2 |192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+",
"etcdctl endpoint health",
"{\"level\":\"warn\",\"ts\":\"2022-09-27T10:31:07.227Z\",\"logger\":\"client\",\"caller\":\"v3/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc0000d6e00/192.168.111.25\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\\\"\"} 192.168.111.28 is healthy: committed proposal: took = 13.038278ms 192.168.111.26 is healthy: committed proposal: took = 12.950355ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster",
"etcdctl member remove 61e2a86084aafa62",
"Member 61e2a86084aafa62 removed from cluster 6881c977b97990d7",
"etcdctl member list -w table",
"+----------+---------+--------+--------------+--------------+-------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +----------+---------+--------+--------------+--------------+-------+ | 2c18942f | started | node-1 |192.168.111.26|192.168.111.26| false | | ead4f280 | started | node-2 |192.168.111.28|192.168.111.28| false | +----------+---------+--------+--------------+--------------+-------+",
"oc get csr | grep Pending",
"csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:node-5 <none> Pending",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 20h v1.24.0+3882f8f node-3 Ready worker 20h v1.24.0+3882f8f node-4 Ready worker 20h v1.24.0+3882f8f node-5 Ready master 2m52s v1.24.0+3882f8f",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: node-5 namespace: openshift-machine-api spec: automatedCleaningMode: metadata bootMACAddress: 00:00:00:00:00:02 bootMode: UEFI customDeploy: method: install_coreos externallyProvisioned: true online: true userData: name: master-user-data-managed namespace: openshift-machine-api",
"oc apply -f <filename> 1",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: externally provisioned metal3.io/BareMetalHost: openshift-machine-api/node-5 finalizers: - machine.machine.openshift.io labels: machine.openshift.io/cluster-api-cluster: test-day2-1-6qv96 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: node-5 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: \"\" url: \"\" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed",
"oc apply -f <filename> 1",
"#!/bin/bash Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. This script will link Machine object and Node object. This is needed in order to have IP address of the Node present in the status of the Machine. set -e machine=\"USD1\" node=\"USD2\" if [ -z \"USDmachine\" ] || [ -z \"USDnode\" ]; then echo \"Usage: USD0 MACHINE NODE\" exit 1 fi node_name=USD(echo \"USD{node}\" | cut -f2 -d':') proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH=\"http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts\" function print_nics() { local ips local eob declare -a ips readarray -t ips < <(echo \"USD{1}\" | jq '.[] | select(. | .type == \"InternalIP\") | .address' | sed 's/\"//g') eob=',' for (( i=0; i<USD{#ips[@]}; i++ )); do if [ USD((i+1)) -eq USD{#ips[@]} ]; then eob=\"\" fi cat <<- EOF { \"ip\": \"USD{ips[USDi]}\", \"mac\": \"00:00:00:00:00:00\", \"model\": \"unknown\", \"speedGbps\": 10, \"vlanId\": 0, \"pxe\": true, \"name\": \"eth1\" }USD{eob} EOF done } function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name=\"USD1\" url=\"USD2\" timeout=\"USD3\" shift 3 curl_opts=\"USD@\" echo -n \"Waiting for USDname to respond\" start_time=USD(date +%s) until curl -g -X GET \"USDurl\" \"USD{curl_opts[@]}\" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n \".\" curr_time=USD(date +%s) time_diff=USD((curr_time - start_time)) if [[ USDtime_diff -gt USDtimeout ]]; then printf '\\nTimed out waiting for %s' \"USD{name}\" return 1 fi sleep 5 done echo \" Success!\" return 0 } wait_for_json oc_proxy \"USD{HOST_PROXY_API_PATH}\" 10 -H \"Accept: application/json\" -H \"Content-Type: application/json\" addresses=USD(oc get node -n openshift-machine-api \"USD{node_name}\" -o json | jq -c '.status.addresses') machine_data=USD(oc get machines.machine.openshift.io -n openshift-machine-api -o json \"USD{machine}\") host=USD(echo \"USDmachine_data\" | jq '.metadata.annotations[\"metal3.io/BareMetalHost\"]' | cut -f2 -d/ | sed 's/\"//g') if [ -z \"USDhost\" ]; then echo \"Machine USDmachine is not linked to a host yet.\" 1>&2 exit 1 fi The address structure on the host doesn't match the node, so extract the values we want into separate variables so we can build the patch we need. hostname=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"Hostname\") | .address' | sed 's/\"//g') set +e read -r -d '' host_patch << EOF { \"status\": { \"hardware\": { \"hostname\": \"USD{hostname}\", \"nics\": [ USD(print_nics \"USD{addresses}\") ], \"systemVendor\": { \"manufacturer\": \"Red Hat\", \"productName\": \"product name\", \"serialNumber\": \"\" }, \"firmware\": { \"bios\": { \"date\": \"04/01/2014\", \"vendor\": \"SeaBIOS\", \"version\": \"1.11.0-2.el7\" } }, \"ramMebibytes\": 0, \"storage\": [], \"cpu\": { \"arch\": \"x86_64\", \"model\": \"Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz\", \"clockMegahertz\": 2199.998, \"count\": 4, \"flags\": [] } } } } EOF set -e echo \"PATCHING HOST\" echo \"USD{host_patch}\" | jq . curl -s -X PATCH \"USD{HOST_PROXY_API_PATH}/USD{host}/status\" -H \"Content-type: application/merge-patch+json\" -d \"USD{host_patch}\" get baremetalhost -n openshift-machine-api -o yaml \"USD{host}\"",
"chmod +x link-machine-and-node.sh",
"bash link-machine-and-node.sh node-5 node-5",
"oc rsh -n openshift-etcd node-1",
"etcdctl member list -w table",
"+---------+-------+--------+--------------+--------------+-------+ | ID | STATUS| NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +---------+-------+--------+--------------+--------------+-------+ | 2c18942f|started| node-1 |192.168.111.26|192.168.111.26| false | | ead4f280|started| node-2 |192.168.111.28|192.168.111.28| false | | 79153c5a|started| node-5 |192.168.111.29|192.168.111.29| false | +---------+-------+--------+--------------+--------------+-------+",
"oc get clusteroperator etcd",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE etcd 4.11.5 True False False 22h",
"oc rsh -n openshift-etcd node-1",
"etcdctl endpoint health",
"192.168.111.26 is healthy: committed proposal: took = 9.105375ms 192.168.111.28 is healthy: committed proposal: took = 9.15205ms 192.168.111.29 is healthy: committed proposal: took = 10.277577ms",
"oc get Nodes",
"NAME STATUS ROLES AGE VERSION node-1 Ready master 20h v1.24.0+3882f8f node-2 Ready master 20h v1.24.0+3882f8f node-3 Ready worker 20h v1.24.0+3882f8f node-4 Ready worker 20h v1.24.0+3882f8f node-5 Ready master 40m v1.24.0+3882f8f",
"oc get ClusterOperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.5 True False False 150m baremetal 4.11.5 True False False 22h cloud-controller-manager 4.11.5 True False False 22h cloud-credential 4.11.5 True False False 22h cluster-autoscaler 4.11.5 True False False 22h config-operator 4.11.5 True False False 22h console 4.11.5 True False False 145m csi-snapshot-controller 4.11.5 True False False 22h dns 4.11.5 True False False 22h etcd 4.11.5 True False False 22h image-registry 4.11.5 True False False 22h ingress 4.11.5 True False False 22h insights 4.11.5 True False False 22h kube-apiserver 4.11.5 True False False 22h kube-controller-manager 4.11.5 True False False 22h kube-scheduler 4.11.5 True False False 22h kube-storage-version-migrator 4.11.5 True False False 148m machine-api 4.11.5 True False False 22h machine-approver 4.11.5 True False False 22h machine-config 4.11.5 True False False 110m marketplace 4.11.5 True False False 22h monitoring 4.11.5 True False False 22h network 4.11.5 True False False 22h node-tuning 4.11.5 True False False 22h openshift-apiserver 4.11.5 True False False 163m openshift-controller-manager 4.11.5 True False False 22h openshift-samples 4.11.5 True False False 22h operator-lifecycle-manager 4.11.5 True False False 22h operator-lifecycle-manager-catalog 4.11.5 True False False 22h operator-lifecycle-manager-pkgsvr 4.11.5 True False False 22h service-ca 4.11.5 True False False 22h storage 4.11.5 True False False 22h",
"oc get ClusterVersion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 22h Cluster version is 4.11.5"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_openshift_container_platform_with_the_assisted_installer/expanding-the-cluster |
Chapter 2. Client development prerequisites | Chapter 2. Client development prerequisites The following prerequisites are required for developing clients to use with AMQ Streams. You have a Red Hat account. You have a Kafka cluster running in AMQ Streams. Kafka brokers are configured with listeners for secure client connections. Topics have been created for your cluster. You have an IDE to develop and test your client. JDK 11 or later is installed. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/developing_kafka_client_applications/client-dev-prereqs-str |
Storage | Storage OpenShift Container Platform 4.10 Configuring and managing storage in OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/storage/index |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/hardening_red_hat_openstack_platform/proc_providing-feedback-on-red-hat-documentation |
Schedule and quota APIs | Schedule and quota APIs OpenShift Container Platform 4.16 Reference guide for schedule and quota APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/schedule_and_quota_apis/index |
Chapter 6. Managing Atomic Hosts | Chapter 6. Managing Atomic Hosts 6.1. Atomic Host The atomic command-line tool can be used to check the status of your Atomic Host system, perform upgrades and rollbacks or deploy a specific operating system tree. Use atomic host status to list the operating system trees downloaded on your system and check which version you are currently running. The asterisk ( * ) marks the currently running tree. To upgrade your system, use atomic host upgrade . This command will download the latest ostree available and will deploy it after you reboot the system. When you upgrade again, the newly downloaded ostree will replace the oldest one. Upgrading can take a few minutes. To switch back to the other downloaded tree on your system, use atomic host rollback . This command is particularly useful when there is a problem after upgrade (for example the new packages break a service that you've configured) because it lets you quickly switch back to the state. You can use the -r option to initiate a reboot immediately: To deploy a specific version of an ostree, use atomic host deploy . You can specify a version or a commit ID if you know it. Use the --preview option to check the package difference between the specified version and your currently running tree. For finer tasks you can use the ostree tool to manage you Atomic Host. For example, if you are unsure about the version numbering of the trees, you can use the following commands to fetch the ostree logs and list the versions available: You can delete an ostree deployment using one of the following commands: The -p option causes the pending deployment to be removed, while -r removes the rollback deployment. To clear temporary files and cached data, use one of the following commands: The -m option deletes cached RPM repository metadata, while -b clears temporary files, but leaves deployments intact. Both the atomic and ostree tools have built-in detailed --help options, to check all commands available on the system, use atomic host --help or ostree --help . 6.2. Package Layering Using rpm-ostree install , you can add an RPM software packages that is not part of the original OSTree permanently to the system. With rpm-ostree override , you can override an existing RPM package in the base system layer with a different version of that package. These features are done using the package layering feature. Package layering is useful when you need to install a certain package on a single machine, without affecting other machines that share the same OSTree. An example use case of package layering is installing diagnostics tools, such as strace . An example of overriding a package in the base system is if you wanted to use a different version of the docker package. 6.2.1. Installing a new RPM package on a RHEL Atomic Host To install a layered package and its dependencies on RHEL Atomic Host, run the following command: To remove a layered package, use the following command: Running the rpm-ostree install or rpm-ostree uninstall does not immediately install or uninstall the packages. To actually install or uninstall the packages, you have two options: Reboot the system. Use LiveFS to apply the changes immediately. Note LiveFS is still a technology preview feature, so do not rely on it in production. If you are only installing packages, use the rpm-ostree ex livefs command. If you are deleting or upgrading the packages, use the rpm-ostree ex livefs --replace command. You can find out which packages have been manually installed on the system by running atomic host status . The following is an example of installing strace on RHEL Atomic Host and how to verify it is part of the OSTree. Just as with installing a package with yum , you must register and subscribe your Atomic Host system before installing packages: Check the current status of your Atomic Host's deployments: Install the strace package as follows: Check the status again to see the layered package created by installing strace. Note that the strace package does not appear to be installed yet: Consider several issues: Although the package was installed on its own layer, it does not yet appear as being installed on the system. At this point you need to apply the pending deployment by either rebooting or applying them immediately using rpm-ostree ex livefs . Before making that decision, however, take into account these notes and limitations: If you run rpm-ostree install several times consecutively without rebooting or applying changes live, only the most recent command will take effect. If you install wget and strace consecutively and reboot afterwards, only strace will be present after reboot. To add multiple packages onto a new deployment, specify them all on the same line with the command. For example: Installing packages which contain files owned by users other than root is currently not supported. For example, the httpd package contains files with a group ownership of apache , installing it will fail: After rpm-ostree install , do not use atomic host deploy or rpm-ostree deploy to deploy a specific version OSTree version older than 7.2.6. If you attempt to deploy to such a version after rpm-ostree install , the system will be left in a state where you will be unable to use atomic host upgrade or rpm-ostree upgrade to upgrade to the release. However, atomic host rollback or rpm-ostree rollback will still be successful and bring the system back to the deployment. Reboot or LiveFS: Either reboot for the deployments to take effect or use the livefs feature, to have them immediately take effect, as follows: Check again to see that the strace package is installed and note the status of deployments, including the new LiveCommit: At this point, you can go ahead and start using the installed software. For more information on rpm-ostree and Live updates, see the Project Atomic rpm-ostree 6.2.2. Downloading and caching RPMs for later installation The --download-only and --cache-only options allow to separate the rpm-ostree install process into two stages: Downloading and caching the layered RPMs. Installing from the cached data. These options enable you to download the RPMs at one time, and then install them later whenever you choose, even offline. 6.2.3. Updating the repository metadata The rpm-ostree refresh-md subcommand downloads and caches the latest repository metadata. It is similar to the yum makecache command for the Yum package manager. 6.2.4. Overriding an existing RPM package To override an RPM package that is in the Atomic base and install a different version, you use the rpm-ostree override command. Here's how it works: Copy the RPM package you want to use to the Atomic host. Include any dependent packages needed by the RPM as well. The packages can be upgrades or downgrades from the current packages. Run the rpm-ostree override command. Reboot the Atomic host for the change to take effect. Note See Locking the version of the docker package on RHEL Atomic Host for an example of how to use rpm-ostree override to replace the docker runtime in Atomic. Here's an example of replacing the openssh-server package (and dependent packages) on a RHEL Atomic Host. Get the RPM package (and dependent packages) you want to replace and put them in a directory on the Atomic Host. With the packages in the current directory (in this case, downgrades of openssh-server, openssh-clients, and openssh), type the following to replace those packages: Reboot the Atomic Host system: Check that the packages have been installed and are available: If you just want to go back to the package versions, you can use rpm-ostree override reset to do that. Use rpm-ostree override reset <packagename> to remove individual packages or rpm-ostree override reset --all to remove all overridden packages. 6.3. "ostree admin unlock" ostree admin unlock unlocks the current ostree deployment and allows packages to be installed temporarily by mounting a writable overlayfs on /usr . However, the packages installed afterwards will not persist after reboot. ostree admin unlock also has certain limitations and known issues with overlayfs and SELinux, so it should be used only for testing. For adding software, rpm-ostree install is recommended for long-term use. 6.4. System Containers and Super-Privileged Containers (SPCs) In some cases, containerized services or applications require that they are run from a container image that is built in a different than the default way for Docker-formatted containers. Such special case containers are the Super-Privileged Containers (SPCs), and the system containers. They are necessary in two situations: SPCs : When a certain container needs more privileges and access to the host. Super-Privileged Containers are run with special privileges to the host computer, and unlike the default Docker-formatted containers, are able to modify the host. Tools for monitoring and logging are containzerized in SPCs so they can have the access to the host they requires. Such SPCs are rsyslog , sadc , and the atomic-tools container. For detailed information about SPCs, see Running Super-Privileged Containers chapter from the Red Hat Enterprise Linux Atomic Host Managing Containers Guide. System Containers : A certain container needs to run independently of the docker daemon. System containers are a way to containerize services which are needed before the docker daemon is running. Such services configure the environment for docker, (for example setting up networking), so they can't be run by the docker daemon and because of that, they are not Docker-formatted images. They use runc for runtime, ostree for storage, skopeo for searching and pulling from a registry and systemd for management. A system container is executed from a systemd UNIT file using the runc utility. Additionally, containerizing such services is a way to make the ostree system image smaller. Such System Containers are etcd and flannel . For detailed information, see Running System Containers chapter from the Red Hat Enterprise Linux Atomic Host Managing Containers Guide. | [
"atomic host status State: idle Deployments: * rhel-atomic-host-ostree:rhel-atomic-host/7/x86_64/standard Version: 7.3 (2016-09-27 17:53:07) BaseCommit: d3fa3283db8c5ee656f78dcfc0fcffe6cd5aa06596dac6ec5e436352208a59cb Commit: f5e639ce8186386d74e2558e6a34f55a427d8f59412d47a907793e046875d8dd OSName: rhel-atomic-host rhel-atomic-host-ostree:rhel-atomic-host/7.2/x86_64/standard Version: 7.2.7 (2016-09-15 22:28:54) BaseCommit: dbbc8e805f0003d8e55658dc220f1fe1397caf80221cc050eeb1bbf44bef56a1 Commit: 5cd426fa86bd1652ecd8f7d489f89f13ecb7d36e66003b0d7669721cb79545a8 OSName: rhel-atomic-host",
"atomic host upgrade systemctl reboot",
"atomic host rollback -r",
"atomic host deploy <version/commit ID>",
"atomic host deploy 7.3 --preview",
"ostree pull --commit-metadata-only --depth -1 rhel-atomic-host-ostree:rhel-atomic-host/7/x86_64/standard ostree log rhel-atomic-host/7/x86_64/standard",
"rpm-ostree cleanup -r rpm-ostree cleanup -p",
"rpm-ostree -m rpm-ostree -b",
"rpm-ostree install <package>",
"rpm-ostree uninstall <package>",
"-bash-4.2# rpm-ostree status State: idle Deployments: ● rhelah-7.4:rhel-atomic-host/7/x86_64/standard Version: 7.4.0 (2017-07-28 00:26:01) Commit: 846fb0e18e65bd9a62fc9d952627413c6467c33c2d726449a1d7ad7690bbb93a rhel-atomic-host-ostree:rhel-atomic-host/7/x86_64/standard Version: 7.4.0 (2017-07-13 17:46:26) Commit: c28cad0d4144d91a3c206574e9341cd5bdf7d34cfaa2acb74dd84c0bf022593a GPGSignature: 1 signature Signature made Thu 13 Jul 2017 01:54:13 PM EDT using RSA key ID 199E2F91FD431D51 Good signature from \"Red Hat, Inc. <[email protected]>\"",
"-bash-4.2# rpm-ostree install strace Checking out tree 846fb0e... done Importing metadata [===========================================] 100% Resolving dependencies... done Will download: 1 package (470.0 kB) Downloading from rhel-7-server-rpms: [=======================] 100% Importing: [===================================================] 100% Overlaying... done Writing rpmdb... done Writing OSTree commit... done Copying /etc changes: 20 modified, 5 removed, 43 added Transaction complete; bootconfig swap: yes deployment count change: 0 Freed objects: 388.5 MB Added: strace-4.12-4.el7.x86_64 Run \"systemctl reboot\" to start a reboot",
"-bash-4.2# rpm-ostree status State: idle Deployments: rhelah-7.4:rhel-atomic-host/7/x86_64/standard Version: 7.4.0 (2017-07-28 00:26:01) BaseCommit: 846fb0e18e65bd9a62fc9d952627413c6467c33c2d726449a1d7ad7690bbb93a LayeredPackages: strace ● rhelah-7.4:rhel-atomic-host/7/x86_64/standard Version: 7.4.0 (2017-07-28 00:26:01) Commit: 846fb0e18e65bd9a62fc9d952627413c6467c33c2d726449a1d7ad7690bbb93a",
"rpm -q strace package strace is not installed",
"rpm-ostree install wget strace",
"rpm-ostree install httpd notice: pkg-add is a preview command and subject to change. Downloading metadata: [====================] 100% Resolving dependencies... done error: Unpacking httpd-2.4.6-40.el7_2.4.x86_64: Non-root ownership currently unsupported: path \"/run/httpd\" marked as root:apache)",
"rpm-ostree ex livefs notice: \"livefs\" is an experimental command and subject to change. Diff Analysis: 846fb0e18e65bd9a62fc9d952627413c6467c33c2d726449a1d7ad7690bbb93a => 97f937f3789d0f25b887bcd4fcc03d33b76ee4c87095af48c602b5826519ce1b Files: modified: 0 removed: 0 added: 11 Packages: modified: 0 removed: 0 added: 1 Preparing new rollback matching currently booted deployment Copying /etc changes: 20 modified, 5 removed, 43 added Transaction complete; bootconfig swap: yes deployment count change: 1 Overlaying /usr... done",
"rpm -q strace strace-4.12-4.el7.x86_64 rpm-ostree status State: idle Deployments: rhelah-7.4:rhel-atomic-host/7/x86_64/standard Version: 7.4.0 (2017-07-28 00:26:01) BaseCommit: 846fb0e18e65bd9a62fc9d952627413c6467c33c2d726449a1d7ad7690bbb93a Commit: 97f937f3789d0f25b887bcd4fcc03d33b76ee4c87095af48c602b5826519ce1b LayeredPackages: strace ● rhelah-7.4:rhel-atomic-host/7/x86_64/standard Version: 7.4.0 (2017-07-28 00:26:01) BootedCommit: 846fb0e18e65bd9a62fc9d952627413c6467c33c2d726449a1d7ad7690bbb93a LiveCommit: 97f937f3789d0f25b887bcd4fcc03d33b76ee4c87095af48c602b5826519ce1b rhelah-7.4:rhel-atomic-host/7/x86_64/standard Version: 7.4.0 (2017-07-28 00:26:01) Commit: 846fb0e18e65bd9a62fc9d952627413c6467c33c2d726449a1d7ad7690bbb93a",
"rpm-ostree override replace openssh-server-6.6.1p1-35.el7_3.x86_64.rpm openssh-clients-6.6.1p1-35.el7_3.x86_64.rpm openssh-6.6.1p1-35.el7_3.x86_64.rpm Checking out tree 5df677d... done Transaction complete; bootconfig swap: yes deployment count change: 1 Downgraded: openssh 7.4p1-16.el7 -> 6.6.1p1-35.el7_3 openssh-clients 7.4p1-16.el7 -> 6.6.1p1-35.el7_3 openssh-server 7.4p1-16.el7 -> 6.6.1p1-35.el7_3 Run \"systemctl reboot\" to start a reboot",
"systemctl reboot",
"atomic host status State: idle Deployments: ● ostree://rhel-atomic-host-ostree:rhel-atomic-host/7/x86_64/standard Version: 7.5.0 (2018-04-05 10:29:00) BaseCommit: 5df677dcfef08a87dd0ace55790e184a35716cf11260239216bfeba2eb7c60b0 ReplacedBasePackages: openssh openssh-server openssh-clients 7.4p1-16.el7 -> 6.6.1p1-35.el7_3 rpm -q openssh openssh-clients openssh-server openssh-6.6.1p1-35.el7_3.x86_64 openssh-clients-6.6.1p1-35.el7_3.x86_64 openssh-server-6.6.1p1-35.el7_3.x86_64"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/installation_and_configuration_guide/managing_atomic_hosts |
Chapter 5. Managing metrics | Chapter 5. Managing metrics You can collect metrics to monitor how cluster components and your own workloads are performing. 5.1. Understanding metrics In OpenShift Container Platform 4.9, cluster components are monitored by scraping metrics exposed through service endpoints. You can also configure metrics collection for user-defined projects. You can define the metrics that you want to provide for your own workloads by using Prometheus client libraries at the application level. In OpenShift Container Platform, metrics are exposed through an HTTP service endpoint under the /metrics canonical name. You can list all available metrics for a service by running a curl query against http://<endpoint>/metrics . For instance, you can expose a route to the prometheus-example-app example service and then run the following to view all of its available metrics: USD curl http://<example_app_endpoint>/metrics Example output # HELP http_requests_total Count of all HTTP requests # TYPE http_requests_total counter http_requests_total{code="200",method="get"} 4 http_requests_total{code="404",method="get"} 2 # HELP version Version information about this binary # TYPE version gauge version{version="v0.1.0"} 1 Additional resources See the Prometheus documentation for details on Prometheus client libraries. 5.2. Setting up metrics collection for user-defined projects You can create a ServiceMonitor resource to scrape metrics from a service endpoint in a user-defined project. This assumes that your application uses a Prometheus client library to expose metrics to the /metrics canonical name. This section describes how to deploy a sample service in a user-defined project and then create a ServiceMonitor resource that defines how that service should be monitored. 5.2.1. Deploying a sample service To test monitoring of a service in a user-defined project, you can deploy a sample service. Procedure Create a YAML file for the service configuration. In this example, it is called prometheus-example-app.yaml . Add the following deployment and service configuration details to the file: apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.1 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP This configuration deploys a service named prometheus-example-app in the user-defined ns1 project. This service exposes the custom version metric. Apply the configuration to the cluster: USD oc apply -f prometheus-example-app.yaml It takes some time to deploy the service. You can check that the pod is running: USD oc -n ns1 get pod Example output NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m 5.2.2. Specifying how a service is monitored To use the metrics exposed by your service, you must configure OpenShift Container Platform monitoring to scrape metrics from the /metrics endpoint. You can do this using a ServiceMonitor custom resource definition (CRD) that specifies how a service should be monitored, or a PodMonitor CRD that specifies how a pod should be monitored. The former requires a Service object, while the latter does not, allowing Prometheus to directly scrape metrics from the metrics endpoint exposed by a pod. This procedure shows you how to create a ServiceMonitor resource for a service in a user-defined project. Prerequisites You have access to the cluster as a user with the cluster-admin role or the monitoring-edit role. You have enabled monitoring for user-defined projects. For this example, you have deployed the prometheus-example-app sample service in the ns1 project. Note The prometheus-example-app sample service does not support TLS authentication. Procedure Create a YAML file for the ServiceMonitor resource configuration. In this example, the file is called example-app-service-monitor.yaml . Add the following ServiceMonitor resource configuration details: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: prometheus-example-monitor name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app This defines a ServiceMonitor resource that scrapes the metrics exposed by the prometheus-example-app sample service, which includes the version metric. Note A ServiceMonitor resource in a user-defined namespace can only discover services in the same namespace. That is, the namespaceSelector field of the ServiceMonitor resource is always ignored. Apply the configuration to the cluster: USD oc apply -f example-app-service-monitor.yaml It takes some time to deploy the ServiceMonitor resource. You can check that the ServiceMonitor resource is running: USD oc -n ns1 get servicemonitor Example output NAME AGE prometheus-example-monitor 81m Additional resources Enabling monitoring for user-defined projects How to scrape metrics using TLS in a ServiceMonitor configuration in a user-defined project PodMonitor API ServiceMonitor API 5.3. Querying metrics The OpenShift Container Platform monitoring dashboard enables you to run Prometheus Query Language (PromQL) queries to examine metrics visualized on a plot. This functionality provides information about the state of a cluster and any user-defined workloads that you are monitoring. As a cluster administrator , you can query metrics for all core OpenShift Container Platform and user-defined projects. As a developer , you must specify a project name when querying metrics. You must have the required privileges to view metrics for the selected project. 5.3.1. Querying metrics for all projects as a cluster administrator As a cluster administrator or as a user with view permissions for all projects, you can access metrics for all default OpenShift Container Platform and user-defined projects in the Metrics UI. Note Only cluster administrators have access to the third-party UIs provided with OpenShift Container Platform Monitoring. Prerequisites You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective within the OpenShift Container Platform web console, select Observe Metrics . Select Insert Metric at Cursor to view a list of predefined queries. To create a custom query, add your Prometheus Query Language (PromQL) query to the Expression field. To add multiple queries, select Add Query . To delete a query, select to the query, then choose Delete query . To disable a query from being run, select to the query and choose Disable query . Select Run Queries to run the queries that you have created. The metrics from the queries are visualized on the plot. If a query is invalid, the UI shows an error message. Note Queries that operate on large amounts of data might time out or overload the browser when drawing time series graphs. To avoid this, select Hide graph and calibrate your query using only the metrics table. Then, after finding a feasible query, enable the plot to draw the graphs. Optional: The page URL now contains the queries you ran. To use this set of queries again in the future, save this URL. Additional resources See the Prometheus query documentation for more information about creating PromQL queries. 5.3.2. Querying metrics for user-defined projects as a developer You can access metrics for a user-defined project as a developer or as a user with view permissions for the project. In the Developer perspective, the Metrics UI includes some predefined CPU, memory, bandwidth, and network packet queries for the selected project. You can also run custom Prometheus Query Language (PromQL) queries for CPU, memory, bandwidth, network packet and application metrics for the project. Note Developers can only use the Developer perspective and not the Administrator perspective. As a developer, you can only query metrics for one project at a time. Developers cannot access the third-party UIs provided with OpenShift Container Platform monitoring that are for core platform components. Instead, use the Metrics UI for your user-defined project. Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for. You have enabled monitoring for user-defined projects. You have deployed a service in a user-defined project. You have created a ServiceMonitor custom resource definition (CRD) for the service to define how the service is monitored. Procedure From the Developer perspective in the OpenShift Container Platform web console, select Observe Metrics . Select the project that you want to view metrics for in the Project: list. Choose a query from the Select Query list, or run a custom PromQL query by selecting Show PromQL . Note In the Developer perspective, you can only run one query at a time. Additional resources See the Prometheus query documentation for more information about creating PromQL queries. Additional resources See the Querying metrics for user-defined projects as a developer for details on accessing non-cluster metrics as a developer or a privileged user 5.3.3. Exploring the visualized metrics After running the queries, the metrics are displayed on an interactive plot. The X-axis in the plot represents time and the Y-axis represents metrics values. Each metric is shown as a colored line on the graph. You can manipulate the plot interactively and explore the metrics. Procedure In the Administrator perspective: Initially, all metrics from all enabled queries are shown on the plot. You can select which metrics are shown. Note By default, the query table shows an expanded view that lists every metric and its current value. You can select ˅ to minimize the expanded view for a query. To hide all metrics from a query, click for the query and click Hide all series . To hide a specific metric, go to the query table and click the colored square near the metric name. To zoom into the plot and change the time range, do one of the following: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu in the left upper corner to select the time range. To reset the time range, select Reset Zoom . To display outputs for all queries at a specific point in time, hold the mouse cursor on the plot at that point. The query outputs will appear in a pop-up box. To hide the plot, select Hide Graph . In the Developer perspective: To zoom into the plot and change the time range, do one of the following: Visually select the time range by clicking and dragging on the plot horizontally. Use the menu in the left upper corner to select the time range. To reset the time range, select Reset Zoom . To display outputs for all queries at a specific point in time, hold the mouse cursor on the plot at that point. The query outputs will appear in a pop-up box. Additional resources See the Querying metrics section on using the PromQL interface 5.4. steps Managing alerts | [
"curl http://<example_app_endpoint>/metrics",
"HELP http_requests_total Count of all HTTP requests TYPE http_requests_total counter http_requests_total{code=\"200\",method=\"get\"} 4 http_requests_total{code=\"404\",method=\"get\"} 2 HELP version Version information about this binary TYPE version gauge version{version=\"v0.1.0\"} 1",
"apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: ghcr.io/rhobs/prometheus-example-app:0.4.1 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP",
"oc apply -f prometheus-example-app.yaml",
"oc -n ns1 get pod",
"NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: prometheus-example-monitor name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app",
"oc apply -f example-app-service-monitor.yaml",
"oc -n ns1 get servicemonitor",
"NAME AGE prometheus-example-monitor 81m"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/monitoring/managing-metrics |
7.6. NotifyingFutures | 7.6. NotifyingFutures Methods in Red Hat JBoss Data Grid do not return Java Development Kit (JDK) Futures , but a sub-interface known as a NotifyingFuture . Unlike a JDK Future , a listener can be attached to a NotifyingFuture to notify the user about a completed future. Note NotifyingFutures are only available in JBoss Data Grid Library mode. Report a bug 7.6.1. NotifyingFutures Example The following is an example depicting how to use NotifyingFutures in Red Hat JBoss Data Grid: Example 7.6. Configuring NotifyingFutures Report a bug | [
"FutureListener futureListener = new FutureListener() { public void futureDone(Future future) { try { future.get(); } catch (Exception e) { // Future did not complete successfully System.out.println(\"Help!\"); } } }; cache.putAsync(\"key\", \"value\").attachListener(futureListener);"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/sect-notifyingfutures |
Chapter 4. ComponentStatus [v1] | Chapter 4. ComponentStatus [v1] Description ComponentStatus (and ComponentStatusList) holds the cluster validation info. Deprecated: This API is deprecated in v1.19+ Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array List of component conditions observed conditions[] object Information about the condition of a component. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.1. .conditions Description List of component conditions observed Type array 4.1.2. .conditions[] Description Information about the condition of a component. Type object Required type status Property Type Description error string Condition error code for a component. For example, a health check error code. message string Message about the condition for a component. For example, information about a health check. status string Status of the condition for a component. Valid values for "Healthy": "True", "False", or "Unknown". type string Type of condition for a component. Valid value: "Healthy" 4.2. API endpoints The following API endpoints are available: /api/v1/componentstatuses GET : list objects of kind ComponentStatus /api/v1/componentstatuses/{name} GET : read the specified ComponentStatus 4.2.1. /api/v1/componentstatuses HTTP method GET Description list objects of kind ComponentStatus Table 4.1. HTTP responses HTTP code Reponse body 200 - OK ComponentStatusList schema 401 - Unauthorized Empty 4.2.2. /api/v1/componentstatuses/{name} Table 4.2. Global path parameters Parameter Type Description name string name of the ComponentStatus HTTP method GET Description read the specified ComponentStatus Table 4.3. HTTP responses HTTP code Reponse body 200 - OK ComponentStatus schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/metadata_apis/componentstatus-v1 |
Chapter 3. Automatically scaling pods with the Custom Metrics Autoscaler Operator | Chapter 3. Automatically scaling pods with the Custom Metrics Autoscaler Operator 3.1. Release notes 3.1.1. Custom Metrics Autoscaler Operator release notes The release notes for the Custom Metrics Autoscaler Operator for Red Hat OpenShift describe new features and enhancements, deprecated features, and known issues. The Custom Metrics Autoscaler Operator uses the Kubernetes-based Event Driven Autoscaler (KEDA) and is built on top of the OpenShift Container Platform horizontal pod autoscaler (HPA). Note The Custom Metrics Autoscaler Operator for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. 3.1.1.1. Supported versions The following table defines the Custom Metrics Autoscaler Operator versions for each OpenShift Container Platform version. Version OpenShift Container Platform version General availability 2.14.1 4.16 General availability 2.14.1 4.15 General availability 2.14.1 4.14 General availability 2.14.1 4.13 General availability 2.14.1 4.12 General availability 3.1.1.2. Custom Metrics Autoscaler Operator 2.14.1-467 release notes This release of the Custom Metrics Autoscaler Operator 2.14.1-467 provides a CVE and a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2024:7348 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.1.2.1. Bug fixes Previously, the root file system of the Custom Metrics Autoscaler Operator pod was writable, which is unnecessary and could present security issues. This update makes the pod root file system read-only, which addresses the potential security issue. ( OCPBUGS-37989 ) 3.1.2. Release notes for past releases of the Custom Metrics Autoscaler Operator The following release notes are for versions of the Custom Metrics Autoscaler Operator. For the current version, see Custom Metrics Autoscaler Operator release notes . 3.1.2.1. Custom Metrics Autoscaler Operator 2.14.1-454 release notes This release of the Custom Metrics Autoscaler Operator 2.14.1-454 provides a CVE, a new feature, and bug fixes for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHBA-2024:5865 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.1.1. New features and enhancements 3.1.2.1.1.1. Support for the Cron trigger with the Custom Metrics Autoscaler Operator The Custom Metrics Autoscaler Operator can now use the Cron trigger to scale pods based on an hourly schedule. When your specified time frame starts, the Custom Metrics Autoscaler Operator scales pods to your desired amount. When the time frame ends, the Operator scales back down to the level. For more information, see Understanding the Cron trigger . 3.1.2.1.2. Bug fixes Previously, if you made changes to audit configuration parameters in the KedaController custom resource, the keda-metrics-server-audit-policy config map would not get updated. As a consequence, you could not change the audit configuration parameters after the initial deployment of the Custom Metrics Autoscaler. With this fix, changes to the audit configuration now render properly in the config map, allowing you to change the audit configuration any time after installation. ( OCPBUGS-32521 ) 3.1.2.2. Custom Metrics Autoscaler Operator 2.13.1 release notes This release of the Custom Metrics Autoscaler Operator 2.13.1-421 provides a new feature and a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHBA-2024:4837 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.2.1. New features and enhancements 3.1.2.2.1.1. Support for custom certificates with the Custom Metrics Autoscaler Operator The Custom Metrics Autoscaler Operator can now use custom service CA certificates to connect securely to TLS-enabled metrics sources, such as an external Kafka cluster or an external Prometheus service. By default, the Operator uses automatically-generated service certificates to connect to on-cluster services only. There is a new field in the KedaController object that allows you to load custom server CA certificates for connecting to external services by using config maps. For more information, see Custom CA certificates for the Custom Metrics Autoscaler . 3.1.2.2.2. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images were missing time zone information. As a consequence, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds are updated to include time zone information. As a result, scaled objects containing cron triggers now function properly. Scaled objects containing cron triggers are currently not supported for the custom metrics autoscaler. ( OCPBUGS-34018 ) 3.1.2.3. Custom Metrics Autoscaler Operator 2.12.1-394 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-394 provides a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2024:2901 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.3.1. Bug fixes Previously, the protojson.Unmarshal function entered into an infinite loop when unmarshaling certain forms of invalid JSON. This condition could occur when unmarshaling into a message that contains a google.protobuf.Any value or when the UnmarshalOptions.DiscardUnknown option is set. This release fixes this issue. ( OCPBUGS-30305 ) Previously, when parsing a multipart form, either explicitly with the Request.ParseMultipartForm method or implicitly with the Request.FormValue , Request.PostFormValue , or Request.FormFile method, the limits on the total size of the parsed form were not applied to the memory consumed. This could cause memory exhaustion. With this fix, the parsing process now correctly limits the maximum size of form lines while reading a single form line. ( OCPBUGS-30360 ) Previously, when following an HTTP redirect to a domain that is not on a matching subdomain or on an exact match of the initial domain, an HTTP client would not forward sensitive headers, such as Authorization or Cookie . For example, a redirect from example.com to www.example.com would forward the Authorization header, but a redirect to www.example.org would not forward the header. This release fixes this issue. ( OCPBUGS-30365 ) Previously, verifying a certificate chain that contains a certificate with an unknown public key algorithm caused the certificate verification process to panic. This condition affected all crypto and Transport Layer Security (TLS) clients and servers that set the Config.ClientAuth parameter to the VerifyClientCertIfGiven or RequireAndVerifyClientCert value. The default behavior is for TLS servers to not verify client certificates. This release fixes this issue. ( OCPBUGS-30370 ) Previously, if errors returned from the MarshalJSON method contained user-controlled data, an attacker could have used the data to break the contextual auto-escaping behavior of the HTML template package. This condition would allow for subsequent actions to inject unexpected content into the templates. This release fixes this issue. ( OCPBUGS-30397 ) Previously, the net/http and golang.org/x/net/http2 Go packages did not limit the number of CONTINUATION frames for an HTTP/2 request. This condition could result in excessive CPU consumption. This release fixes this issue. ( OCPBUGS-30894 ) 3.1.2.4. Custom Metrics Autoscaler Operator 2.12.1-384 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-384 provides a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHBA-2024:2043 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.4.1. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images were missing time zone information. As a consequence, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds are updated to include time zone information. As a result, scaled objects containing cron triggers now function properly. ( OCPBUGS-32395 ) 3.1.2.5. Custom Metrics Autoscaler Operator 2.12.1-376 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-376 provides security updates and bug fixes for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2024:1812 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.5.1. Bug fixes Previously, if invalid values such as nonexistent namespaces were specified in scaled object metadata, the underlying scaler clients would not free, or close, their client descriptors, resulting in a slow memory leak. This fix properly closes the underlying client descriptors when there are errors, preventing memory from leaking. ( OCPBUGS-30145 ) Previously the ServiceMonitor custom resource (CR) for the keda-metrics-apiserver pod was not functioning, because the CR referenced an incorrect metrics port name of http . This fix corrects the ServiceMonitor CR to reference the proper port name of metrics . As a result, the Service Monitor functions properly. ( OCPBUGS-25806 ) 3.1.2.6. Custom Metrics Autoscaler Operator 2.11.2-322 release notes This release of the Custom Metrics Autoscaler Operator 2.11.2-322 provides security updates and bug fixes for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2023:6144 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.6.1. Bug fixes Because the Custom Metrics Autoscaler Operator version 3.11.2-311 was released without a required volume mount in the Operator deployment, the Custom Metrics Autoscaler Operator pod would restart every 15 minutes. This fix adds the required volume mount to the Operator deployment. As a result, the Operator no longer restarts every 15 minutes. ( OCPBUGS-22361 ) 3.1.2.7. Custom Metrics Autoscaler Operator 2.11.2-311 release notes This release of the Custom Metrics Autoscaler Operator 2.11.2-311 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.11.2-311 were released in RHBA-2023:5981 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.7.1. New features and enhancements 3.1.2.7.1.1. Red Hat OpenShift Service on AWS (ROSA) and OpenShift Dedicated are now supported The Custom Metrics Autoscaler Operator 2.11.2-311 can be installed on OpenShift ROSA and OpenShift Dedicated managed clusters. versions of the Custom Metrics Autoscaler Operator could be installed only in the openshift-keda namespace. This prevented the Operator from being installed on OpenShift ROSA and OpenShift Dedicated clusters. This version of Custom Metrics Autoscaler allows installation to other namespaces such as openshift-operators or keda , enabling installation into ROSA and Dedicated clusters. 3.1.2.7.2. Bug fixes Previously, if the Custom Metrics Autoscaler Operator was installed and configured, but not in use, the OpenShift CLI reported the couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1 error after any oc command was entered. The message, although harmless, could have caused confusion. With this fix, the Got empty response for: external.metrics... error no longer appears inappropriately. ( OCPBUGS-15779 ) Previously, any annotation or label change to objects managed by the Custom Metrics Autoscaler were reverted by Custom Metrics Autoscaler Operator any time the Keda Controller was modified, for example after a configuration change. This caused continuous changing of labels in your objects. The Custom Metrics Autoscaler now uses its own annotation to manage labels and annotations, and annotation or label are no longer inappropriately reverted. ( OCPBUGS-15590 ) 3.1.2.8. Custom Metrics Autoscaler Operator 2.10.1-267 release notes This release of the Custom Metrics Autoscaler Operator 2.10.1-267 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.10.1-267 were released in RHBA-2023:4089 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.8.1. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images did not contain time zone information. Because of this, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds now include time zone information. As a result, scaled objects containing cron triggers now function properly. ( OCPBUGS-15264 ) Previously, the Custom Metrics Autoscaler Operator would attempt to take ownership of all managed objects, including objects in other namespaces and cluster-scoped objects. Because of this, the Custom Metrics Autoscaler Operator was unable to create the role binding for reading the credentials necessary to be an API server. This caused errors in the kube-system namespace. With this fix, the Custom Metrics Autoscaler Operator skips adding the ownerReference field to any object in another namespace or any cluster-scoped object. As a result, the role binding is now created without any errors. ( OCPBUGS-15038 ) Previously, the Custom Metrics Autoscaler Operator added an ownerReferences field to the openshift-keda namespace. While this did not cause functionality problems, the presence of this field could have caused confusion for cluster administrators. With this fix, the Custom Metrics Autoscaler Operator does not add the ownerReference field to the openshift-keda namespace. As a result, the openshift-keda namespace no longer has a superfluous ownerReference field. ( OCPBUGS-15293 ) Previously, if you used a Prometheus trigger configured with authentication method other than pod identity, and the podIdentity parameter was set to none , the trigger would fail to scale. With this fix, the Custom Metrics Autoscaler for OpenShift now properly handles the none pod identity provider type. As a result, a Prometheus trigger configured with authentication method other than pod identity, and the podIdentity parameter sset to none now properly scales. ( OCPBUGS-15274 ) 3.1.2.9. Custom Metrics Autoscaler Operator 2.10.1 release notes This release of the Custom Metrics Autoscaler Operator 2.10.1 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.10.1 were released in RHEA-2023:3199 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.9.1. New features and enhancements 3.1.2.9.1.1. Custom Metrics Autoscaler Operator general availability The Custom Metrics Autoscaler Operator is now generally available as of Custom Metrics Autoscaler Operator version 2.10.1. Important Scaling by using a scaled job is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.1.2.9.1.2. Performance metrics You can now use the Prometheus Query Language (PromQL) to query metrics on the Custom Metrics Autoscaler Operator. 3.1.2.9.1.3. Pausing the custom metrics autoscaling for scaled objects You can now pause the autoscaling of a scaled object, as needed, and resume autoscaling when ready. 3.1.2.9.1.4. Replica fall back for scaled objects You can now specify the number of replicas to fall back to if a scaled object fails to get metrics from the source. 3.1.2.9.1.5. Customizable HPA naming for scaled objects You can now specify a custom name for the horizontal pod autoscaler in scaled objects. 3.1.2.9.1.6. Activation and scaling thresholds Because the horizontal pod autoscaler (HPA) cannot scale to or from 0 replicas, the Custom Metrics Autoscaler Operator does that scaling, after which the HPA performs the scaling. You can now specify when the HPA takes over autoscaling, based on the number of replicas. This allows for more flexibility with your scaling policies. 3.1.2.10. Custom Metrics Autoscaler Operator 2.8.2-174 release notes This release of the Custom Metrics Autoscaler Operator 2.8.2-174 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.8.2-174 were released in RHEA-2023:1683 . Important The Custom Metrics Autoscaler Operator version 2.8.2-174 is a Technology Preview feature. 3.1.2.10.1. New features and enhancements 3.1.2.10.1.1. Operator upgrade support You can now upgrade from a prior version of the Custom Metrics Autoscaler Operator. See "Changing the update channel for an Operator" in the "Additional resources" for information on upgrading an Operator. 3.1.2.10.1.2. must-gather support You can now collect data about the Custom Metrics Autoscaler Operator and its components by using the OpenShift Container Platform must-gather tool. Currently, the process for using the must-gather tool with the Custom Metrics Autoscaler is different than for other operators. See "Gathering debugging data in the "Additional resources" for more information. 3.1.2.11. Custom Metrics Autoscaler Operator 2.8.2 release notes This release of the Custom Metrics Autoscaler Operator 2.8.2 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.8.2 were released in RHSA-2023:1042 . Important The Custom Metrics Autoscaler Operator version 2.8.2 is a Technology Preview feature. 3.1.2.11.1. New features and enhancements 3.1.2.11.1.1. Audit Logging You can now gather and view audit logs for the Custom Metrics Autoscaler Operator and its associated components. Audit logs are security-relevant chronological sets of records that document the sequence of activities that have affected the system by individual users, administrators, or other components of the system. 3.1.2.11.1.2. Scale applications based on Apache Kafka metrics You can now use the KEDA Apache kafka trigger/scaler to scale deployments based on an Apache Kafka topic. 3.1.2.11.1.3. Scale applications based on CPU metrics You can now use the KEDA CPU trigger/scaler to scale deployments based on CPU metrics. 3.1.2.11.1.4. Scale applications based on memory metrics You can now use the KEDA memory trigger/scaler to scale deployments based on memory metrics. 3.2. Custom Metrics Autoscaler Operator overview As a developer, you can use Custom Metrics Autoscaler Operator for Red Hat OpenShift to specify how OpenShift Container Platform should automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job based on custom metrics that are not based only on CPU or memory. The Custom Metrics Autoscaler Operator is an optional Operator, based on the Kubernetes Event Driven Autoscaler (KEDA), that allows workloads to be scaled using additional metrics sources other than pod metrics. The custom metrics autoscaler currently supports only the Prometheus, CPU, memory, and Apache Kafka metrics. The Custom Metrics Autoscaler Operator scales your pods up and down based on custom, external metrics from specific applications. Your other applications continue to use other scaling methods. You configure triggers , also known as scalers, which are the source of events and metrics that the custom metrics autoscaler uses to determine how to scale. The custom metrics autoscaler uses a metrics API to convert the external metrics to a form that OpenShift Container Platform can use. The custom metrics autoscaler creates a horizontal pod autoscaler (HPA) that performs the actual scaling. To use the custom metrics autoscaler, you create a ScaledObject or ScaledJob object for a workload, which is a custom resource (CR) that defines the scaling metadata. You specify the deployment or job to scale, the source of the metrics to scale on (trigger), and other parameters such as the minimum and maximum replica counts allowed. Note You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload. The custom metrics autoscaler, unlike the HPA, can scale to zero. If you set the minReplicaCount value in the custom metrics autoscaler CR to 0 , the custom metrics autoscaler scales the workload down from 1 to 0 replicas to or up from 0 replicas to 1. This is known as the activation phase . After scaling up to 1 replica, the HPA takes control of the scaling. This is known as the scaling phase . Some triggers allow you to change the number of replicas that are scaled by the cluster metrics autoscaler. In all cases, the parameter to configure the activation phase always uses the same phrase, prefixed with activation . For example, if the threshold parameter configures scaling, activationThreshold would configure activation. Configuring the activation and scaling phases allows you more flexibility with your scaling policies. For example, you can configure a higher activation phase to prevent scaling up or down if the metric is particularly low. The activation value has more priority than the scaling value in case of different decisions for each. For example, if the threshold is set to 10 , and the activationThreshold is 50 , if the metric reports 40 , the scaler is not active and the pods are scaled to zero even if the HPA requires 4 instances. Figure 3.1. Custom metrics autoscaler workflow You create or modify a scaled object custom resource for a workload on a cluster. The object contains the scaling configuration for that workload. Prior to accepting the new object, the OpenShift API server sends it to the custom metrics autoscaler admission webhooks process to ensure that the object is valid. If validation succeeds, the API server persists the object. The custom metrics autoscaler controller watches for new or modified scaled objects. When the OpenShift API server notifies the controller of a change, the controller monitors any external trigger sources, also known as data sources, that are specified in the object for changes to the metrics data. One or more scalers request scaling data from the external trigger source. For example, for a Kafka trigger type, the controller uses the Kafka scaler to communicate with a Kafka instance to obtain the data requested by the trigger. The controller creates a horizontal pod autoscaler object for the scaled object. As a result, the Horizontal Pod Autoscaler (HPA) Operator starts monitoring the scaling data associated with the trigger. The HPA requests scaling data from the cluster OpenShift API server endpoint. The OpenShift API server endpoint is served by the custom metrics autoscaler metrics adapter. When the metrics adapter receives a request for custom metrics, it uses a GRPC connection to the controller to request it for the most recent trigger data received from the scaler. The HPA makes scaling decisions based upon the data received from the metrics adapter and scales the workload up or down by increasing or decreasing the replicas. As a it operates, a workload can affect the scaling metrics. For example, if a workload is scaled up to handle work in a Kafka queue, the queue size decreases after the workload processes all the work. As a result, the workload is scaled down. If the metrics are in a range specified by the minReplicaCount value, the custom metrics autoscaler controller disables all scaling, and leaves the replica count at a fixed level. If the metrics exceed that range, the custom metrics autoscaler controller enables scaling and allows the HPA to scale the workload. While scaling is disabled, the HPA does not take any action. 3.2.1. Custom CA certificates for the Custom Metrics Autoscaler By default, the Custom Metrics Autoscaler Operator uses automatically-generated service CA certificates to connect to on-cluster services. If you want to use off-cluster services that require custom CA certificates, you can add the required certificates to a config map. Then, add the config map to the KedaController custom resource as described in Installing the custom metrics autoscaler . The Operator loads those certificates on start-up and registers them as trusted by the Operator. The config maps can contain one or more certificate files that contain one or more PEM-encoded CA certificates. Or, you can use separate config maps for each certificate file. Note If you later update the config map to add additional certificates, you must restart the keda-operator-* pod for the changes to take effect. 3.3. Installing the custom metrics autoscaler You can use the OpenShift Container Platform web console to install the Custom Metrics Autoscaler Operator. The installation creates the following five CRDs: ClusterTriggerAuthentication KedaController ScaledJob ScaledObject TriggerAuthentication 3.3.1. Installing the custom metrics autoscaler You can use the following procedure to install the Custom Metrics Autoscaler Operator. Prerequisites Remove any previously-installed Technology Preview versions of the Cluster Metrics Autoscaler Operator. Remove any versions of the community-based KEDA. Also, remove the KEDA 1.x custom resource definitions by running the following commands: USD oc delete crd scaledobjects.keda.k8s.io USD oc delete crd triggerauthentications.keda.k8s.io Optional: If you need the Custom Metrics Autoscaler Operator to connect to off-cluster services, such as an external Kafka cluster or an external Prometheus service, put any required service CA certificates into a config map. The config map must exist in the same namespace where the Operator is installed. For example: USD oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Custom Metrics Autoscaler from the list of available Operators, and click Install . On the Install Operator page, ensure that the All namespaces on the cluster (default) option is selected for Installation Mode . This installs the Operator in all namespaces. Ensure that the openshift-keda namespace is selected for Installed Namespace . OpenShift Container Platform creates the namespace, if not present in your cluster. Click Install . Verify the installation by listing the Custom Metrics Autoscaler Operator components: Navigate to Workloads Pods . Select the openshift-keda project from the drop-down menu and verify that the custom-metrics-autoscaler-operator-* pod is running. Navigate to Workloads Deployments to verify that the custom-metrics-autoscaler-operator deployment is running. Optional: Verify the installation in the OpenShift CLI using the following commands: USD oc get all -n openshift-keda The output appears similar to the following: Example output NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m Install the KedaController custom resource, which creates the required CRDs: In the OpenShift Container Platform web console, click Operators Installed Operators . Click Custom Metrics Autoscaler . On the Operator Details page, click the KedaController tab. On the KedaController tab, click Create KedaController and edit the file. kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: "json" logOutputVolumeClaim: "persistentVolumeClaimName" policy: rules: - level: Metadata omitStages: ["RequestReceived"] omitManagedFields: false lifetime: maxAge: "2" maxBackup: "1" maxSize: "50" serviceAccount: {} 1 Specifies a single namespace in which the Custom Metrics Autoscaler Operator should scale applications. Leave it blank or leave it empty to scale applications in all namespaces. This field should have a namespace or be empty. The default value is empty. 2 Specifies the level of verbosity for the Custom Metrics Autoscaler Operator log messages. The allowed values are debug , info , error . The default is info . 3 Specifies the logging format for the Custom Metrics Autoscaler Operator log messages. The allowed values are console or json . The default is console . 4 Optional: Specifies one or more config maps with CA certificates, which the Custom Metrics Autoscaler Operator can use to connect securely to TLS-enabled metrics sources. 5 Specifies the logging level for the Custom Metrics Autoscaler Metrics Server. The allowed values are 0 for info and 4 for debug . The default is 0 . 6 Activates audit logging for the Custom Metrics Autoscaler Operator and specifies the audit policy to use, as described in the "Configuring audit logging" section. Click Create to create the KEDA controller. 3.4. Understanding custom metrics autoscaler triggers Triggers, also known as scalers, provide the metrics that the Custom Metrics Autoscaler Operator uses to scale your pods. The custom metrics autoscaler currently supports the Prometheus, CPU, memory, Apache Kafka, and cron triggers. You use a ScaledObject or ScaledJob custom resource to configure triggers for specific objects, as described in the sections that follow. You can configure a certificate authority to use with your scaled objects or for all scalers in the cluster . 3.4.1. Understanding the Prometheus trigger You can scale pods based on Prometheus metrics, which can use the installed OpenShift Container Platform monitoring or an external Prometheus server as the metrics source. See "Configuring the custom metrics autoscaler to use OpenShift Container Platform monitoring" for information on the configurations required to use the OpenShift Container Platform monitoring as a source for metrics. Note If Prometheus is collecting metrics from the application that the custom metrics autoscaler is scaling, do not set the minimum replicas to 0 in the custom resource. If there are no application pods, the custom metrics autoscaler does not have any metrics to scale on. Example scaled object with a Prometheus target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: # ... triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job="test-app"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: "false" 9 unsafeSsl: "false" 10 1 Specifies Prometheus as the trigger type. 2 Specifies the address of the Prometheus server. This example uses OpenShift Container Platform monitoring. 3 Optional: Specifies the namespace of the object you want to scale. This parameter is mandatory if using OpenShift Container Platform monitoring as a source for the metrics. 4 Specifies the name to identify the metric in the external.metrics.k8s.io API. If you are using more than one trigger, all metric names must be unique. 5 Specifies the value that triggers scaling. Must be specified as a quoted string value. 6 Specifies the Prometheus query to use. 7 Specifies the authentication method to use. Prometheus scalers support bearer authentication ( bearer ), basic authentication ( basic ), or TLS authentication ( tls ). You configure the specific authentication parameters in a trigger authentication, as discussed in a following section. As needed, you can also use a secret. 8 Optional: Passes the X-Scope-OrgID header to multi-tenant Cortex or Mimir storage for Prometheus. This parameter is required only with multi-tenant Prometheus storage, to indicate which data Prometheus should return. 9 Optional: Specifies how the trigger should proceed if the Prometheus target is lost. If true , the trigger continues to operate if the Prometheus target is lost. This is the default behavior. If false , the trigger returns an error if the Prometheus target is lost. 10 Optional: Specifies whether the certificate check should be skipped. For example, you might skip the check if you are running in a test environment and using self-signed certificates at the Prometheus endpoint. If false , the certificate check is performed. This is the default behavior. If true , the certificate check is not performed. Important Skipping the check is not recommended. 3.4.1.1. Configuring the custom metrics autoscaler to use OpenShift Container Platform monitoring You can use the installed OpenShift Container Platform Prometheus monitoring as a source for the metrics used by the custom metrics autoscaler. However, there are some additional configurations you must perform. For your scaled objects to be able to read the OpenShift Container Platform Prometheus metrics, you must use a trigger authentication or a cluster trigger authentication in order to provide the authentication information required. The following procedure differs depending on which trigger authentication method you use. For more information on trigger authentications, see "Understanding custom metrics autoscaler trigger authentications". Note These steps are not required for an external Prometheus source. You must perform the following tasks, as described in this section: Create a service account. Create a secret that generates a token for the service account. Create the trigger authentication. Create a role. Add that role to the service account. Reference the token in the trigger authentication object used by Prometheus. Prerequisites OpenShift Container Platform monitoring must be installed. Monitoring of user-defined workloads must be enabled in OpenShift Container Platform monitoring, as described in the Creating a user-defined workload monitoring config map section. The Custom Metrics Autoscaler Operator must be installed. Procedure Change to the appropriate project: USD oc project <project_name> 1 1 Specifies one of the following projects: If you are using a trigger authentication, specify the project with the object you want to scale. If you are using a cluster trigger authentication, specify the openshift-keda project. Create a service account and token, if your cluster does not have one: Create a service account object by using the following command: USD oc create serviceaccount thanos 1 1 Specifies the name of the service account. Optional: Create a secret YAML to generate a service account token: Important If you disable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, the image pull secret is not generated for each service account. In this situation, you must perform this step. apiVersion: v1 kind: Secret metadata: name: thanos-token annotations: kubernetes.io/service-account.name: thanos 1 type: kubernetes.io/service-account-token 1 Specifies the name of the service account. Create the secret object by using the following command: USD oc create -f <file_name>.yaml Use the following command to locate the token assigned to the service account: USD oc describe serviceaccount thanos 1 1 Specifies the name of the service account. Example output Name: thanos Namespace: <namespace_name> Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token 1 Events: <none> 1 Use this token in the trigger authentication. Create a trigger authentication with the service account token: Create a YAML file similar to the following: apiVersion: keda.sh/v1alpha1 kind: <authentication_method> 1 metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 2 - parameter: bearerToken 3 name: thanos-token 4 key: token 5 - parameter: ca name: thanos-token key: ca.crt 1 Specifies one of the following trigger authentication methods: If you are using a trigger authentication, specify TriggerAuthentication . This example configures a trigger authentication. If you are using a cluster trigger authentication, specify ClusterTriggerAuthentication . 2 Specifies that this object uses a secret for authorization. 3 Specifies the authentication parameter to supply by using the token. 4 Specifies the name of the token to use. 5 Specifies the key in the token to use with the specified parameter. Create the CR object: USD oc create -f <file-name>.yaml Create a role for reading Thanos metrics: Create a YAML file with the following parameters: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - "" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch Create the CR object: USD oc create -f <file-name>.yaml Create a role binding for reading Thanos metrics: Create a YAML file similar to the following: apiVersion: rbac.authorization.k8s.io/v1 kind: <binding_type> 1 metadata: name: thanos-metrics-reader 2 namespace: my-project 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 4 namespace: <namespace_name> 5 1 Specifies one of the following object types: If you are using a trigger authentication, specify RoleBinding . If you are using a cluster trigger authentication, specify ClusterRoleBinding . 2 Specifies the name of the role you created. 3 Specifies one of the following projects: If you are using a trigger authentication, specify the project with the object you want to scale. If you are using a cluster trigger authentication, specify the openshift-keda project. 4 Specifies the name of the service account to bind to the role. 5 Specifies the project where you previously created the service account. Create the CR object: USD oc create -f <file-name>.yaml You can now deploy a scaled object or scaled job to enable autoscaling for your application, as described in "Understanding how to add custom metrics autoscalers". To use OpenShift Container Platform monitoring as the source, in the trigger, or scaler, you must include the following parameters: triggers.type must be prometheus triggers.metadata.serverAddress must be https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 triggers.metadata.authModes must be bearer triggers.metadata.namespace must be set to the namespace of the object to scale triggers.authenticationRef must point to the trigger authentication resource specified in the step Additional resources Understanding custom metrics autoscaler trigger authentications 3.4.2. Understanding the CPU trigger You can scale pods based on CPU metrics. This trigger uses cluster metrics as the source for metrics. The custom metrics autoscaler scales the pods associated with an object to maintain the CPU usage that you specify. The autoscaler increases or decreases the number of replicas between the minimum and maximum numbers to maintain the specified CPU utilization across all pods. The memory trigger considers the memory utilization of the entire pod. If the pod has multiple containers, the memory trigger considers the total memory utilization of all containers in the pod. Note This trigger cannot be used with the ScaledJob custom resource. When using a memory trigger to scale an object, the object does not scale to 0 , even if you are using multiple triggers. Example scaled object with a CPU target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: # ... triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4 1 Specifies CPU as the trigger type. 2 Specifies the type of metric to use, either Utilization or AverageValue . 3 Specifies the value that triggers scaling. Must be specified as a quoted string value. When using Utilization , the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods. When using AverageValue , the target value is the average of the metrics across all relevant pods. 4 Specifies the minimum number of replicas when scaling down. For a CPU trigger, enter a value of 1 or greater, because the HPA cannot scale to zero if you are using only CPU metrics. 3.4.3. Understanding the memory trigger You can scale pods based on memory metrics. This trigger uses cluster metrics as the source for metrics. The custom metrics autoscaler scales the pods associated with an object to maintain the average memory usage that you specify. The autoscaler increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified memory utilization across all pods. The memory trigger considers the memory utilization of entire pod. If the pod has multiple containers, the memory utilization is the sum of all of the containers. Note This trigger cannot be used with the ScaledJob custom resource. When using a memory trigger to scale an object, the object does not scale to 0 , even if you are using multiple triggers. Example scaled object with a memory target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: # ... triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4 1 Specifies memory as the trigger type. 2 Specifies the type of metric to use, either Utilization or AverageValue . 3 Specifies the value that triggers scaling. Must be specified as a quoted string value. When using Utilization , the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods. When using AverageValue , the target value is the average of the metrics across all relevant pods. 4 Optional: Specifies an individual container to scale, based on the memory utilization of only that container, rather than the entire pod. In this example, only the container named api is to be scaled. 3.4.4. Understanding the Kafka trigger You can scale pods based on an Apache Kafka topic or other services that support the Kafka protocol. The custom metrics autoscaler does not scale higher than the number of Kafka partitions, unless you set the allowIdleConsumers parameter to true in the scaled object or scaled job. Note If the number of consumer groups exceeds the number of partitions in a topic, the extra consumer groups remain idle. To avoid this, by default the number of replicas does not exceed: The number of partitions on a topic, if a topic is specified The number of partitions of all topics in the consumer group, if no topic is specified The maxReplicaCount specified in scaled object or scaled job CR You can use the allowIdleConsumers parameter to disable these default behaviors. Example scaled object with a Kafka target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: # ... triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13 1 Specifies Kafka as the trigger type. 2 Specifies the name of the Kafka topic on which Kafka is processing the offset lag. 3 Specifies a comma-separated list of Kafka brokers to connect to. 4 Specifies the name of the Kafka consumer group used for checking the offset on the topic and processing the related lag. 5 Optional: Specifies the average target value that triggers scaling. Must be specified as a quoted string value. The default is 5 . 6 Optional: Specifies the target value for the activation phase. Must be specified as a quoted string value. 7 Optional: Specifies the Kafka offset reset policy for the Kafka consumer. The available values are: latest and earliest . The default is latest . 8 Optional: Specifies whether the number of Kafka replicas can exceed the number of partitions on a topic. If true , the number of Kafka replicas can exceed the number of partitions on a topic. This allows for idle Kafka consumers. If false , the number of Kafka replicas cannot exceed the number of partitions on a topic. This is the default. 9 Specifies how the trigger behaves when a Kafka partition does not have a valid offset. If true , the consumers are scaled to zero for that partition. If false , the scaler keeps a single consumer for that partition. This is the default. 10 Optional: Specifies whether the trigger includes or excludes partition lag for partitions whose current offset is the same as the current offset of the polling cycle. If true , the scaler excludes partition lag in these partitions. If false , the trigger includes all consumer lag in all partitions. This is the default. 11 Optional: Specifies the version of your Kafka brokers. Must be specified as a quoted string value. The default is 1.0.0 . 12 Optional: Specifies a comma-separated list of partition IDs to scope the scaling on. If set, only the listed IDs are considered when calculating lag. Must be specified as a quoted string value. The default is to consider all partitions. 13 Optional: Specifies whether to use TSL client authentication for Kafka. The default is disable . For information on configuring TLS, see "Understanding custom metrics autoscaler trigger authentications". 3.4.5. Understanding the Cron trigger You can scale pods based on a time range. When the time range starts, the custom metrics autoscaler scales the pods associated with an object from the configured minimum number of pods to the specified number of desired pods. At the end of the time range, the pods are scaled back to the configured minimum. The time period must be configured in cron format . The following example scales the pods associated with this scaled object from 0 to 100 from 6:00 AM to 6:30 PM India Standard Time. Example scaled object with a Cron trigger apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: "0 6 * * *" 5 end: "30 18 * * *" 6 desiredReplicas: "100" 7 1 Specifies the minimum number of pods to scale down to at the end of the time frame. 2 Specifies the maximum number of replicas when scaling up. This value should be the same as desiredReplicas . The default is 100 . 3 Specifies a Cron trigger. 4 Specifies the timezone for the time frame. This value must be from the IANA Time Zone Database . 5 Specifies the start of the time frame. 6 Specifies the end of the time frame. 7 Specifies the number of pods to scale to between the start and end of the time frame. This value should be the same as maxReplicaCount . 3.5. Understanding custom metrics autoscaler trigger authentications A trigger authentication allows you to include authentication information in a scaled object or a scaled job that can be used by the associated containers. You can use trigger authentications to pass OpenShift Container Platform secrets, platform-native pod authentication mechanisms, environment variables, and so on. You define a TriggerAuthentication object in the same namespace as the object that you want to scale. That trigger authentication can be used only by objects in that namespace. Alternatively, to share credentials between objects in multiple namespaces, you can create a ClusterTriggerAuthentication object that can be used across all namespaces. Trigger authentications and cluster trigger authentication use the same configuration. However, a cluster trigger authentication requires an additional kind parameter in the authentication reference of the scaled object. Example secret for Basic authentication apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: "dXNlcm5hbWU=" 1 password: "cGFzc3dvcmQ=" 1 User name and password to supply to the trigger authentication. The values in a data stanza must be base-64 encoded. Example trigger authentication using a secret for Basic authentication kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the authentication parameter to supply by using the secret. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. Example cluster trigger authentication with a secret for Basic authentication kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password 1 Note that no namespace is used with a cluster trigger authentication. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the authentication parameter to supply by using the secret. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. Example secret with certificate authority (CA) details apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t... 1 Specifies the TLS CA Certificate for authentication of the metrics endpoint. The value must be base-64 encoded. 2 Specifies the TLS certificates and key for TLS client authentication. The values must be base-64 encoded. Example trigger authentication using a secret for CA details kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the type of authentication to use. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. 6 Specifies the authentication parameter for a custom CA when connecting to the metrics endpoint. 7 Specifies the name of the secret to use. 8 Specifies the key in the secret to use with the specified parameter. Example secret with a bearer token apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV" 1 1 Specifies a bearer token to use with bearer authentication. The value in a data stanza must be base-64 encoded. Example trigger authentication with a bearer token kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the type of authentication to use. 4 Specifies the name of the secret to use. 5 Specifies the key in the token to use with the specified parameter. Example trigger authentication with an environment variable kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses environment variables for authorization when connecting to the metrics endpoint. 3 Specify the parameter to set with this variable. 4 Specify the name of the environment variable. 5 Optional: Specify a container that requires authentication. The container must be in the same resource as referenced by scaleTargetRef in the scaled object. Example trigger authentication with pod authentication providers kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a platform-native pod authentication when connecting to the metrics endpoint. 3 Specifies a pod identity. Supported values are none , azure , gcp , aws-eks , or aws-kiam . The default is none . Additional resources For information about OpenShift Container Platform secrets, see Providing sensitive data to pods . 3.5.1. Using trigger authentications You use trigger authentications and cluster trigger authentications by using a custom resource to create the authentication, then add a reference to a scaled object or scaled job. Prerequisites The Custom Metrics Autoscaler Operator must be installed. If you are using a secret, the Secret object must exist, for example: Example secret apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD> Procedure Create the TriggerAuthentication or ClusterTriggerAuthentication object. Create a YAML file that defines the object: Example trigger authentication with a secret kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD Create the TriggerAuthentication object: USD oc create -f <filename>.yaml Create or edit a ScaledObject YAML file that uses the trigger authentication: Create a YAML file that defines the object by running the following command: Example scaled object with a trigger authentication apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "basic" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2 1 Specify the name of your trigger authentication object. 2 Specify TriggerAuthentication . TriggerAuthentication is the default. Example scaled object with a cluster trigger authentication apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "basic" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2 1 Specify the name of your trigger authentication object. 2 Specify ClusterTriggerAuthentication . Create the scaled object by running the following command: USD oc apply -f <filename> 3.6. Pausing the custom metrics autoscaler for a scaled object You can pause and restart the autoscaling of a workload, as needed. For example, you might want to pause autoscaling before performing cluster maintenance or to avoid resource starvation by removing non-mission-critical workloads. 3.6.1. Pausing a custom metrics autoscaler You can pause the autoscaling of a scaled object by adding the autoscaling.keda.sh/paused-replicas annotation to the custom metrics autoscaler for that scaled object. The custom metrics autoscaler scales the replicas for that workload to the specified value and pauses autoscaling until the annotation is removed. apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" # ... Procedure Use the following command to edit the ScaledObject CR for your workload: USD oc edit ScaledObject scaledobject Add the autoscaling.keda.sh/paused-replicas annotation with any value: apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" 1 creationTimestamp: "2023-02-08T14:41:01Z" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0 1 Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling. 3.6.2. Restarting the custom metrics autoscaler for a scaled object You can restart a paused custom metrics autoscaler by removing the autoscaling.keda.sh/paused-replicas annotation for that ScaledObject . apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" # ... Procedure Use the following command to edit the ScaledObject CR for your workload: USD oc edit ScaledObject scaledobject Remove the autoscaling.keda.sh/paused-replicas annotation. apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" 1 creationTimestamp: "2023-02-08T14:41:01Z" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0 1 Remove this annotation to restart a paused custom metrics autoscaler. 3.7. Gathering audit logs You can gather audit logs, which are a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. For example, audit logs can help you understand where an autoscaling request is coming from. This is key information when backends are getting overloaded by autoscaling requests made by user applications and you need to determine which is the troublesome application. 3.7.1. Configuring audit logging You can configure auditing for the Custom Metrics Autoscaler Operator by editing the KedaController custom resource. The logs are sent to an audit log file on a volume that is secured by using a persistent volume claim in the KedaController CR. Prerequisites The Custom Metrics Autoscaler Operator must be installed. Procedure Edit the KedaController custom resource to add the auditConfig stanza: kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: # ... metricsServer: # ... auditConfig: logFormat: "json" 1 logOutputVolumeClaim: "pvc-audit-log" 2 policy: rules: 3 - level: Metadata omitStages: "RequestReceived" 4 omitManagedFields: false 5 lifetime: 6 maxAge: "2" maxBackup: "1" maxSize: "50" 1 Specifies the output format of the audit log, either legacy or json . 2 Specifies an existing persistent volume claim for storing the log data. All requests coming to the API server are logged to this persistent volume claim. If you leave this field empty, the log data is sent to stdout. 3 Specifies which events should be recorded and what data they should include: None : Do not log events. Metadata : Log only the metadata for the request, such as user, timestamp, and so forth. Do not log the request text and the response text. This is the default. Request : Log only the metadata and the request text but not the response text. This option does not apply for non-resource requests. RequestResponse : Log event metadata, request text, and response text. This option does not apply for non-resource requests. 4 Specifies stages for which no event is created. 5 Specifies whether to omit the managed fields of the request and response bodies from being written to the API audit log, either true to omit the fields or false to include the fields. 6 Specifies the size and lifespan of the audit logs. maxAge : The maximum number of days to retain audit log files, based on the timestamp encoded in their filename. maxBackup : The maximum number of audit log files to retain. Set to 0 to retain all audit log files. maxSize : The maximum size in megabytes of an audit log file before it gets rotated. Verification View the audit log file directly: Obtain the name of the keda-metrics-apiserver-* pod: oc get pod -n openshift-keda Example output NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s View the log data by using a command similar to the following: USD oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1 1 Optional: You can use the grep command to specify the log level to display: Metadata , Request , RequestResponse . For example: USD oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata Example output ... {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"4c81d41b-3dab-4675-90ce-20b87ce24013","stage":"ResponseComplete","requestURI":"/healthz","verb":"get","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["10.131.0.1"],"userAgent":"kube-probe/1.27","responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2023-02-16T13:00:03.554567Z","stageTimestamp":"2023-02-16T13:00:03.555032Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}} ... Alternatively, you can view a specific log: Use a command similar to the following to log into the keda-metrics-apiserver-* pod: USD oc rsh pod/keda-metrics-apiserver-<hash> -n openshift-keda For example: USD oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n openshift-keda Change to the /var/audit-policy/ directory: sh-4.4USD cd /var/audit-policy/ List the available logs: sh-4.4USD ls Example output log-2023.02.17-14:50 policy.yaml View the log, as needed: sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1 1 Optional: You can use the grep command to specify the log level to display: Metadata , Request , RequestResponse . For example: sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request Example output 3.8. Gathering debugging data When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. To help troubleshoot your issue, provide the following information: Data gathered using the must-gather tool. The unique cluster ID. You can use the must-gather tool to collect data about the Custom Metrics Autoscaler Operator and its components, including the following items: The openshift-keda namespace and its child objects. The Custom Metric Autoscaler Operator installation objects. The Custom Metric Autoscaler Operator CRD objects. 3.8.1. Gathering debugging data The following command runs the must-gather tool for the Custom Metrics Autoscaler Operator: USD oc adm must-gather --image="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" Note The standard OpenShift Container Platform must-gather command, oc adm must-gather , does not collect Custom Metrics Autoscaler Operator data. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. Procedure Navigate to the directory where you want to store the must-gather data. Note If your cluster is using a restricted network, you must take additional steps. If your mirror registry has a trusted CA, you must first add the trusted CA to the cluster. For all clusters on restricted networks, you must import the default must-gather image as an image stream by running the following command. USD oc import-image is/must-gather -n openshift Perform one of the following: To get only the Custom Metrics Autoscaler Operator must-gather data, use the following command: USD oc adm must-gather --image="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" The custom image for the must-gather command is pulled directly from the Operator package manifests, so that it works on any cluster where the Custom Metric Autoscaler Operator is available. To gather the default must-gather data in addition to the Custom Metric Autoscaler Operator information: Use the following command to obtain the Custom Metrics Autoscaler Operator image and set it as an environment variable: USD IMAGE="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" Use the oc adm must-gather with the Custom Metrics Autoscaler Operator image: USD oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE} Example 3.1. Example must-gather output for the Custom Metric Autoscaler: └── openshift-keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── openshift-keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── .insecure.log │ │ └── .log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml Create a compressed file from the must-gather directory that was created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the Red Hat Customer Portal . 3.9. Viewing Operator metrics The Custom Metrics Autoscaler Operator exposes ready-to-use metrics that it pulls from the on-cluster monitoring component. You can query the metrics by using the Prometheus Query Language (PromQL) to analyze and diagnose issues. All metrics are reset when the controller pod restarts. 3.9.1. Accessing performance metrics You can access the metrics and run queries by using the OpenShift Container Platform web console. Procedure Select the Administrator perspective in the OpenShift Container Platform web console. Select Observe Metrics . To create a custom query, add your PromQL query to the Expression field. To add multiple queries, select Add Query . 3.9.1.1. Provided Operator metrics The Custom Metrics Autoscaler Operator exposes the following metrics, which you can view by using the OpenShift Container Platform web console. Table 3.1. Custom Metric Autoscaler Operator metrics Metric name Description keda_scaler_activity Whether the particular scaler is active or inactive. A value of 1 indicates the scaler is active; a value of 0 indicates the scaler is inactive. keda_scaler_metrics_value The current value for each scaler's metric, which is used by the Horizontal Pod Autoscaler (HPA) in computing the target average. keda_scaler_metrics_latency The latency of retrieving the current metric from each scaler. keda_scaler_errors The number of errors that have occurred for each scaler. keda_scaler_errors_total The total number of errors encountered for all scalers. keda_scaled_object_errors The number of errors that have occurred for each scaled obejct. keda_resource_totals The total number of Custom Metrics Autoscaler custom resources in each namespace for each custom resource type. keda_trigger_totals The total number of triggers by trigger type. Custom Metrics Autoscaler Admission webhook metrics The Custom Metrics Autoscaler Admission webhook also exposes the following Prometheus metrics. Metric name Description keda_scaled_object_validation_total The number of scaled object validations. keda_scaled_object_validation_errors The number of validation errors. 3.10. Understanding how to add custom metrics autoscalers To add a custom metrics autoscaler, create a ScaledObject custom resource for a deployment, stateful set, or custom resource. Create a ScaledJob custom resource for a job. You can create only one scaled object for each workload that you want to scale. Also, you cannot use a scaled object and the horizontal pod autoscaler (HPA) on the same workload. 3.10.1. Adding a custom metrics autoscaler to a workload You can create a custom metrics autoscaler for a workload that is created by a Deployment , StatefulSet , or custom resource object. Prerequisites The Custom Metrics Autoscaler Operator must be installed. If you use a custom metrics autoscaler for scaling based on CPU or memory: Your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with CPU and Memory displayed under Usage. USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> The pods associated with the object you want to scale must include specified memory and CPU limits. For example: Example pod spec apiVersion: v1 kind: Pod # ... spec: containers: - name: app image: images.my-company.example/app:v4 resources: limits: memory: "128Mi" cpu: "500m" # ... Procedure Create a YAML file similar to the following. Only the name <2> , object name <4> , and object kind <5> are required: Example scaled object apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "0" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: "json" logOutputVolumeClaim: "persistentVolumeClaimName" policy: rules: - level: Metadata omitStages: "RequestReceived" omitManagedFields: false lifetime: maxAge: "2" maxBackup: "1" maxSize: "50" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication 1 Optional: Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling, as described in the "Pausing the custom metrics autoscaler for a workload" section. 2 Specifies a name for this custom metrics autoscaler. 3 Optional: Specifies the API version of the target resource. The default is apps/v1 . 4 Specifies the name of the object that you want to scale. 5 Specifies the kind as Deployment , StatefulSet or CustomResource . 6 Optional: Specifies the name of the container in the target resource, from which the custom metrics autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0] . 7 Optional. Specifies the period in seconds to wait after the last trigger is reported before scaling the deployment back to 0 if the minReplicaCount is set to 0 . The default is 300 . 8 Optional: Specifies the maximum number of replicas when scaling up. The default is 100 . 9 Optional: Specifies the minimum number of replicas when scaling down. 10 Optional: Specifies the parameters for audit logs. as described in the "Configuring audit logging" section. 11 Optional: Specifies the number of replicas to fall back to if a scaler fails to get metrics from the source for the number of times defined by the failureThreshold parameter. For more information on fallback behavior, see the KEDA documentation . 12 Optional: Specifies the interval in seconds to check each trigger on. The default is 30 . 13 Optional: Specifies whether to scale back the target resource to the original replica count after the scaled object is deleted. The default is false , which keeps the replica count as it is when the scaled object is deleted. 14 Optional: Specifies a name for the horizontal pod autoscaler. The default is keda-hpa-{scaled-object-name} . 15 Optional: Specifies a scaling policy to use to control the rate to scale pods up or down, as described in the "Scaling policies" section. 16 Specifies the trigger to use as the basis for scaling, as described in the "Understanding the custom metrics autoscaler triggers" section. This example uses OpenShift Container Platform monitoring. 17 Optional: Specifies a trigger authentication or a cluster trigger authentication. For more information, see Understanding the custom metrics autoscaler trigger authentication in the Additional resources section. Enter TriggerAuthentication to use a trigger authentication. This is the default. Enter ClusterTriggerAuthentication to use a cluster trigger authentication. Create the custom metrics autoscaler by running the following command: USD oc create -f <filename>.yaml Verification View the command output to verify that the custom metrics autoscaler was created: USD oc get scaledobject <scaled_object_name> Example output NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s Note the following fields in the output: TRIGGERS : Indicates the trigger, or scaler, that is being used. AUTHENTICATION : Indicates the name of any trigger authentication being used. READY : Indicates whether the scaled object is ready to start scaling: If True , the scaled object is ready. If False , the scaled object is not ready because of a problem in one or more of the objects you created. ACTIVE : Indicates whether scaling is taking place: If True , scaling is taking place. If False , scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created. FALLBACK : Indicates whether the custom metrics autoscaler is able to get metrics from the source If False , the custom metrics autoscaler is getting metrics. If True , the custom metrics autoscaler is getting metrics because there are no metrics or there is a problem in one or more of the objects you created. 3.10.2. Adding a custom metrics autoscaler to a job You can create a custom metrics autoscaler for any Job object. Important Scaling by using a scaled job is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites The Custom Metrics Autoscaler Operator must be installed. Procedure Create a YAML file similar to the following: kind: ScaledJob apiVersion: keda.sh/v1alpha1 metadata: name: scaledjob namespace: my-namespace spec: failedJobsHistoryLimit: 5 jobTargetRef: activeDeadlineSeconds: 600 1 backoffLimit: 6 2 parallelism: 1 3 completions: 1 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] maxReplicaCount: 100 6 pollingInterval: 30 7 successfulJobsHistoryLimit: 5 8 failedJobsHistoryLimit: 5 9 envSourceContainerName: 10 rolloutStrategy: gradual 11 scalingStrategy: 12 strategy: "custom" customScalingQueueLengthDeduction: 1 customScalingRunningJobPercentage: "0.5" pendingPodConditions: - "Ready" - "PodScheduled" - "AnyOtherCustomPodCondition" multipleScalersCalculation : "max" triggers: - type: prometheus 13 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "bearer" authenticationRef: 14 name: prom-cluster-triggerauthentication 1 Specifies the maximum duration the job can run. 2 Specifies the number of retries for a job. The default is 6 . 3 Optional: Specifies how many pod replicas a job should run in parallel; defaults to 1 . For non-parallel jobs, leave unset. When unset, the default is 1 . 4 Optional: Specifies how many successful pod completions are needed to mark a job completed. For non-parallel jobs, leave unset. When unset, the default is 1 . For parallel jobs with a fixed completion count, specify the number of completions. For parallel jobs with a work queue, leave unset. When unset the default is the value of the parallelism parameter. 5 Specifies the template for the pod the controller creates. 6 Optional: Specifies the maximum number of replicas when scaling up. The default is 100 . 7 Optional: Specifies the interval in seconds to check each trigger on. The default is 30 . 8 Optional: Specifies the number of successful finished jobs should be kept. The default is 100 . 9 Optional: Specifies how many failed jobs should be kept. The default is 100 . 10 Optional: Specifies the name of the container in the target resource, from which the custom autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0] . 11 Optional: Specifies whether existing jobs are terminated whenever a scaled job is being updated: default : The autoscaler terminates an existing job if its associated scaled job is updated. The autoscaler recreates the job with the latest specs. gradual : The autoscaler does not terminate an existing job if its associated scaled job is updated. The autoscaler creates new jobs with the latest specs. 12 Optional: Specifies a scaling strategy: default , custom , or accurate . The default is default . For more information, see the link in the "Additional resources" section that follows. 13 Specifies the trigger to use as the basis for scaling, as described in the "Understanding the custom metrics autoscaler triggers" section. 14 Optional: Specifies a trigger authentication or a cluster trigger authentication. For more information, see Understanding the custom metrics autoscaler trigger authentication in the Additional resources section. Enter TriggerAuthentication to use a trigger authentication. This is the default. Enter ClusterTriggerAuthentication to use a cluster trigger authentication. Create the custom metrics autoscaler by running the following command: USD oc create -f <filename>.yaml Verification View the command output to verify that the custom metrics autoscaler was created: USD oc get scaledjob <scaled_job_name> Example output NAME MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE scaledjob 100 prometheus prom-triggerauthentication True True 8s Note the following fields in the output: TRIGGERS : Indicates the trigger, or scaler, that is being used. AUTHENTICATION : Indicates the name of any trigger authentication being used. READY : Indicates whether the scaled object is ready to start scaling: If True , the scaled object is ready. If False , the scaled object is not ready because of a problem in one or more of the objects you created. ACTIVE : Indicates whether scaling is taking place: If True , scaling is taking place. If False , scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created. 3.10.3. Additional resources Understanding custom metrics autoscaler trigger authentications 3.11. Removing the Custom Metrics Autoscaler Operator You can remove the custom metrics autoscaler from your OpenShift Container Platform cluster. After removing the Custom Metrics Autoscaler Operator, remove other components associated with the Operator to avoid potential issues. Note Delete the KedaController custom resource (CR) first. If you do not delete the KedaController CR, OpenShift Container Platform can hang when you delete the openshift-keda project. If you delete the Custom Metrics Autoscaler Operator before deleting the CR, you are not able to delete the CR. 3.11.1. Uninstalling the Custom Metrics Autoscaler Operator Use the following procedure to remove the custom metrics autoscaler from your OpenShift Container Platform cluster. Prerequisites The Custom Metrics Autoscaler Operator must be installed. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Switch to the openshift-keda project. Remove the KedaController custom resource. Find the CustomMetricsAutoscaler Operator and click the KedaController tab. Find the custom resource, and then click Delete KedaController . Click Uninstall . Remove the Custom Metrics Autoscaler Operator: Click Operators Installed Operators . Find the CustomMetricsAutoscaler Operator and click the Options menu and select Uninstall Operator . Click Uninstall . Optional: Use the OpenShift CLI to remove the custom metrics autoscaler components: Delete the custom metrics autoscaler CRDs: clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh USD oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh Deleting the CRDs removes the associated roles, cluster roles, and role bindings. However, there might be a few cluster roles that must be manually deleted. List any custom metrics autoscaler cluster roles: USD oc get clusterrole | grep keda.sh Delete the listed custom metrics autoscaler cluster roles. For example: USD oc delete clusterrole.keda.sh-v1alpha1-admin List any custom metrics autoscaler cluster role bindings: USD oc get clusterrolebinding | grep keda.sh Delete the listed custom metrics autoscaler cluster role bindings. For example: USD oc delete clusterrolebinding.keda.sh-v1alpha1-admin Delete the custom metrics autoscaler project: USD oc delete project openshift-keda Delete the Cluster Metric Autoscaler Operator: USD oc delete operator/openshift-custom-metrics-autoscaler-operator.openshift-keda | [
"oc delete crd scaledobjects.keda.k8s.io",
"oc delete crd triggerauthentications.keda.k8s.io",
"oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem",
"oc get all -n openshift-keda",
"NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m",
"kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: [\"RequestReceived\"] omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" serviceAccount: {}",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: \"false\" 9 unsafeSsl: \"false\" 10",
"oc project <project_name> 1",
"oc create serviceaccount thanos 1",
"apiVersion: v1 kind: Secret metadata: name: thanos-token annotations: kubernetes.io/service-account.name: thanos 1 type: kubernetes.io/service-account-token",
"oc create -f <file_name>.yaml",
"oc describe serviceaccount thanos 1",
"Name: thanos Namespace: <namespace_name> Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token 1 Events: <none>",
"apiVersion: keda.sh/v1alpha1 kind: <authentication_method> 1 metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 2 - parameter: bearerToken 3 name: thanos-token 4 key: token 5 - parameter: ca name: thanos-token key: ca.crt",
"oc create -f <file-name>.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - \"\" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch",
"oc create -f <file-name>.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: <binding_type> 1 metadata: name: thanos-metrics-reader 2 namespace: my-project 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 4 namespace: <namespace_name> 5",
"oc create -f <file-name>.yaml",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: \"0 6 * * *\" 5 end: \"30 18 * * *\" 6 desiredReplicas: \"100\" 7",
"apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: \"dXNlcm5hbWU=\" 1 password: \"cGFzc3dvcmQ=\"",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password",
"kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: \"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV\" 1",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3",
"apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD>",
"kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD",
"oc create -f <filename>.yaml",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2",
"oc apply -f <filename>",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"",
"oc edit ScaledObject scaledobject",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"",
"oc edit ScaledObject scaledobject",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0",
"kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: metricsServer: auditConfig: logFormat: \"json\" 1 logOutputVolumeClaim: \"pvc-audit-log\" 2 policy: rules: 3 - level: Metadata omitStages: \"RequestReceived\" 4 omitManagedFields: false 5 lifetime: 6 maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\"",
"get pod -n openshift-keda",
"NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s",
"oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1",
"oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"4c81d41b-3dab-4675-90ce-20b87ce24013\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/healthz\",\"verb\":\"get\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.131.0.1\"],\"userAgent\":\"kube-probe/1.27\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2023-02-16T13:00:03.554567Z\",\"stageTimestamp\":\"2023-02-16T13:00:03.555032Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}",
"oc rsh pod/keda-metrics-apiserver-<hash> -n openshift-keda",
"oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n openshift-keda",
"sh-4.4USD cd /var/audit-policy/",
"sh-4.4USD ls",
"log-2023.02.17-14:50 policy.yaml",
"sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1",
"sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Request\",\"auditID\":\"63e7f68c-04ec-4f4d-8749-bf1656572a41\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/openapi/v2\",\"verb\":\"get\",\"user\":{\"username\":\"system:aggregator\",\"groups\":[\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.1\"],\"responseStatus\":{\"metadata\":{},\"code\":304},\"requestReceivedTimestamp\":\"2023-02-17T13:12:55.035478Z\",\"stageTimestamp\":\"2023-02-17T13:12:55.038346Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:discovery\\\" of ClusterRole \\\"system:discovery\\\" to Group \\\"system:authenticated\\\"\"}}",
"oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"oc import-image is/must-gather -n openshift",
"oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"IMAGE=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"",
"oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE}",
"└── openshift-keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── openshift-keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── previous.insecure.log │ │ └── previous.log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal",
"Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>",
"apiVersion: v1 kind: Pod spec: containers: - name: app image: images.my-company.example/app:v4 resources: limits: memory: \"128Mi\" cpu: \"500m\"",
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"0\" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: \"RequestReceived\" omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication",
"oc create -f <filename>.yaml",
"oc get scaledobject <scaled_object_name>",
"NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s",
"kind: ScaledJob apiVersion: keda.sh/v1alpha1 metadata: name: scaledjob namespace: my-namespace spec: failedJobsHistoryLimit: 5 jobTargetRef: activeDeadlineSeconds: 600 1 backoffLimit: 6 2 parallelism: 1 3 completions: 1 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] maxReplicaCount: 100 6 pollingInterval: 30 7 successfulJobsHistoryLimit: 5 8 failedJobsHistoryLimit: 5 9 envSourceContainerName: 10 rolloutStrategy: gradual 11 scalingStrategy: 12 strategy: \"custom\" customScalingQueueLengthDeduction: 1 customScalingRunningJobPercentage: \"0.5\" pendingPodConditions: - \"Ready\" - \"PodScheduled\" - \"AnyOtherCustomPodCondition\" multipleScalersCalculation : \"max\" triggers: - type: prometheus 13 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"bearer\" authenticationRef: 14 name: prom-cluster-triggerauthentication",
"oc create -f <filename>.yaml",
"oc get scaledjob <scaled_job_name>",
"NAME MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE scaledjob 100 prometheus prom-triggerauthentication True True 8s",
"oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh",
"oc get clusterrole | grep keda.sh",
"oc delete clusterrole.keda.sh-v1alpha1-admin",
"oc get clusterrolebinding | grep keda.sh",
"oc delete clusterrolebinding.keda.sh-v1alpha1-admin",
"oc delete project openshift-keda",
"oc delete operator/openshift-custom-metrics-autoscaler-operator.openshift-keda"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/nodes/automatically-scaling-pods-with-the-custom-metrics-autoscaler-operator |
Chapter 22. General Updates | Chapter 22. General Updates Shortening of long network device names Some network devices have unacceptably long names. This is due to certain firmware reporting meaningless data, such as the device's onboard index value, which the kernel passes to user-space. Previously, this resulted in problems with maximum name length, especially with VLANs. With this update, systemd rejects unacceptably long names and falls back to a different naming scheme. As a result, long network device names will no longer appear. IMPORTANT: This also means that names on existing installations might change, and the affected network devices will not go online. The change in name will happen on network cards with names enoX where X is more than 16383. This will mostly affect vmware machines, because their firmware has the described problem. (BZ#1230210) A fix for systemd to read the device identification bytes correctly Due to an endianness problem, the version of systemd in Red Hat Enterprise Linux 7.2 read the device identification bytes in a wrong order, causing the dev/disk/by-id/wwn-* symbolic links to be generated incorrectly. A patch has been applied to put the device identification bytes in the correct order and the symbolic links are now generated correctly. Any reference that depends on the value obtained from /dev/disk/by-id/wwn-* needs to be modified to work correctly in Red Hat Enterprise Linux 7.3 and later. (BZ# 1308795 ) The value of net.unix.max_dgram_qlen increased to 512 Previously, the default value of the net.unix.max_dgram_qlen kernel option was 16. As a consequence, when the network traffic was too high, certain services could terminate unexpectedly. This update sets the value to 512, thus preventing this problem. Users need to reboot the machine to apply this change. (BZ# 1267707 ) Links to non-root file systems in /lib/ and /lib64/ are removed by ldconfig.service Red Hat Enterprise Linux 7.2 introduced ldconfig.service , which is run at an early stage of the boot process, before non-root file systems are mounted. Before this update, when ldconfig.service was run, links in the /lib/ and /lib64/ directories were removed if they pointed to file systems which were not yet mounted. In Red Hat Enterprise Linux 7.3, ldconfig.service has been removed, and the problem no longer occurs. (BZ#1301990) systemd no longer hangs when many processes terminate in a short interval Previously, an inefficient algorithm for reaping processes caused the systemd service to become unresponsive when a large number of processes terminated in a short interval. With this update, the algorithm has been improved, and systemd is now able to reap the processes more quickly, which prevents the described systemd hang from occurring. (BZ#1360160) gnome-dictionary multilib packages conflicts no longer occur When both the 32-bit and 64-bit packages of the gnome-dictionary multilib packages were installed, upgrading from Red Hat Enterprise Linux 7.2 to Red Hat Enterprise Linux 7.3 failed. To fix this problem, the 32-bit package has been removed from Red Hat Enterprise Linux 7.3. As a result, upgrading in this situation works as expected. (BZ#1360338) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/bug_fixes_general_updates |
3.5. Configuring IP Networking with ifcfg Files | 3.5. Configuring IP Networking with ifcfg Files As a system administrator, you can configure a network interface manually, editing the ifcfg files. Interface configuration (ifcfg) files control the software interfaces for individual network devices. As the system boots, it uses these files to determine what interfaces to bring up and how to configure them. These files are usually named ifcfg- name , where the suffix name refers to the name of the device that the configuration file controls. By convention, the ifcfg file's suffix is the same as the string given by the DEVICE directive in the configuration file itself. Configuring an Interface with Static Network Settings Using ifcfg Files For example, to configure an interface with static network settings using ifcfg files, for an interface with the name enp1s0 , create a file with the name ifcfg-enp1s0 in the /etc/sysconfig/network-scripts/ directory, that contains: For IPv4 configuration DEVICE=enp1s0 BOOTPROTO=none ONBOOT=yes PREFIX=24 IPADDR=10.0.1.27 For IPv6 configuration DEVICE=enp1s0 BOOTPROTO=none ONBOOT=yes IPV6INIT=yes IPV6ADDR=2001:db8::2/48 You do not need to specify the network or broadcast address as this is calculated automatically by ipcalc . For more IPv6 ifcfg configuration options, see nm-settings-ifcfg-rh (5) man page. Important In Red Hat Enterprise Linux 7, the naming convention for network interfaces has been changed, as explained in Chapter 11, Consistent Network Device Naming . Specifying the hardware or MAC address using HWADDR directive can influence the device naming procedure. Configuring an Interface with Dynamic Network Settings Using ifcfg Files To configure an interface named em1 with dynamic network settings using ifcfg files: Create a file with the name ifcfg-em1 in the /etc/sysconfig/network-scripts/ directory, that contains: DEVICE=em1 BOOTPROTO=dhcp ONBOOT=yes To configure an interface to send a different host name to the DHCP server, add the following line to the ifcfg file: DHCP_HOSTNAME= hostname To configure an interface to send a different fully qualified domain name (FQDN) to the DHCP server, add the following line to the ifcfg file: DHCP_FQDN= fully.qualified.domain.name Note Only one directive, either DHCP_HOSTNAME or DHCP_FQDN , should be used in a given ifcfg file. In case both DHCP_HOSTNAME and DHCP_FQDN are specified, only the latter is used. To configure an interface to use particular DNS servers, add the following lines to the ifcfg file: PEERDNS=no DNS1= ip-address DNS2= ip-address where ip-address is the address of a DNS server. This will cause the network service to update /etc/resolv.conf with the specified DNS servers specified. Only one DNS server address is necessary, the other is optional. To configure static routes in the ifcfg file, see Section 4.5, "Configuring Static Routes in ifcfg files" . By default, NetworkManager calls the DHCP client, dhclient , when a profile has been set to obtain addresses automatically by setting BOOTPROTO to dhcp in an interface configuration file. If DHCP is required, an instance of dhclient is started for every Internet protocol, IPv4 and IPv6 , on an interface. If NetworkManager is not running, or is not managing an interface, then the legacy network service will call instances of dhclient as required. For more details on dynamic IP addresses, see Section 1.2, "Comparing Static to Dynamic IP Addressing" . To apply the configuration: Reload the updated connection files: Re-activate the connection: 3.5.1. Managing System-wide and Private Connection Profiles with ifcfg Files The permissions correspond to the USERS directive in the ifcfg files. If the USERS directive is not present, the network profile will be available to all users. As an example, the following command in an ifcfg file will make the connection available only to the users listed: USERS="joe bob alice" Also, you can set the USERCTL directive to manage the device: If you set yes , non- root users are allowed to control this device. If you set no , non- root users are not allowed to control this device. | [
"DEVICE=enp1s0 BOOTPROTO=none ONBOOT=yes PREFIX=24 IPADDR=10.0.1.27",
"DEVICE=enp1s0 BOOTPROTO=none ONBOOT=yes IPV6INIT=yes IPV6ADDR=2001:db8::2/48",
"DEVICE=em1 BOOTPROTO=dhcp ONBOOT=yes",
"PEERDNS=no DNS1= ip-address DNS2= ip-address",
"nmcli connection reload",
"nmcli connection up connection_name"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ip_networking_with_ifcg_files |
Chapter 18. Registering RHEL by using Subscription Manager | Chapter 18. Registering RHEL by using Subscription Manager Post-installation, you must register your system to get continuous updates. 18.1. Registering RHEL 9 using the installer GUI You can register a Red Hat Enterprise Linux 9 by using the RHEL installer GUI. Prerequisites You have a valid user account on the Red Hat Customer Portal. See the Create a Red Hat Login page . You have a valid Activation Key and Organization id. Procedure From the Installation Summary screen, under Software , click Connect to Red Hat . Authenticate your Red Hat account using the Account or Activation Key option. Optional: In the Set System Purpose field select the Role , SLA , and Usage attribute that you want to set from the drop-down menu. At this point, your Red Hat Enterprise Linux 9 system has been successfully registered. 18.2. Registration Assistant Registration Assistant is designed to help you choose the most suitable registration option for your Red Hat Enterprise Linux environment. Additional resources For assistance with using a username and password to register RHEL with the Subscription Manager client, see the RHEL registration assistant on the Customer Portal. For assistance with registering your RHEL system to Red Hat Insights, see the Insights registration assistant on the Hybrid Cloud Console. 18.3. Registering your system using the command line You can register your Red Hat Enterprise Linux 9 subscription by using the command line. For an improved and simplified experience registering your hosts to Red Hat, use remote host configuration (RHC). The RHC client registers your system to Red Hat making your system ready for Insights data collection and enabling direct issue remediation from Insights for Red Hat Enterprise Linux. For more information, see RHC registration . Prerequisites You have an active, non-evaluation Red Hat Enterprise Linux subscription. Your Red Hat subscription status is verified. You have not previously received a Red Hat Enterprise Linux 9 subscription. You have successfully installed Red Hat Enterprise Linux 9 and logged into the system as root. Procedure Open a terminal window as a root user. Register your Red Hat Enterprise Linux system by using the activation key: When the system is successfully registered, an output similar to the following is displayed: Additional resources Using an activation key to register a system with Red Hat Subscription Manager Getting Started with RHEL System Registration | [
"subscription-manager register --activationkey= <activation_key_name> --org= <organization_ID>",
"The system has been registered with id: 62edc0f8-855b-4184-b1b8-72a9dc793b96"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_over_the_network/registering-rhel-by-using-subscription-manager_rhel-installer |
7.67. grep | 7.67. grep 7.67.1. RHSA-2015:1447 - Low: grep security, bug fix, and enhancement update Updated grep packages that fix two security issues, several bugs, and add various enhancements are now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having Low security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links in the References section. The grep utility searches through textual input for lines that contain a match to a specified pattern and then prints the matching lines. The GNU grep utilities include grep, egrep, and fgrep. Security Fixes CVE-2012-5667 An integer overflow flaw, leading to a heap-based buffer overflow, was found in the way grep parsed large lines of data. An attacker able to trick a user into running grep on a specially crafted data file could use this flaw to crash grep or, potentially, execute arbitrary code with the privileges of the user running grep. CVE-2015-1345 A heap-based buffer overflow flaw was found in the way grep processed certain pattern and text combinations. An attacker able to trick a user into running grep on specially crafted input could use this flaw to crash grep or, potentially, read from uninitialized memory. The grep packages have been upgraded to upstream version 2.20, which provides a number of bug fixes and enhancements over the version. Notably, the speed of various operations has been improved significantly. Now, the recursive grep utility uses the fts function of the gnulib library for directory traversal, so that it can handle much larger directories without reporting the "File name too long" error message, and it can operate faster when dealing with large directory hierarchies. (BZ#982215, BZ#1064668, BZ#1126757, BZ#1167766, BZ#1171806) Bug Fixes BZ# 799863 Prior to this update, the \w and \W symbols were inconsistently matched to the [:alnum:] character class. Consequently, regular expressions that used \w and \W in some cases had incorrect results. An upstream patch which fixes the matching problem has been applied, and \w is now matched to the [_[:alnum:]] character and \W to the [^_[:alnum:]] character consistently. BZ# 1103270 Previously, the "--fixed-regexp" command-line option was not included in the grep(1) manual page. Consequently, the manual page was inconsistent with the built-in help of the grep utility. To fix this bug, grep(1) has been updated to include a note informing the user that "--fixed-regexp" is an obsolete option. Now, the built-in help and manual page are consistent regarding the "--fixed-regexp" option. BZ# 1193030 Previously, the Perl Compatible Regular Expression (PCRE) library did not work correctly when matching non-UTF-8 text in UTF-8 mode. Consequently, an error message about invalid UTF-8 byte sequence characters was returned. To fix this bug, patches from upstream have been applied to the PCRE library and the grep utility. As a result, PCRE now skips non-UTF-8 characters as non-matching text without returning any error message. All grep users are advised to upgrade to these updated packages, which correct these issues and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-grep |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/framework_for_upgrades_13_to_16.2/making-open-source-more-inclusive |
Chapter 4. Network considerations | Chapter 4. Network considerations Review the strategies for redirecting your application network traffic after migration. 4.1. DNS considerations The DNS domain of the target cluster is different from the domain of the source cluster. By default, applications get FQDNs of the target cluster after migration. To preserve the source DNS domain of migrated applications, select one of the two options described below. 4.1.1. Isolating the DNS domain of the target cluster from the clients You can allow the clients' requests sent to the DNS domain of the source cluster to reach the DNS domain of the target cluster without exposing the target cluster to the clients. Procedure Place an exterior network component, such as an application load balancer or a reverse proxy, between the clients and the target cluster. Update the application FQDN on the source cluster in the DNS server to return the IP address of the exterior network component. Configure the network component to send requests received for the application in the source domain to the load balancer in the target cluster domain. Create a wildcard DNS record for the *.apps.source.example.com domain that points to the IP address of the load balancer of the source cluster. Create a DNS record for each application that points to the IP address of the exterior network component in front of the target cluster. A specific DNS record has higher priority than a wildcard record, so no conflict arises when the application FQDN is resolved. Note The exterior network component must terminate all secure TLS connections. If the connections pass through to the target cluster load balancer, the FQDN of the target application is exposed to the client and certificate errors occur. The applications must not return links referencing the target cluster domain to the clients. Otherwise, parts of the application might not load or work properly. 4.1.2. Setting up the target cluster to accept the source DNS domain You can set up the target cluster to accept requests for a migrated application in the DNS domain of the source cluster. Procedure For both non-secure HTTP access and secure HTTPS access, perform the following steps: Create a route in the target cluster's project that is configured to accept requests addressed to the application's FQDN in the source cluster: USD oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> \ -n <app1-namespace> With this new route in place, the server accepts any request for that FQDN and sends it to the corresponding application pods. In addition, when you migrate the application, another route is created in the target cluster domain. Requests reach the migrated application using either of these hostnames. Create a DNS record with your DNS provider that points the application's FQDN in the source cluster to the IP address of the default load balancer of the target cluster. This will redirect traffic away from your source cluster to your target cluster. The FQDN of the application resolves to the load balancer of the target cluster. The default ingress controller router accept requests for that FQDN because a route for that hostname is exposed. For secure HTTPS access, perform the following additional step: Replace the x509 certificate of the default ingress controller created during the installation process with a custom certificate. Configure this certificate to include the wildcard DNS domains for both the source and target clusters in the subjectAltName field. The new certificate is valid for securing connections made using either DNS domain. Additional resources See Replacing the default ingress certificate for more information. 4.2. Network traffic redirection strategies After a successful migration, you must redirect network traffic of your stateless applications from the source cluster to the target cluster. The strategies for redirecting network traffic are based on the following assumptions: The application pods are running on both the source and target clusters. Each application has a route that contains the source cluster hostname. The route with the source cluster hostname contains a CA certificate. For HTTPS, the target router CA certificate contains a Subject Alternative Name for the wildcard DNS record of the source cluster. Consider the following strategies and select the one that meets your objectives. Redirecting all network traffic for all applications at the same time Change the wildcard DNS record of the source cluster to point to the target cluster router's virtual IP address (VIP). This strategy is suitable for simple applications or small migrations. Redirecting network traffic for individual applications Create a DNS record for each application with the source cluster hostname pointing to the target cluster router's VIP. This DNS record takes precedence over the source cluster wildcard DNS record. Redirecting network traffic gradually for individual applications Create a proxy that can direct traffic to both the source cluster router's VIP and the target cluster router's VIP, for each application. Create a DNS record for each application with the source cluster hostname pointing to the proxy. Configure the proxy entry for the application to route a percentage of the traffic to the target cluster router's VIP and the rest of the traffic to the source cluster router's VIP. Gradually increase the percentage of traffic that you route to the target cluster router's VIP until all the network traffic is redirected. User-based redirection of traffic for individual applications Using this strategy, you can filter TCP/IP headers of user requests to redirect network traffic for predefined groups of users. This allows you to test the redirection process on specific populations of users before redirecting the entire network traffic. Create a proxy that can direct traffic to both the source cluster router's VIP and the target cluster router's VIP, for each application. Create a DNS record for each application with the source cluster hostname pointing to the proxy. Configure the proxy entry for the application to route traffic matching a given header pattern, such as test customers , to the target cluster router's VIP and the rest of the traffic to the source cluster router's VIP. Redirect traffic to the target cluster router's VIP in stages until all the traffic is on the target cluster router's VIP. | [
"oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/migrating_from_version_3_to_4/planning-considerations-3-4 |
B.24. gdb | B.24. gdb B.24.1. RHBA-2011:0145 - gdb bug fix update Updated gdb packages that fix a bug are now available for Red Hat Enterprise Linux 6. The GNU debugger, gdb, allows the debugging of programs written in C, C++, and other languages by executing them in a controlled fashion and then printing out their data. Bug Fix BZ# 662218 After you issued the command 'info program', GDB could have terminated unexpectedly, because a change of the shared library list corrupted the data in the internal GDB structure 'bpstat'. With this update, the 'bpstat' structure contains after a change in the shared library list the correct data and the command 'info program' works as expected. All users of gdb are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/gdb |
Chapter 2. Architectures | Chapter 2. Architectures Red Hat Enterprise Linux 8.6 is distributed with the kernel version 4.18.0-372, which provides support for the following architectures: AMD and Intel 64-bit architectures The 64-bit ARM architecture IBM Power Systems, Little Endian 64-bit IBM Z Make sure you purchase the appropriate subscription for each architecture. For more information, see Get Started with Red Hat Enterprise Linux - additional architectures . For a list of available subscriptions, see Subscription Utilization on the Customer Portal. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.6_release_notes/architectures |
Using alt-java | Using alt-java Red Hat build of OpenJDK 11 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_alt-java/index |
13.2.21. Creating Domains: Kerberos Authentication | 13.2.21. Creating Domains: Kerberos Authentication Both LDAP and proxy identity providers can use a separate Kerberos domain to supply authentication. Configuring a Kerberos authentication provider requires the key distribution center (KDC) and the Kerberos domain. All of the principal names must be available in the specified identity provider; if they are not, SSSD constructs the principals using the format username@REALM . Note Kerberos can only provide authentication; it cannot provide an identity database. SSSD assumes that the Kerberos KDC is also a Kerberos kadmin server. However, production environments commonly have multiple, read-only replicas of the KDC and only a single kadmin server. Use the krb5_kpasswd option to specify where the password changing service is running or if it is running on a non-default port. If the krb5_kpasswd option is not defined, SSSD tries to use the Kerberos KDC to change the password. The basic Kerberos configuration options are listed in Table 13.10, "Kerberos Authentication Configuration Parameters" . The sssd-krb5(5) man page has more information about Kerberos configuration options. Example 13.13. Basic Kerberos Authentication Example 13.14. Setting Kerberos Ticket Renewal Options The Kerberos authentication provider, among other tasks, requests ticket granting tickets (TGT) for users and services. These tickets are used to generate other tickets dynamically for specific services, as accessed by the ticket principal (the user). The TGT initially granted to the user principal is valid only for the lifetime of the ticket (by default, whatever is configured in the configured KDC). After that, the ticket cannot be renewed or extended. However, not renewing tickets can cause problems with some services when they try to access a service in the middle of operations and their ticket has expired. Kerberos tickets are not renewable by default, but ticket renewal can be enabled using the krb5_renewable_lifetime and krb5_renew_interval parameters. The lifetime for a ticket is set in SSSD with the krb5_lifetime parameter. This specifies how long a single ticket is valid, and overrides any values in the KDC. Ticket renewal itself is enabled in the krb5_renewable_lifetime parameter, which sets the maximum lifetime of the ticket, counting all renewals. For example, the ticket lifetime is set at one hour and the renewable lifetime is set at 24 hours: This means that the ticket expires every hour and can be renewed continually up to one day. The lifetime and renewable lifetime values can be in seconds (s), minutes (m), hours (h), or days (d). The other option - which must also be set for ticket renewal - is the krb5_renew_interval parameter, which sets how frequently SSSD checks to see if the ticket needs to be renewed. At half of the ticket lifetime (whatever that setting is), the ticket is renewed automatically. (This value is always in seconds.) Note If the krb5_renewable_lifetime value is not set or the krb5_renew_interval parameter is not set or is set to zero (0), then ticket renewal is disabled. Both krb5_renewable_lifetime and krb5_renew_interval are required for ticket renewal to be enabled. Table 13.10. Kerberos Authentication Configuration Parameters Parameter Description chpass_provider Specifies which service to use for password change operations. This is assumed to be the same as the authentication provider. To use Kerberos, set this to krb5 . krb5_server Gives the primary Kerberos server, by IP address or host names, to which SSSD will connect. krb5_backup_server Gives a comma-separated list of IP addresses or host names of Kerberos servers to which SSSD will connect if the primary server is not available. The list is given in order of preference, so the first server in the list is tried first. After an hour, SSSD will attempt to reconnect to the primary service specified in the krb5_server parameter. When using service discovery for KDC or kpasswd servers, SSSD first searches for DNS entries that specify UDP as the connection protocol, and then falls back to TCP. krb5_realm Identifies the Kerberos realm served by the KDC. krb5_lifetime Requests a Kerberos ticket with the specified lifetime in seconds (s), minutes (m), hours (h) or days (d). krb5_renewable_lifetime Requests a renewable Kerberos ticket with a total lifetime that is specified in seconds (s), minutes (m), hours (h) or days (d). krb5_renew_interval Sets the time, in seconds, for SSSD to check if tickets should be renewed. Tickets are renewed automatically once they exceed half their lifetime. If this option is missing or set to zero, then automatic ticket renewal is disabled. krb5_store_password_if_offline Sets whether to store user passwords if the Kerberos authentication provider is offline, and then to use that cache to request tickets when the provider is back online. The default is false , which does not store passwords. krb5_kpasswd Lists alternate Kerberos kadmin servers to use if the change password service is not running on the KDC. krb5_ccname_template Gives the directory to use to store the user's credential cache. This can be templatized, and the following tokens are supported: %u , the user's login name %U , the user's login UID %p , the user's principal name %r , the realm name %h , the user's home directory %d , the value of the krb5ccache_dir parameter %P , the process ID of the SSSD client. %% , a literal percent sign (%) XXXXXX , a string at the end of the template which instructs SSSD to create a unique filename safely For example: krb5_ccachedir Specifies the directory to store credential caches. This can be templatized, using the same tokens as krb5_ccname_template , except for %d and %P . If %u , %U , %p , or %h are used, then SSSD creates a private directory for each user; otherwise, it creates a public directory. krb5_auth_timeout Gives the time, in seconds, before an online authentication or change password request is aborted. If possible, the authentication request is continued offline. The default is 15 seconds. | [
"A domain with identities provided by LDAP and authentication by Kerberos [domain/KRBDOMAIN] id_provider = ldap chpass_provider = krb5 ldap_uri = ldap://ldap.example.com ldap_search_base = dc=example,dc=com ldap-tls_reqcert = demand ldap_tls_cacert = /etc/pki/tls/certs/ca-bundle.crt auth_provider = krb5 krb5_server = kdc.example.com krb5_backup_server = kerberos.example.com krb5_realm = EXAMPLE.COM krb5_kpasswd = kerberos.admin.example.com krb5_auth_timeout = 15",
"krb5_lifetime = 1h krb5_renewable_lifetime = 1d",
"krb5_lifetime = 1h krb5_renewable_lifetime = 1d krb5_renew_interval = 60s",
"krb5_ccname_template = FILE:%d/krb5cc_%U_XXXXXX"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/Configuring_Domains-Setting_up_Kerberos_Authentication |
Using the automation calculator | Using the automation calculator Red Hat Ansible Automation Platform 2.4 Evaluate the cost savings associated with automated processes Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/using_the_automation_calculator/index |
Appendix H. Ceph scrubbing options | Appendix H. Ceph scrubbing options Ceph ensures data integrity by scrubbing placement groups. The following are the Ceph scrubbing options that you can adjust to increase or decrease scrubbing operations. You can set these configuration options with the ceph config set global CONFIGURATION_OPTION VALUE command. mds_max_scrub_ops_in_progress Description The maximum number of scrub operations performed in parallel. You can set this value with ceph config set mds_max_scrub_ops_in_progress VALUE command. Type integer Default 5 osd_max_scrubs Description The maximum number of simultaneous scrub operations for a Ceph OSD Daemon. Type integer Default 1 osd_scrub_begin_hour Description The specific hour at which the scrubbing begins. Along with osd_scrub_end_hour , you can define a time window in which the scrubs can happen. Use osd_scrub_begin_hour = 0 and osd_scrub_end_hour = 0 to allow scrubbing the entire day. Type integer Default 0 Allowed range [0, 23] osd_scrub_end_hour Description The specific hour at which the scrubbing ends. Along with osd_scrub_begin_hour , you can define a time window, in which the scrubs can happen. Use osd_scrub_begin_hour = 0 and osd_scrub_end_hour = 0 to allow scrubbing for the entire day. Type integer Default 0 Allowed range [0, 23] osd_scrub_begin_week_day Description The specific day on which the scrubbing begins. 0 = Sunday, 1 = Monday, etc. Along with "osd_scrub_end_week_day", you can define a time window in which scrubs can happen. Use osd_scrub_begin_week_day = 0 and osd_scrub_end_week_day = 0 to allow scrubbing for the entire week. Type integer Default 0 Allowed range [0, 6] osd_scrub_end_week_day Description This defines the day on which the scrubbing ends. 0 = Sunday, 1 = Monday, etc. Along with osd_scrub_begin_week_day , they define a time window, in which the scrubs can happen. Use osd_scrub_begin_week_day = 0 and osd_scrub_end_week_day = 0 to allow scrubbing for the entire week. Type integer Default 0 Allowed range [0, 6] osd_scrub_during_recovery Description Allow scrub during recovery. Setting this to false disables scheduling new scrub, and deep-scrub, while there is an active recovery. The already running scrubs continue which is useful to reduce load on busy storage clusters. Type boolean Default false osd_scrub_load_threshold Description The normalized maximum load. Scrubbing does not happen when the system load, as defined by getloadavg() / number of online CPUs, is higher than this defined number. Type float Default 0.5 osd_scrub_min_interval Description The minimal interval in seconds for scrubbing the Ceph OSD daemon when the Ceph storage Cluster load is low. Type float Default 1 day osd_scrub_max_interval Description The maximum interval in seconds for scrubbing the Ceph OSD daemon irrespective of cluster load. Type float Default 7 days osd_scrub_chunk_min Description The minimal number of object store chunks to scrub during a single operation. Ceph blocks writes to a single chunk during scrub. type integer Default 5 osd_scrub_chunk_max Description The maximum number of object store chunks to scrub during a single operation. type integer Default 25 osd_scrub_sleep Description Time to sleep before scrubbing the group of chunks. Increasing this value slows down the overall rate of scrubbing, so that client operations are less impacted. type float Default 0.0 osd_scrub_extended_sleep Description Duration to inject a delay during scrubbing out of scrubbing hours or seconds. type float Default 0.0 osd_scrub_backoff_ratio Description Backoff ratio for scheduling scrubs. This is the percentage of ticks that do NOT schedule scrubs, 66% means that 1 out of 3 ticks schedules scrubs. type float Default 0.66 osd_deep_scrub_interval Description The interval for deep scrubbing, fully reading all data. The osd_scrub_load_threshold does not affect this setting. type float Default 7 days osd_debug_deep_scrub_sleep Description Inject an expensive sleep during deep scrub IO to make it easier to induce preemption. type float Default 0 osd_scrub_interval_randomize_ratio Description Add a random delay to osd_scrub_min_interval when scheduling the scrub job for a placement group. The delay is a random value less than osd_scrub_min_interval * osd_scrub_interval_randomized_ratio . The default setting spreads scrubs throughout the allowed time window of [1, 1.5] * osd_scrub_min_interval . type float Default 0.5 osd_deep_scrub_stride Description Read size when doing a deep scrub. type size Default 512 KB osd_scrub_auto_repair_num_errors Description Auto repair does not occur if more than this many errors are found. type integer Default 5 osd_scrub_auto_repair Description Setting this to true enables automatic Placement Group (PG) repair when errors are found by scrubs or deep-scrubs. However, if more than osd_scrub_auto_repair_num_errors errors are found, a repair is NOT performed. type boolean Default false osd_scrub_max_preemptions Description Set the maximum number of times you need to preempt a deep scrub due to a client operation before blocking client IO to complete the scrub. type integer Default 5 osd_deep_scrub_keys Description Number of keys to read from an object at a time during deep scrub. type integer Default 1024 | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/configuration_guide/ceph-scrubbing-options_conf |
Chapter 3. Using observability with Red Hat Insights | Chapter 3. Using observability with Red Hat Insights Red Hat Insights is integrated with Red Hat Advanced Cluster Management observability, and is enabled to help identify existing or potential problems in your clusters. Red Hat Insights helps you to identify, prioritize, and resolve stability, performance, network, and security risks. Red Hat OpenShift Container Platform offers cluster health monitoring through Red Hat OpenShift Cluster Manager. Red Hat OpenShift Cluster Manager collects anonymized, aggregated information about the health, usage, and size of the clusters. For more information, see Red Hat Insights product documentation . When you create or import an OpenShift cluster, anonymized data from your managed cluster is automatically sent to Red Hat. This information is used to create insights, which provide cluster health information. Red Hat Advanced Cluster Management administrator can use this health information to create alerts based on severity. Required access : Cluster administrator 3.1. Prerequisites Ensure that Red Hat Insights is enabled. For more information, see Modifying the global cluster pull secret to disable remote health reporting . Install OpenShift Container Platform version 4.0 or later. Hub cluster user, who is registered to Red Hat OpenShift Cluster Manager, must be able to manage all the Red Hat Advanced Cluster Management managed clusters in Red Hat OpenShift Cluster Manager. 3.2. Managing insight PolicyReports Red Hat Advanced Cluster Management for Kubernetes PolicyReports are violations that are generated by the insights-client . The PolicyReports are used to define and configure alerts that are sent to incident management systems. When there is a violation, alerts from a PolicyReport are sent to incident management system. 3.2.1. Searching for insight policy reports You can search for a specific insight PolicyReport that has a violation, across your managed clusters. Complete the following steps: Log in to your Red Hat Advanced Cluster Management hub cluster. Select Search from the navigation menu. Enter the following query: kind:PolicyReport . Note: The PolicyReport name matches the name of the cluster. You can specify your query with the insight policy violation and categories. When you select a PolicyReport name, you are redirected to the Details page of the associated cluster. The Insights sidebar is automatically displayed. If the search service is disabled and you want to search for an insight, run the following command from your hub cluster: 3.2.2. Viewing identified issues from the console You can view the identified issues on a specific cluster. Complete the following steps: Log in to your Red Hat Advanced Cluster Management cluster. Select Overview from the navigation menu. Check the Cluster issues summary card. Select a severity link to view the PolicyReports that are associated with that severity. Details of the cluster issues and the severities are displayed from the Search page. Policy reports that are associated with the severity and have one or more issues appear. Select a policy report to view cluster details from the Clusters page. The Status card displays information about Nodes , Applications , Policy violations , and Identified issues . Select the Number of identified issues to view details. The Identified issues card represents the information from Red Hat insights. The Identified issues status displays the number of issues by severity. The triage levels used for the issues are the following severity categories: Critical , Major , Low , and Warning . Alternatively, you can select Clusters from the navigation menu. Select a managed cluster from the table to view more details. From the Status card, view the number of identified issues. Select the number of potential issues to view the severity chart and recommended remediations for the issues from the Potential issue side panel. You can also use the search feature to search for recommended remediations. The remediation option displays the Description of the vulnerability, Category that vulnerability is associated with, and the Total risk . Click the link to the vulnerability to view steps on How to remediate and the Reason for the vulnerability. Note: When you resolve the issue, you receive the Red Hat Insights every 30 minutes, and Red Hat Insights is updated every two hours. Be sure to verify which component sent the alert message from the PolicyReport . Navigate to the Governance page and select a specific PolicyReport . Select the Status tab and click the View details link to view the PolicyReport YAML file. Locate the source parameter, which informs you of the component that sent the violation. The value options are grc and insights . 3.2.3. Viewing update risk predictions View the potential risks for updating your managed clusters. Complete the following steps: Log in to your managed cluster. Go to the Overview page. From the Powered by Insights section, you can view the percentage of clusters with predicted risks, which are listed by severity. Select the number for the severity to view the list of clusters from the Clusters page. Select the cluster that you want, then click the Actions drop-down button. Click Upgrade clusters to view the risk for the upgdate. From the Upgrade clusters modal, find the Upgrade risks column and click the link for the number of risks to view information in the Hybrid Cloud console. 3.3. Additional resources Learn how to create custom alert rules for the PolicyReports , see Configuring Alertmanager for more information. See Observability service . | [
"get policyreport --all-namespaces"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/observability/using-rh-insights |
3.4. Creating a system image with Image Builder in the command-line interface | 3.4. Creating a system image with Image Builder in the command-line interface This procedure describes how to edit an existing Image Builder blueprint in the command-line interface. Prerequisites You have a blueprint prepared for the image. Procedure 1. Start the compose: Replace BLUEPRINT-NAME with name of the blueprint, and IMAGE-TYPE with the type of image. For possible values, see output of the composer-cli compose types command. The compose process starts in the background and the UUID of the compose is shown. 2. Wait until the compose is finished. Please, notice that this may take several minutes. To check the status of the compose: A finished compose shows a status value FINISHED . Identify the compose in the list by its UUID. 3. Once the compose is finished, download the resulting image file: Replace UUID with the UUID value shown in the steps. Alternatively, you can access the image file directly under the path /var/lib/lorax/composer/results/UUID/ . You can also download the logs using the composer-cli compose logs UUID command, or the metadata using the composer-cli compose metadata UUID command. | [
"composer-cli compose start BLUEPRINT-NAME IMAGE-TYPE",
"composer-cli compose status",
"composer-cli compose image UUID"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/image_builder_guide/sect-documentation-image_builder-test_chapter3-test_section_4 |
Chapter 5. Customizing the Dashboard | Chapter 5. Customizing the Dashboard The OpenStack dashboard for Red Hat OpenStack Platform uses a default theme ( RCUE ), which is stored inside the horizon container. You can customize the look and feel of the OpenStack dashboard by adding your own theme to the container image and customizing certain dashboard parameters. With this customization, you can modify the following elements: Logo Site colors Stylesheets HTML title Site branding link Help URL Note To ensure continued support for modified Red Hat OpenStack Platform container images, the resulting images must comply with the Red Hat Container Support Policy . 5.1. Obtaining the horizon container image You must obtain a copy of the horizon container image. You can pull this image either into the undercloud or a separate client system running podman . To pull the horizon container image, run the following command: You can now use this image as a basis for a modified image. 5.2. Obtaining the RCUE theme The horizon container image is configured to use the Red Hat branded RCUE theme by default. You can use this theme as a basis for your own theme and extract a copy from the container image. Procedure Make a directory for your theme: Start a container that executes a null loop. For example, run the following command: Copy the RCUE theme from the container to your local directory: Kill the container: You should now have a local copy of the RCUE theme. 5.3. Creating your own theme based on RCUE To use RCUE as a basis, copy the entire RCUE theme directory rcue to a new location, for example mytheme : To change a theme's colors, graphics, fonts, among others, edit the files in mytheme . When editing this theme, check for all instances of rcue and ensure that you change them to the new mytheme name. This includes paths, files, and directories. 5.4. Creating a file to enable your theme and customize the dashboard To enable your theme in the dashboard container, you must create a file to override the AVAILABLE_THEMES parameter. Create a new file called _12_mytheme_theme.py in the horizon-themes directory and add the following content: The 12 in the file name ensures this file is loaded after the RCUE file, which uses 11 , and overrides the AVAILABLE_THEMES parameter. You can also set custom parameters in the _12_mytheme_theme.py file. For example: SITE_BRANDING Set the HTML title that appears at the top of the browser window. For example: SITE_BRANDING_LINK Changes the hyperlink of the theme's logo, which normally redirects to horizon:user_home by default. For example: 5.5. Generating a modified horizon image When your custom theme is ready, you can create a new container image that enables and uses your theme. Use a dockerfile to generate a new container image using the original horizon image as a basis. The following is an example of a dockerfile : Save this file in your horizon-themes directory as dockerfile . To use the dockerfile to generate the new image, run the following command: The -t option names and tags the resulting image. It uses the following syntax: LOCATION This is usually the location of the container registry that the overcloud eventually pulls uses to pull images. In this instance, you will push this image to the undercloud's container registry, so set this to the undercloud IP and port. NAME For consistency, this is usually the same name as the original container image followed by the name of your theme. In this case, it is rhosp-rhel8/openstack-horizon-mytheme . TAG The tag for the image. Red Hat uses the version and release labels as a basis for this tag and it is usually a good idea to follow this convention. If you generate a new version of this image, increment the release (e.g. 0-2 ). Push the resulting image to the undercloud's container registry: Important If updating or upgrading Red Hat OpenStack Platform, you must reapply the theme to the new horizon image and push a new version of the modified image to the undercloud. 5.6. Using the modified container image in the overcloud To use the resulting container image with your overcloud deployment, edit the environment file that contains the list of container image locations. This environment file is usually named overcloud-images.yaml . Edit the ContainerHorizonConfigImage and ContainerHorizonImage parameters to point to your modified container image. For example: Save this new version of the overcloud-images.yaml file. 5.7. Editing puppet parameters Red Hat OpenStack Platform director provides a set of horizon parameters you can modify using environment files. You can also use the ExtraConfig hook to set Puppet hieradata. For example, the default help URL points to https://access.redhat.com/documentation/en/red-hat-openstack-platform . You can modify this URL with the following environment file content: 5.8. Deploying an overcloud with a customized Dashboard To deploy the overcloud with your dashboard customizations, include the following environment files: The environment file with your modified container image locations. The environment file with additional dashboard modifications. Any other environment files relevant to the configuration of your overcloud. For example: | [
"sudo podman pull registry.redhat.io/rhosp-rhel8/openstack-horizon",
"mkdir ~/horizon-themes cd ~/horizon-themes",
"sudo podman run --rm -d --name horizon-temp registry.redhat.io/rhosp-rhel8/openstack-horizon /usr/bin/sleep infinity",
"sudo podman cp -a horizon-temp:/usr/share/openstack-dashboard/openstack_dashboard/themes/rcue .",
"sudo podman kill horizon-temp",
"cp -r rcue mytheme",
"AVAILABLE_THEMES = [('mytheme', 'My Custom Theme', 'themes/mytheme')]",
"SITE_BRANDING = \"Example, Inc. Cloud\"",
"SITE_BRANDING_LINK = \"http://example.com\"",
"FROM registry.redhat.io/rhosp-rhel8/openstack-horizon MAINTAINER Acme LABEL name=\"rhosp-rhel8/openstack-horizon-mytheme\" vendor=\"Acme\" version=\"0\" release=\"1\" COPY mytheme /usr/share/openstack-dashboard/openstack_dashboard/themes/mytheme COPY _12_mytheme_theme.py /etc/openstack-dashboard/local_settings.d/_12_mytheme_theme.py RUN sudo chown horizon:horizon /etc/openstack-dashboard/local_settings.d/_12_mytheme_theme.py",
"sudo podman build . -t \"192.168.24.1:8787/rhosp-rhel8/openstack-horizon-mytheme:0-1\"",
"[LOCATION]/[NAME]:[TAG]",
"podman push 192.168.24.1:8787/rhosp-rhel8/openstack-horizon-mytheme:0-1",
"parameter_defaults: ContainerHorizonConfigImage: 192.168.24.1:8787/rhosp-rhel8/openstack-horizon-mytheme:0-1 ContainerHorizonImage: 192.168.24.1:8787/rhosp-rhel8/openstack-horizon-mytheme:0-1",
"parameter_defaults: ExtraConfig: horizon::help_url: \"http://openstack.example.com\"",
"openstack overcloud deploy --templates -e /home/stack/templates/overcloud-images.yaml -e /home/stack/templates/help_url.yaml [OTHER OPTIONS]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/introduction_to_the_openstack_dashboard/dashboard-customization |
16.4.3. Other Actions with guestfish | 16.4.3. Other Actions with guestfish You can also format file systems, create partitions, create and resize LVM logical volumes and much more, with commands such as mkfs , part-add , lvresize , lvcreate , vgcreate and pvcreate . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-other-actions-with-guestfish |
function::indent_depth | function::indent_depth Name function::indent_depth - returns the global nested-depth Synopsis Arguments delta the amount of depth added/removed for each call Description This function returns a number for appropriate indentation, similar to indent . Call it with a small positive or matching negative delta. Unlike the thread_indent_depth function, the indent does not track individual indent values on a per thread basis. | [
"indent_depth:long(delta:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-indent-depth |
Chapter 2. CephFS through NFS-Ganesha Installation | Chapter 2. CephFS through NFS-Ganesha Installation A typical Ceph file system (CephFS) through NFS installation in a Red Hat OpenStack Platform (RHOSP) environment includes the following configurations: OpenStack Controller nodes running containerized Ceph metadata server (MDS), Ceph monitor (MON), manila, and NFS-Ganesha services. Some of these services can coexist on the same node or can have one or more dedicated nodes. Ceph storage cluster with containerized object storage daemons (OSDs) running on Ceph storage nodes. An isolated StorageNFS network that provides access from projects to the NFS-Ganesha services for NFS share provisioning. Important The Shared File Systems service (manila) with CephFS through NFS fully supports serving shares to Red Hat OpenShift Container Platform through Manila CSI. This solution is not intended for large scale deployments. For important recommendations, see https://access.redhat.com/articles/6667651 . The Shared File Systems service (manila) provides APIs that allow the projects to request file system shares, which are fulfilled by driver modules. The driver for Red Hat CephFS, manila.share.drivers.cephfs.driver.CephFSDriver , means that you can use the Shared File Systems service as a CephFS back end. RHOSP director configures the driver to deploy the NFS-Ganesha gateway so that the CephFS shares are presented through the NFS 4.1 protocol. Using RHOSP director to deploy the Shared File Systems service with a CephFS back end on the overcloud automatically creates the required storage network defined in the heat template. For more information about network planning, see Overcloud networks in the Director Installation and Usage guide. Although you can manually configure the Shared File Systems service by editing its node /etc/manila/manila.conf file, RHOSP director can override any settings in future overcloud updates. The recommended method for configuring a Shared File Systems back end is through director. Use RHOSP director to create an extra StorageNFS network for storage traffic. Note Adding CephFS through NFS to an externally deployed Ceph cluster, which was not configured by Red Hat OpenStack Platform (RHOSP) director, is supported. Currently, only one CephFS back end can be defined in director. For more information, see Integrate with an existing Ceph Storage cluster in the Integrating an Overcloud with an Existing Red Hat Ceph Storage Cluster guide. 2.1. CephFS through NFS-Ganesha installation requirements CephFS through NFS has been fully supported since Red Hat OpenStack Platform version (RHOSP) 13. The RHOSP Shared File Systems service with CephFS through NFS for RHOSP 16.0 and later is supported for use with Red Hat Ceph Storage version 4.1 or later. For more information about how to determine the version of Ceph Storage installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions . Prerequisites You install the Shared File Systems service on Controller nodes, as is the default behavior. You install the NFS-Ganesha gateway service on the Pacemaker cluster of the Controller node. You configure only a single instance of a CephFS back end to use the Shared File Systems service. You can use other non-CephFS back ends with the single CephFS back end. 2.2. File shares File shares are handled differently between the OpenStack Shared File Systems service (manila), Ceph File System (CephFS), and Ceph through NFS. The Shared File Systems service provides shares, where a share is an individual file system namespace and a unit of storage with a defined size. Shared file system storage inherently allows multiple clients to connect, read, and write data to any given share, but you must give each client access to the share through the Shared File Systems service access control APIs before they can connect. With CephFS, a share is considered a directory with a defined quota and a layout that points to a particular storage pool or namespace. CephFS quotas limit the size of a directory to the size share that the Shared File Systems service creates. Access to CephFS through NFS shares is provided by specifying the IP address of the client. With CephFS through NFS, file shares are provisioned and accessed through the NFS protocol. The NFS protocol also handles security. 2.3. Installing the ceph-ansible package Install the ceph-ansible package to be installed on an undercloud node to deploy containerized Ceph. Procedure Log in to an undercloud node as the stack user. Install the ceph-ansible package: 2.4. Generating the custom roles file For security, isolate NFS traffic to a separate network when using CephFS through NFS so that the Ceph NFS server is accessible only through the isolated network. Deployers can constrain the isolated network to a select group of projects in the cloud. Red Hat OpenStack director ships with support to deploy a dedicated StorageNFS network. To configure and use the StorageNFS network, a custom Controller role is required. Important It is possible to omit the creation of an isolated network for NFS traffic. However, Red Hat strongly discourages such setups for production deployments that have untrusted clients. When omitting the StorageNFS network, director can connect the Ceph NFS server on any shared non-isolated network, such as the external network. Shared non-isolated networks are typically routable to all user private networks in the cloud. When the NFS server is on such a network, you cannot control access to OpenStack Shared File Systems service (manila) shares through specific client IP access rules. Users would have to use the generic 0.0.0.0/0 IP to allow access to their shares. The shares are then mountable to anyone who discovers the export path. The ControllerStorageNFS custom role configures the isolated StorageNFS network. This role is similar to the default Controller.yaml role file with the addition of the StorageNFS network and the CephNfs service, indicated by the OS::TripleO::Services:CephNfs command. For more information about the openstack overcloud roles generate command, see Roles in the Advanced Overcloud Customization guide. The openstack overcloud roles generate command creates a custom roles_data.yaml file including the services specified after -o . In the following example, the roles_data.yaml file created has the services for ControllerStorageNfs , Compute , and CephStorage . Note If you have an existing roles_data.yaml file, modify it to add ControllerStorageNfs , Compute , and CephStorage services to the configuration file. For more information, see Roles in the Advanced Overcloud Customization guide. Procedure Log in to an undercloud node as the stack user, Use the openstack overcloud roles generate command to create the roles_data.yaml file: 2.5. Deploying the updated environment When you are ready to deploy your environment, use the openstack overcloud deploy command with the custom environments and roles required to run CephFS with NFS-Ganesha. The overcloud deploy command has the following options in addition to other required options. Action Option Additional information Add the extra StorageNFS network with network_data_ganesha.yaml . -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml The StorageNFS and network_data_ganesha.yaml file . You can omit this option if you do not want to isolate NFS traffic to a separate network. For more information, see Generating the custom roles file . Add the custom roles defined in the roles_data.yaml file from the section. -r /home/stack/roles_data.yaml You can omit this option if you do not want to isolate NFS traffic to a separate network. For more information, see Generating the custom roles file . Deploy the Ceph daemons with ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml Initiating Overcloud Deployment in the Deploying an Overcloud with Containerized Red Hat Ceph guide. Deploy the Ceph metadata server with ceph-mds.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml Initiating Overcloud Deployment in the Deploying an Overcloud with Containerized Red Hat Ceph guide Deploy the Shared File Systems (manila) service with the CephFS through NFS back end. Configure NFS-Ganesha with director. -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml The manila-cephfsganesha-config.yaml environment file The following example shows an openstack overcloud deploy command with options to deploy CephFS through NFS-Ganesha, Ceph cluster, Ceph MDS, and the isolated StorageNFS network: For more information about the openstack overcloud deploy command, see Deployment command in the Director Installation and Usage guide. 2.5.1. The StorageNFS and network_data_ganesha.yaml file Use composable networks to define custom networks and assign them to any role. Instead of using the standard network_data.yaml file, you can configure the StorageNFS composable network with the network_data_ganesha.yaml file. Both of these roles are available in the /usr/share/openstack-tripleo-heat-templates directory. IMPORTANT If you do not define the Storage NFS network, director defaults to the external network. Although the external network can be useful in test and prototype environments, security on the external network is not sufficient for production environments. For example, if you expose the NFS service on the external network, a denial of service (DoS) attack can disrupt controller API access to all cloud users, not only consumers of NFS shares. By contrast, when you deploy the NFS service on a dedicated Storage NFS network, potential DoS attacks can target only NFS shares in the cloud. In addition to potential security risks, when you deploy the NFS service on an external network, additional routing configurations are required for precise access control to shares. On the Storage NFS network, however, you can use the client IP address on the network to achieve precise access control. The network_data_ganesha.yaml file contains an additional section that defines the isolated StorageNFS network. Although the default settings work for most installations, you must edit the YAML file to add your network settings, including the VLAN ID, subnet, and other settings. For more information about composable networks, see Using Composable Networks in the Advanced Overcloud Customization guide. 2.5.2. The CephFS back-end environment file The integrated environment file for defining a CephFS back end, manila-cephfsganesha-config.yaml , is located in /usr/share/openstack-tripleo-heat-templates/environments/ . The manila-cephfsganesha-config.yaml environment file contains settings relevant to the deployment of the Shared File Systems service (manila). The back-end default settings work for most environments. The following example shows the default values that director uses during deployment of the Shared File Systems service: The parameter_defaults header signifies the start of the configuration. To override default values set in resource_registry , copy this manila-cephfsganesha-config.yaml environment file to your local environment file directory, /home/stack/templates/ , and edit the parameter settings as required by your environment. This includes values set by OS::Tripleo::Services::ManilaBackendCephFs , which sets defaults for a CephFS back end. 1 ManilaCephFSBackendName sets the name of the manila configuration of your CephFS back end. In this case, the default back-end name is cephfs . 2 ManilaCephFSDriverHandlesShareServers controls the lifecycle of the share server. When set to false , the driver does not handle the lifecycle. This is the only supported option. 3 ManilaCephFSCephFSAuthId defines the Ceph auth ID that director creates for the manila service to access the Ceph cluster. 4 ManilaCephFSCephFSEnableSnapshots controls snapshot activation. Snapshots are supported with Ceph Storage 4.1 and later but the value of this parameter defaults to false . Set the value to true to ensure that the driver reports the snapshot_support capability to the Shared File Systems scheduler. For more information about environment files, see Environment Files in the Director Installation and Usage guide. | [
"[stack@undercloud-0 ~]USD sudo dnf install -y ceph-ansible [stack@undercloud-0 ~]USD sudo dnf list ceph-ansible Installed Packages ceph-ansible.noarch 4.0.23-1.el8cp @rhelosp-ceph-4-tools",
"[stack@undercloud ~]USD cd /usr/share/openstack-tripleo-heat-templates/roles [stack@undercloud roles]USD diff Controller.yaml ControllerStorageNfs.yaml 16a17 > - StorageNFS 50a45 > - OS::TripleO::Services::CephNfs",
"[stack@undercloud ~]USD openstack overcloud roles generate --roles-path /usr/share/openstack-tripleo-heat-templates/roles -o /home/stack/roles_data.yaml ControllerStorageNfs Compute CephStorage",
"[stack@undercloud ~]USD openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml -r /home/stack/roles_data.yaml -e /home/stack/containers-default-parameters.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /home/stack/network-environment.yaml -e/usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml",
"name: StorageNFS enabled: true vip: true name_lower: storage_nfs vlan: 70 ip_subnet: '172.17.0.0/20' allocation_pools: [{'start': '172.17.0.4', 'end': '172.17.0.250'}] ipv6_subnet: 'fd00:fd00:fd00:7000::/64' ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::4', 'end': 'fd00:fd00:fd00:7000::fffe'}]",
"[stack@undercloud ~]USD cat /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml A Heat environment file which can be used to enable a a Manila CephFS-NFS driver backend. resource_registry: OS::TripleO::Services::ManilaApi: ../deployment/manila/manila-api-container-puppet.yaml OS::TripleO::Services::ManilaScheduler: ../deployment/manila/manila-scheduler-container-puppet.yaml # Only manila-share is pacemaker managed: OS::TripleO::Services::ManilaShare: ../deployment/manila/manila-share-pacemaker-puppet.yaml OS::TripleO::Services::ManilaBackendCephFs: ../deployment/manila/manila-backend-cephfs.yaml # ceph-nfs (ganesha) service is installed and configured by ceph-ansible # but it's still managed by pacemaker OS::TripleO::Services::CephNfs: ../deployment/ceph-ansible/ceph-nfs.yaml parameter_defaults: ManilaCephFSBackendName: cephfs 1 ManilaCephFSDriverHandlesShareServers: false 2 ManilaCephFSCephFSAuthId: 'manila' 3 ManilaCephFSCephFSEnableSnapshots: true 4 # manila cephfs driver supports either native cephfs backend - 'CEPHFS' # (users mount shares directly from ceph cluster), or nfs-ganesha backend - # 'NFS' (users mount shares through nfs-ganesha server) ManilaCephFSCephFSProtocolHelperType: 'NFS'"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/deploying_the_shared_file_systems_service_with_cephfs_through_nfs/assembly-cephfs-install_cephfs-nfs |
Chapter 5. Orchestration | Chapter 5. Orchestration The director uses Heat Orchestration Templates (HOT) as a template format for its Overcloud deployment plan. Templates in HOT format are usually expressed in YAML format. The purpose of a template is to define and create a stack , which is a collection of resources that Heat creates, and the configuration of the resources. Resources are objects in OpenStack and can include compute resources, network configuration, security groups, scaling rules, and custom resources. Note The Heat template file extension must be .yaml or .template , or it will not be treated as a custom template resource. This chapter provides some basics for understanding the HOT syntax so that you can create your own template files. 5.1. Learning Heat Template Basics 5.1.1. Understanding Heat Templates The structure of a Heat template has three main sections: Parameters These are settings passed to Heat, which provide a way to customize a stack, and any default values for parameters without passed values. These settings are defined in the parameters section of a template. Resources These are the specific objects to create and configure as part of a stack. OpenStack contains a set of core resources that span across all components. These are defined in the resources section of a template. Output These are values passed from Heat after the creation of the stack. You can access these values either through the Heat API or client tools. These are defined in the output section of a template. Here is an example of a basic Heat template: This template uses the resource type type: OS::Nova::Server to create an instance called my_instance with a particular flavor, image, and key. The stack can return the value of instance_name , which is called My Cirros Instance . Important A Heat template also requires the heat_template_version parameter, which defines the syntax version to use and the functions available. For more information, see the Official Heat Documentation . 5.1.2. Understanding Environment Files An environment file is a special type of template that provides customization for your Heat templates. This includes three key parts: Resource Registry This section defines custom resource names, linked to other Heat templates. This provides a method to create custom resources that do not exist within the core resource collection. These are defined in the resource_registry section of an environment file. Parameters These are common settings you apply to the top-level template's parameters. For example, if you have a template that deploys nested stacks, such as resource registry mappings, the parameters only apply to the top-level template and not templates for the nested resources. Parameters are defined in the parameters section of an environment file. Parameter Defaults These parameters modify the default values for parameters in all templates. For example, if you have a Heat template that deploys nested stacks, such as resource registry mappings,the parameter defaults apply to all templates. The parameter defaults are defined in the parameter_defaults section of an environment file. Important It is recommended to use parameter_defaults instead of parameters When creating custom environment files for your Overcloud. This is so the parameters apply to all stack templates for the Overcloud. An example of a basic environment file: For example, this environment file ( my_env.yaml ) might be included when creating a stack from a certain Heat template ( my_template.yaml ). The my_env.yaml files creates a new resource type called OS::Nova::Server::MyServer . The myserver.yaml file is a Heat template file that provides an implementation for this resource type that overrides any built-in ones. You can include the OS::Nova::Server::MyServer resource in your my_template.yaml file. The MyIP applies a parameter only to the main Heat template that deploys along with this environment file. In this example, it only applies to the parameters in my_template.yaml . The NetworkName applies to both the main Heat template (in this example, my_template.yaml ) and the templates associated with resources included the main template, such as the OS::Nova::Server::MyServer resource and its myserver.yaml template in this example. Note The environment file extension must be .yaml or .template , or it will not be treated as a custom template resource. 5.2. Obtaining the Default Director Templates The director uses an advanced Heat template collection used to create an Overcloud. This collection is available from the openstack group on Github in the openstack-tripleo-heat-templates repository. To obtain a clone of this template collection, run the following command: Note The Red Hat-specific version of this template collection is available from the openstack-tripleo-heat-template package, which installs the collection to /usr/share/openstack-tripleo-heat-templates . There are many Heat templates and environment files in this collection. However, the main files and directories to note in this template collection are: overcloud.j2.yaml This is the main template file used to create the Overcloud environment. This file uses Jinja2 syntax to iterate over certain sections in the template to create custom roles. The Jinja2 formatting is rendered into YAML during the Overcloud deployment process. overcloud-resource-registry-puppet.j2.yaml This is the main environment file used to create the Overcloud environment. It provides a set of configurations for Puppet modules stored on the Overcloud image. After the director writes the Overcloud image to each node, Heat starts the Puppet configuration for each node using the resources registered in this environment file. This file uses Jinja2 syntax to iterate over certain sections in the template to create custom roles. The Jinja2 formatting is rendered into YAML during the overcloud deployment process. roles_data.yaml A file that defines the roles in an overcloud and maps services to each role. network_data.yaml A file that defines the networks in an overcloud and their properties such as subnets, allocation pools, and VIP status. The default network_data file contains the default networks: External, Internal Api, Storage, Storage Management, Tenant, and Management. You can create a custom network_data file and add it to your openstack overcloud deploy command with the -n option. plan-environment.yaml A file that defines the metadata for your overcloud plan. This includes the plan name, main template to use, and environment files to apply to the overcloud. capabilities-map.yaml A mapping of environment files for an overcloud plan. Use this file to describe and enable environment files through the director's web UI. Custom environment files detected in the environments directory in an overcloud plan but not defined in the capabilities-map.yaml are listed in the Other subtab of 2 Specify Deployment Configuration > Overall Settings on the web UI. environments Contains additional Heat environment files that you can use with your Overcloud creation. These environment files enable extra functions for your resulting OpenStack environment. For example, the directory contains an environment file for enabling Cinder NetApp backend storage ( cinder-netapp-config.yaml ). Any environment files detected in this directory that are not defined in the capabilities-map.yaml file are listed in the Other subtab of 2 Specify Deployment Configuration > Overall Settings in the director's web UI. network A set of Heat templates to help create isolated networks and ports. puppet Templates mostly driven by configuration with puppet. The aforementioned overcloud-resource-registry-puppet.j2.yaml environment file uses the files in this directory to drive the application of the Puppet configuration on each node. puppet/services A directory containing Heat templates for all services in the composable service architecture. extraconfig Templates used to enable extra functionality. firstboot Provides example first_boot scripts that the director uses when initially creating the nodes. This provides a general overview of the templates the director uses for orchestrating the Overcloud creation. The few sections show how to create your own custom templates and environment files that you can add to an Overcloud deployment. 5.3. First Boot: Customizing First Boot Configuration The director provides a mechanism to perform configuration on all nodes upon the initial creation of the Overcloud. The director achieves this through cloud-init , which you can call using the OS::TripleO::NodeUserData resource type. In this example, update the nameserver with a custom IP address on all nodes. First, create a basic Heat template ( /home/stack/templates/nameserver.yaml ) that runs a script to append each node's resolv.conf with a specific nameserver. You can use the OS::TripleO::MultipartMime resource type to send the configuration script. Create an environment file ( /home/stack/templates/firstboot.yaml ) that registers your Heat template as the OS::TripleO::NodeUserData resource type. To add the first boot configuration, add the environment file to the stack along with your other environment files when first creating the Overcloud. For example: The -e applies the environment file to the Overcloud stack. This adds the configuration to all nodes when they are first created and boot for the first time. Subsequent inclusions of these templates, such as updating the Overcloud stack, does not run these scripts. Important You can only register the OS::TripleO::NodeUserData to one Heat template. Subsequent usage overrides the Heat template to use. This achieves the following: OS::TripleO::NodeUserData is a director-based Heat resource used in other templates in the collection and applies first boot configuration to all nodes. This resource passes data for use in cloud-init . The default NodeUserData refers to a Heat template that produces a blank value ( firstboot/userdata_default.yaml ). In our case, our firstboot.yaml environment file replaces this default with a reference to our own nameserver.yaml file. nameserver_config defines our Bash script to run on first boot. The OS::Heat::SoftwareConfig resource defines it as a piece of configuration to apply. userdata converts the configuration from nameserver_config into a multi-part MIME message using the OS::Heat::MultipartMime resource. The outputs provides an output parameter OS::stack_id which takes the MIME message from userdata and provides it to the the Heat template/resource calling it. As a result, each node runs the following Bash script on its first boot: This example shows how Heat template pass and modfy configuration from one resource to another. It also shows how to use environment files to register new Heat resources or modify existing ones. 5.4. Pre-Configuration: Customizing Specific Overcloud Roles Important versions of this document used the OS::TripleO::Tasks::*PreConfig resources to provide pre-configuration hooks on a per role basis. The director's Heat template collection requires dedicated use of these hooks, which means you should not use them for custom use. Instead, use the OS::TripleO::*ExtraConfigPre hooks outlined below. The Overcloud uses Puppet for the core configuration of OpenStack components. The director provides a set of hooks to provide custom configuration for specific node roles after the first boot completes and before the core configuration begins. These hooks include: OS::TripleO::ControllerExtraConfigPre Additional configuration applied to Controller nodes before the core Puppet configuration. OS::TripleO::ComputeExtraConfigPre Additional configuration applied to Compute nodes before the core Puppet configuration. OS::TripleO::CephStorageExtraConfigPre Additional configuration applied to Ceph Storage nodes before the core Puppet configuration. OS::TripleO::ObjectStorageExtraConfigPre Additional configuration applied to Object Storage nodes before the core Puppet configuration. OS::TripleO::BlockStorageExtraConfigPre Additional configuration applied to Block Storage nodes before the core Puppet configuration. OS::TripleO::[ROLE]ExtraConfigPre Additional configuration applied to custom nodes before the core Puppet configuration. Replace [ROLE] with the composable role name. In this example, you first create a basic Heat template ( /home/stack/templates/nameserver.yaml ) that runs a script to write to a node's resolv.conf with a variable nameserver. In this example, the resources section contains the following parameters: CustomExtraConfigPre This defines a software configuration. In this example, we define a Bash script and Heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter. CustomExtraDeploymentPre This executes a software configuration, which is the software configuration from the CustomExtraConfigPre resource. Note the following: The config parameter makes a reference to the CustomExtraConfigPre resource so Heat knows what configuration to apply. The server parameter retrieves a map of the Overcloud nodes. This parameter is provided by the parent template and is mandatory in templates for this hook. The actions parameter defines when to apply the configuration. In this case, apply the configuration when the Overcloud is created. Possible actions include CREATE , UPDATE , DELETE , SUSPEND , and RESUME . input_values contains a parameter called deploy_identifier , which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update. This ensures the resource reapplies on subsequent overcloud updates. Create an environment file ( /home/stack/templates/pre_config.yaml ) that registers your Heat template to the role-based resource type. For example, to apply only to Controller nodes, use the ControllerExtraConfigPre hook: To apply the configuration, add the environment file to the stack along with your other environment files when creating or updating the Overcloud. For example: This applies the configuration to all Controller nodes before the core configuration begins on either the initial Overcloud creation or subsequent updates. Important You can only register each resource to only one Heat template per hook. Subsequent usage overrides the Heat template to use. This achieves the following: OS::TripleO::ControllerExtraConfigPre is a director-based Heat resource used in the configuration templates in the Heat template collection. This resource passes configuration to each Controller node. The default ControllerExtraConfigPre refers to a Heat template that produces a blank value ( puppet/extraconfig/pre_deploy/default.yaml ). In our case, our pre_config.yaml environment file replaces this default with a reference to our own nameserver.yaml file. The environment file also passes the nameserver_ip as a parameter_default value for our environment. This is a parameter that stores the IP address of our nameserver. The nameserver.yaml Heat template then accepts this parameter as defined in the parameters section. The template defines CustomExtraConfigPre as a configuration resource through OS::Heat::SoftwareConfig . Note the group: script property. The group defines the software configuration tool to use, which are available through a set of hooks for Heat. In this case, the script hook runs an executable script that you define in the SoftwareConfig resource as the config property. The script itself appends /etc/resolve.conf with the nameserver IP address. Note the str_replace attribute, which allows you to replace variables in the template section with parameters in the params section. In this case, we set the NAMESERVER_IP to the nameserver IP address, which substitutes the same variable in the script. This results in the following script: This example shows how to create a Heat template that defines a configuration and deploys it using the OS::Heat::SoftwareConfig and OS::Heat::SoftwareDeployments before the core configuration. It also shows how to define parameters in your environment file and pass them to templates in the configuration. 5.5. Pre-Configuration: Customizing All Overcloud Roles The Overcloud uses Puppet for the core configuration of OpenStack components. The director provides a hook to configure all node types after the first boot completes and before the core configuration begins: OS::TripleO::NodeExtraConfig Additional configuration applied to all nodes roles before the core Puppet configuration. In this example, create a basic Heat template ( /home/stack/templates/nameserver.yaml ) that runs a script to append each node's resolv.conf with a variable nameserver. In this example, the resources section contains the following parameters: CustomExtraConfigPre This defines a software configuration. In this example, we define a Bash script and Heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter. CustomExtraDeploymentPre This executes a software configuration, which is the software configuration from the CustomExtraConfigPre resource. Note the following: The config parameter makes a reference to the CustomExtraConfigPre resource so Heat knows what configuration to apply. The server parameter retrieves a map of the Overcloud nodes. This parameter is provided by the parent template and is mandatory in templates for this hook. The actions parameter defines when to apply the configuration. In this case, we only apply the configuration when the Overcloud is created. Possible actions include CREATE , UPDATE , DELETE , SUSPEND , and RESUME . The input_values parameter contains a sub-parameter called deploy_identifier , which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update. This ensures the resource reapplies on subsequent overcloud updates. , create an environment file ( /home/stack/templates/pre_config.yaml ) that registers your heat template as the OS::TripleO::NodeExtraConfig resource type. To apply the configuration, add the environment file to the stack along with your other environment files when creating or updating the Overcloud. For example: This applies the configuration to all nodes before the core configuration begins on either the initial Overcloud creation or subsequent updates. Important You can only register the OS::TripleO::NodeExtraConfig to only one Heat template. Subsequent usage overrides the Heat template to use. This achieves the following: OS::TripleO::NodeExtraConfig is a director-based Heat resource used in the configuration templates in the Heat template collection. This resource passes configuration to each node. The default NodeExtraConfig refers to a Heat template that produces a blank value ( puppet/extraconfig/pre_deploy/default.yaml ). In our case, our pre_config.yaml environment file replaces this default with a reference to our own nameserver.yaml file. The environment file also passes the nameserver_ip as a parameter_default value for our environment. This is a parameter that stores the IP address of our nameserver. The nameserver.yaml Heat template then accepts this parameter as defined in the parameters section. The template defines CustomExtraConfigPre as a configuration resource through OS::Heat::SoftwareConfig . Note the group: script property. The group defines the software configuration tool to use, which are available through a set of hooks for Heat. In this case, the script hook runs an executable script that you define in the SoftwareConfig resource as the config property. The script itself appends /etc/resolve.conf with the nameserver IP address. Note the str_replace attribute, which allows you to replace variables in the template section with parameters in the params section. In this case, we set the NAMESERVER_IP to the nameserver IP address, which substitutes the same variable in the script. This results in the following script: This example shows how to create a Heat template that defines a configuration and deploys it using the OS::Heat::SoftwareConfig and OS::Heat::SoftwareDeployments before the core configuration. It also shows how to define parameters in your environment file and pass them to templates in the configuration. 5.6. Post-Configuration: Customizing All Overcloud Roles Important versions of this document used the OS::TripleO::Tasks::*PostConfig resources to provide post-configuration hooks on a per role basis. The director's Heat template collection requires dedicated use of these hooks, which means you should not use them for custom use. Instead, use the OS::TripleO::NodeExtraConfigPost hook outlined below. A situation might occur where you have completed the creation of your Overcloud but want to add additional configuration to all roles, either on initial creation or on a subsequent update of the Overcloud. In this case, you use the following post-configuration hook: OS::TripleO::NodeExtraConfigPost Additional configuration applied to all nodes roles after the core Puppet configuration. In this example, you first create a basic heat template ( /home/stack/templates/nameserver.yaml ) that runs a script to append each node's resolv.conf with a variable nameserver. In this example, the resources section contains the following: CustomExtraConfig This defines a software configuration. In this example, we define a Bash script and Heat replaces _NAMESERVER_IP_ with the value stored in the nameserver_ip parameter. CustomExtraDeployments This executes a software configuration, which is the software configuration from the CustomExtraConfig resource. Note the following: The config parameter makes a reference to the CustomExtraConfig resource so Heat knows what configuration to apply. The servers parameter retrieves a map of the Overcloud nodes. This parameter is provided by the parent template and is mandatory in templates for this hook. The actions parameter defines when to apply the configuration. In this case, we apply the configuration when the Overcloud is created. Possible actions include CREATE , UPDATE , DELETE , SUSPEND , and RESUME . input_values contains a parameter called deploy_identifier , which stores the DeployIdentifier from the parent template. This parameter provides a timestamp to the resource for each deployment update. This ensures the resource reapplies on subsequent overcloud updates. Create an environment file ( /home/stack/templates/post_config.yaml ) that registers your Heat template as the OS::TripleO::NodeExtraConfigPost: resource type. To apply the configuration, add the environment file to the stack along with your other environment files when creating or updating the Overcloud. For example: This applies the configuration to all nodes after the core configuration completes on either initial Overcloud creation or subsequent updates. Important You can only register the OS::TripleO::NodeExtraConfigPost to only one Heat template. Subsequent usage overrides the Heat template to use. This achieves the following: OS::TripleO::NodeExtraConfigPost is a director-based Heat resource used in the post-configuration templates in the collection. This resource passes configuration to each node type through the *-post.yaml templates. The default NodeExtraConfigPost refers to a Heat template that produces a blank value ( extraconfig/post_deploy/default.yaml ). In our case, our post_config.yaml environment file replaces this default with a reference to our own nameserver.yaml file. The environment file also passes the nameserver_ip as a parameter_default value for our environment. This is a parameter that stores the IP address of our nameserver. The nameserver.yaml Heat template then accepts this parameter as defined in the parameters section. The template defines CustomExtraConfig as a configuration resource through OS::Heat::SoftwareConfig . Note the group: script property. The group defines the software configuration tool to use, which are available through a set of hooks for Heat. In this case, the script hook runs an executable script that your define in the SoftwareConfig resource as the config property. The script itself appends /etc/resolve.conf with the nameserver IP address. Note the str_replace attribute, which allows you to replace variables in the template section with parameters in the params section. In this case, we set the NAMESERVER_IP to the nameserver IP address, which substitutes the same variable in the script. This results in the following script: This example shows how to create a Heat template that defines a configuration and deploys it using the OS::Heat::SoftwareConfig and OS::Heat::SoftwareDeployments . It also shows how to define parameters in your environment file and pass them to templates in the configuration. 5.7. Puppet: Applying Custom Configuration to an Overcloud Previously, we discussed adding configuration for a new backend to OpenStack Puppet modules. This section show how the director executes the application of new configuration. Heat templates provide a hook allowing you to apply Puppet configuration with a OS::Heat::SoftwareConfig resource. The process is similar to how we include and execute Bash scripts. However, instead of the group: script hook, we use the group: puppet hook. For example, you might have a Puppet manifest ( example-puppet-manifest.pp ) that enables an NFS Cinder backend using the official Cinder Puppet Module: This Puppet configuration creates a new resource using the cinder::backend::nfs defined type. To apply this resource through Heat, create a basic Heat template ( puppet-config.yaml ) that runs our Puppet manifest: , create an environment file ( puppet_config.yaml ) that registers our Heat template as the OS::TripleO::NodeExtraConfigPost resource type. This example is similar to using SoftwareConfig and SoftwareDeployments from the script hook example in the section. However, there are some differences in this example: We set group: puppet so that we execute the puppet hook. The config attribute uses the get_file attribute to refer to a Puppet manifest that contains our additional configuration. The options attribute contains some options specific to Puppet configurations: The enable_hiera option enables the Puppet configuration to use Hiera data. The enable_facter option enables the Puppet configuration to use system facts from the facter command. This example shows how to include a Puppet manifest as part of the software configuration for the Overcloud. This provides a way to apply certain configuration classes from existing Puppet modules on the Overcloud images, which helps you customize your Overcloud to use certain software and hardware. 5.8. Puppet: Customizing Hieradata for Roles The Heat template collection contains a set of parameters to pass extra configuration to certain node types. These parameters save the configuration as hieradata for the node's Puppet configuration. These parameters are: ControllerExtraConfig Configuration to add to all Controller nodes. ComputeExtraConfig Configuration to add to all Compute nodes. BlockStorageExtraConfig Configuration to add to all Block Storage nodes. ObjectStorageExtraConfig Configuration to add to all Object Storage nodes. CephStorageExtraConfig Configuration to add to all Ceph Storage nodes. [ROLE]ExtraConfig Configuration to add to a composable role. Replace [ROLE] with the composable role name. ExtraConfig Configuration to add to all nodes. To add extra configuration to the post-deployment configuration process, create an environment file that contains these parameters in the parameter_defaults section. For example, to increase the reserved memory for Compute hosts to 1024 MB and set the VNC keymap to Japanese: Include this environment file when running openstack overcloud deploy . Important You can only define each parameter once. Subsequent usage overrides values. 5.9. Adding Environment Files to an Overcloud Deployment After developing a set of environment files relevant to your custom configuration, include these files in your Overcloud deployment. This means running the openstack overcloud deploy command with the -e option, followed by the environment file. You can specify the -e option as many times as necessary for your customization. For example: Important Environment files are stacked in consecutive order. This means that each subsequent file stacks upon both the main Heat template collection and all environment files. This provides a way to override resource definitions. For example, if all environment files in an Overcloud deployment define the NodeExtraConfigPost resource, then Heat uses NodeExtraConfigPost defined in the last environment file. As a result, the order of the environment files is important. Make sure to order your environment files so they are processed and stacked correctly. Warning Any environment files added to the Overcloud using the -e option become part of your Overcloud's stack definition. The director requires these environment files for any post-deployment or re-deployment functions. Failure to include these files can result in damage to your Overcloud. | [
"heat_template_version: 2013-05-23 description: > A very basic Heat template. parameters: key_name: type: string default: lars description: Name of an existing key pair to use for the instance flavor: type: string description: Instance type for the instance to be created default: m1.small image: type: string default: cirros description: ID or name of the image to use for the instance resources: my_instance: type: OS::Nova::Server properties: name: My Cirros Instance image: { get_param: image } flavor: { get_param: flavor } key_name: { get_param: key_name } output: instance_name: description: Get the instance's name value: { get_attr: [ my_instance, name ] }",
"resource_registry: OS::Nova::Server::MyServer: myserver.yaml parameter_defaults: NetworkName: my_network parameters: MyIP: 192.168.0.1",
"git clone https://github.com/openstack/tripleo-heat-templates.git",
"heat_template_version: 2014-10-16 description: > Extra hostname configuration resources: userdata: type: OS::Heat::MultipartMime properties: parts: - config: {get_resource: nameserver_config} nameserver_config: type: OS::Heat::SoftwareConfig properties: config: | #!/bin/bash echo \"nameserver 192.168.1.1\" >> /etc/resolv.conf outputs: OS::stack_id: value: {get_resource: userdata}",
"resource_registry: OS::TripleO::NodeUserData: /home/stack/templates/nameserver.yaml",
"openstack overcloud deploy --templates -e /home/stack/templates/firstboot.yaml",
"#!/bin/bash echo \"nameserver 192.168.1.1\" >> /etc/resolve.conf",
"heat_template_version: 2014-10-16 description: > Extra hostname configuration parameters: server: type: json nameserver_ip: type: string DeployIdentifier: type: string resources: CustomExtraConfigPre: type: OS::Heat::SoftwareConfig properties: group: script config: str_replace: template: | #!/bin/sh echo \"nameserver _NAMESERVER_IP_\" > /etc/resolv.conf params: _NAMESERVER_IP_: {get_param: nameserver_ip} CustomExtraDeploymentPre: type: OS::Heat::SoftwareDeployment properties: server: {get_param: server} config: {get_resource: CustomExtraConfigPre} actions: ['CREATE','UPDATE'] input_values: deploy_identifier: {get_param: DeployIdentifier} outputs: deploy_stdout: description: Deployment reference, used to trigger pre-deploy on changes value: {get_attr: [CustomExtraDeploymentPre, deploy_stdout]}",
"resource_registry: OS::TripleO::ControllerExtraConfigPre: /home/stack/templates/nameserver.yaml parameter_defaults: nameserver_ip: 192.168.1.1",
"openstack overcloud deploy --templates -e /home/stack/templates/pre_config.yaml",
"#!/bin/sh echo \"nameserver 192.168.1.1\" >> /etc/resolve.conf",
"heat_template_version: 2014-10-16 description: > Extra hostname configuration parameters: server: type: string nameserver_ip: type: string DeployIdentifier: type: string resources: CustomExtraConfigPre: type: OS::Heat::SoftwareConfig properties: group: script config: str_replace: template: | #!/bin/sh echo \"nameserver _NAMESERVER_IP_\" >> /etc/resolv.conf params: _NAMESERVER_IP_: {get_param: nameserver_ip} CustomExtraDeploymentPre: type: OS::Heat::SoftwareDeployment properties: server: {get_param: server} config: {get_resource: CustomExtraConfigPre} actions: ['CREATE','UPDATE'] input_values: deploy_identifier: {get_param: DeployIdentifier} outputs: deploy_stdout: description: Deployment reference, used to trigger pre-deploy on changes value: {get_attr: [CustomExtraDeploymentPre, deploy_stdout]}",
"resource_registry: OS::TripleO::NodeExtraConfig: /home/stack/templates/nameserver.yaml parameter_defaults: nameserver_ip: 192.168.1.1",
"openstack overcloud deploy --templates -e /home/stack/templates/pre_config.yaml",
"#!/bin/sh echo \"nameserver 192.168.1.1\" >> /etc/resolve.conf",
"heat_template_version: 2014-10-16 description: > Extra hostname configuration parameters: servers: type: json nameserver_ip: type: string DeployIdentifier: type: string EndpointMap: default: {} type: json resources: CustomExtraConfig: type: OS::Heat::SoftwareConfig properties: group: script config: str_replace: template: | #!/bin/sh echo \"nameserver _NAMESERVER_IP_\" >> /etc/resolv.conf params: _NAMESERVER_IP_: {get_param: nameserver_ip} CustomExtraDeployments: type: OS::Heat::SoftwareDeploymentGroup properties: servers: {get_param: servers} config: {get_resource: CustomExtraConfig} actions: ['CREATE','UPDATE'] input_values: deploy_identifier: {get_param: DeployIdentifier}",
"resource_registry: OS::TripleO::NodeExtraConfigPost: /home/stack/templates/nameserver.yaml parameter_defaults: nameserver_ip: 192.168.1.1",
"openstack overcloud deploy --templates -e /home/stack/templates/post_config.yaml",
"#!/bin/sh echo \"nameserver 192.168.1.1\" >> /etc/resolve.conf",
"cinder::backend::nfs { 'mynfsserver': nfs_servers => ['192.168.1.200:/storage'], }",
"heat_template_version: 2014-10-16 parameters: servers: type: json resources: ExtraPuppetConfig: type: OS::Heat::SoftwareConfig properties: group: puppet config: get_file: example-puppet-manifest.pp options: enable_hiera: True enable_facter: False ExtraPuppetDeployment: type: OS::Heat::SoftwareDeployments properties: config: {get_resource: ExtraPuppetConfig} servers: {get_param: servers} actions: ['CREATE','UPDATE']",
"resource_registry: OS::TripleO::NodeExtraConfigPost: puppet_config.yaml",
"parameter_defaults: ComputeExtraConfig: nova::compute::reserved_host_memory: 1024 nova::compute::vnc_keymap: ja",
"openstack overcloud deploy --templates -e network-configuration.yaml -e storage-configuration.yaml -e first-boot.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/partner_integration/orchestration |
Chapter 1. Overview | Chapter 1. Overview This guide describes how to manage Instance High Availability (Instance HA) . Instance HA allows Red Hat OpenStack Platform to automatically evacuate and re-spawn instances on a different Compute node when their host Compute node fails. The evacuation process that is triggered by Instance HA is similar to what users can do manually, as described in Evacuate Instances . Instance HA works on shared storage or local storage environments, which means that evacuated instances maintain the same network configuration (static IP, floating IP, and so on) and the same characteristics inside the new host, even if they are spawned from scratch. Instance HA is managed by the following resource agents: Agent name Name inside cluster Role fence_compute fence-nova Marks a Compute node for evacuation when the node becomes unavailable. NovaEvacuate nova-evacuate Evacuates instances from failed nodes. This agent runs on one of the Controller nodes. Dummy compute-unfence-trigger Releases a fenced node and enables the node to run instances again. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/high_availability_for_compute_instances/instanceha-overview |
Chapter 1. Network Observability Operator release notes | Chapter 1. Network Observability Operator release notes The Network Observability Operator enables administrators to observe and analyze network traffic flows for OpenShift Container Platform clusters. These release notes track the development of the Network Observability Operator in the OpenShift Container Platform. For an overview of the Network Observability Operator, see About Network Observability Operator . 1.1. Network Observability Operator 1.4.2 The following advisory is available for the Network Observability Operator 1.4.2: 2023:6787 Network Observability Operator 1.4.2 1.1.1. CVEs 2023-39325 2023-44487 1.2. Network Observability Operator 1.4.1 The following advisory is available for the Network Observability Operator 1.4.1: 2023:5974 Network Observability Operator 1.4.1 1.2.1. CVEs 2023-44487 2023-39325 2023-29406 2023-29409 2023-39322 2023-39318 2023-39319 2023-39321 1.2.2. Bug fixes In 1.4, there was a known issue when sending network flow data to Kafka. The Kafka message key was ignored, causing an error with connection tracking. Now the key is used for partitioning, so each flow from the same connection is sent to the same processor. ( NETOBSERV-926 ) In 1.4, the Inner flow direction was introduced to account for flows between pods running on the same node. Flows with the Inner direction were not taken into account in the generated Prometheus metrics derived from flows, resulting in under-evaluated bytes and packets rates. Now, derived metrics are including flows with the Inner direction, providing correct bytes and packets rates. ( NETOBSERV-1344 ) 1.3. Network Observability Operator 1.4.0 The following advisory is available for the Network Observability Operator 1.4.0: RHSA-2023:5379 Network Observability Operator 1.4.0 1.3.1. Channel removal You must switch your channel from v1.0.x to stable to receive the latest Operator updates. The v1.0.x channel is now removed. 1.3.2. New features and enhancements 1.3.2.1. Notable enhancements The 1.4 release of the Network Observability Operator adds improvements and new capabilities to the OpenShift Container Platform web console plugin and the Operator configuration. Web console enhancements: In the Query Options , the Duplicate flows checkbox is added to choose whether or not to show duplicated flows. You can now filter source and destination traffic with One-way , Back-and-forth , and Swap filters. The Network Observability metrics dashboards in Observe Dashboards NetObserv and NetObserv / Health are modified as follows: The NetObserv dashboard shows top bytes, packets sent, packets received per nodes, namespaces, and workloads. Flow graphs are removed from this dashboard. The NetObserv / Health dashboard shows flows overhead as well as top flow rates per nodes, namespaces, and workloads. Infrastructure and Application metrics are shown in a split-view for namespaces and workloads. For more information, see Network Observability metrics and Quick filters . Configuration enhancements: You now have the option to specify different namespaces for any configured ConfigMap or Secret reference, such as in certificates configuration. The spec.processor.clusterName parameter is added so that the name of the cluster appears in the flows data. This is useful in a multi-cluster context. When using OpenShift Container Platform, leave empty to make it automatically determined. For more information, see Flow Collector sample resource and Flow Collector API Reference . 1.3.2.2. Network Observability without Loki The Network Observability Operator is now functional and usable without Loki. If Loki is not installed, it can only export flows to KAFKA or IPFIX format and provide metrics in the Network Observability metrics dashboards. For more information, see Network Observability without Loki . 1.3.2.3. DNS tracking In 1.4, the Network Observability Operator makes use of eBPF tracepoint hooks to enable DNS tracking. You can monitor your network, conduct security analysis, and troubleshoot DNS issues in the Network Traffic and Overview pages in the web console. For more information, see Configuring DNS tracking and Working with DNS tracking . 1.3.2.4. SR-IOV support You can now collect traffic from a cluster with Single Root I/O Virtualization (SR-IOV) device. For more information, see Configuring the monitoring of SR-IOV interface traffic . 1.3.2.5. IPFIX exporter support You can now export eBPF-enriched network flows to the IPFIX collector. For more information, see Export enriched network flow data . 1.3.2.6. s390x architecture support Network Observability Operator can now run on s390x architecture. Previously it ran on amd64 , ppc64le , or arm64 . 1.3.3. Bug fixes Previously, the Prometheus metrics exported by Network Observability were computed out of potentially duplicated network flows. In the related dashboards, from Observe Dashboards , this could result in potentially doubled rates. Note that dashboards from the Network Traffic view were not affected. Now, network flows are filtered to eliminate duplicates prior to metrics calculation, which results in correct traffic rates displayed in the dashboards. ( NETOBSERV-1131 ) Previously, the Network Observability Operator agents were not able to capture traffic on network interfaces when configured with Multus or SR-IOV, non-default network namespaces. Now, all available network namespaces are recognized and used for capturing flows, allowing capturing traffic for SR-IOV. There are configurations needed for the FlowCollector and SRIOVnetwork custom resource to collect traffic. ( NETOBSERV-1283 ) Previously, in the Network Observability Operator details from Operators Installed Operators , the FlowCollector Status field might have reported incorrect information about the state of the deployment. The status field now shows the proper conditions with improved messages. The history of events is kept, ordered by event date. ( NETOBSERV-1224 ) Previously, during spikes of network traffic load, certain eBPF pods were OOM-killed and went into a CrashLoopBackOff state. Now, the eBPF agent memory footprint is improved, so pods are not OOM-killed and entering a CrashLoopBackOff state. ( NETOBSERV-975 ) Previously when processor.metrics.tls was set to PROVIDED the insecureSkipVerify option value was forced to be true . Now you can set insecureSkipVerify to true or false , and provide a CA certificate if needed. ( NETOBSERV-1087 ) 1.3.4. Known issues Since the 1.2.0 release of the Network Observability Operator, using Loki Operator 5.6, a Loki certificate change periodically affects the flowlogs-pipeline pods and results in dropped flows rather than flows written to Loki. The problem self-corrects after some time, but it still causes temporary flow data loss during the Loki certificate change. This issue has only been observed in large-scale environments of 120 nodes or greater. ( NETOBSERV-980 ) Currently, when spec.agent.ebpf.features includes DNSTracking, larger DNS packets require the eBPF agent to look for DNS header outside of the 1st socket buffer (SKB) segment. A new eBPF agent helper function needs to be implemented to support it. Currently, there is no workaround for this issue. ( NETOBSERV-1304 ) Currently, when spec.agent.ebpf.features includes DNSTracking, DNS over TCP packets requires the eBPF agent to look for DNS header outside of the 1st SKB segment. A new eBPF agent helper function needs to be implemented to support it. Currently, there is no workaround for this issue. ( NETOBSERV-1245 ) Currently, when using a KAFKA deployment model, if conversation tracking is configured, conversation events might be duplicated across Kafka consumers, resulting in inconsistent tracking of conversations, and incorrect volumetric data. For that reason, it is not recommended to configure conversation tracking when deploymentModel is set to KAFKA . ( NETOBSERV-926 ) Currently, when the processor.metrics.server.tls.type is configured to use a PROVIDED certificate, the operator enters an unsteady state that might affect its performance and resource consumption. It is recommended to not use a PROVIDED certificate until this issue is resolved, and instead using an auto-generated certificate, setting processor.metrics.server.tls.type to AUTO . ( NETOBSERV-1293 1.4. Network Observability Operator 1.3.0 The following advisory is available for the Network Observability Operator 1.3.0: RHSA-2023:3905 Network Observability Operator 1.3.0 1.4.1. Channel deprecation You must switch your channel from v1.0.x to stable to receive future Operator updates. The v1.0.x channel is deprecated and planned for removal in the release. 1.4.2. New features and enhancements 1.4.2.1. Multi-tenancy in Network Observability System administrators can allow and restrict individual user access, or group access, to the flows stored in Loki. For more information, see Multi-tenancy in Network Observability . 1.4.2.2. Flow-based metrics dashboard This release adds a new dashboard, which provides an overview of the network flows in your OpenShift Container Platform cluster. For more information, see Network Observability metrics . 1.4.2.3. Troubleshooting with the must-gather tool Information about the Network Observability Operator can now be included in the must-gather data for troubleshooting. For more information, see Network Observability must-gather . 1.4.2.4. Multiple architectures now supported Network Observability Operator can now run on an amd64 , ppc64le , or arm64 architectures. Previously, it only ran on amd64 . 1.4.3. Deprecated features 1.4.3.1. Deprecated configuration parameter setting The release of Network Observability Operator 1.3 deprecates the spec.Loki.authToken HOST setting. When using the Loki Operator, you must now only use the FORWARD setting. 1.4.4. Bug fixes Previously, when the Operator was installed from the CLI, the Role and RoleBinding that are necessary for the Cluster Monitoring Operator to read the metrics were not installed as expected. The issue did not occur when the operator was installed from the web console. Now, either way of installing the Operator installs the required Role and RoleBinding . ( NETOBSERV-1003 ) Since version 1.2, the Network Observability Operator can raise alerts when a problem occurs with the flows collection. Previously, due to a bug, the related configuration to disable alerts, spec.processor.metrics.disableAlerts was not working as expected and sometimes ineffectual. Now, this configuration is fixed so that it is possible to disable the alerts. ( NETOBSERV-976 ) Previously, when Network Observability was configured with spec.loki.authToken set to DISABLED , only a kubeadmin cluster administrator was able to view network flows. Other types of cluster administrators received authorization failure. Now, any cluster administrator is able to view network flows. ( NETOBSERV-972 ) Previously, a bug prevented users from setting spec.consolePlugin.portNaming.enable to false . Now, this setting can be set to false to disable port-to-service name translation. ( NETOBSERV-971 ) Previously, the metrics exposed by the console plugin were not collected by the Cluster Monitoring Operator (Prometheus), due to an incorrect configuration. Now the configuration has been fixed so that the console plugin metrics are correctly collected and accessible from the OpenShift Container Platform web console. ( NETOBSERV-765 ) Previously, when processor.metrics.tls was set to AUTO in the FlowCollector , the flowlogs-pipeline servicemonitor did not adapt the appropriate TLS scheme, and metrics were not visible in the web console. Now the issue is fixed for AUTO mode. ( NETOBSERV-1070 ) Previously, certificate configuration, such as used for Kafka and Loki, did not allow specifying a namespace field, implying that the certificates had to be in the same namespace where Network Observability is deployed. Moreover, when using Kafka with TLS/mTLS, the user had to manually copy the certificate(s) to the privileged namespace where the eBPF agent pods are deployed and manually manage certificate updates, such as in the case of certificate rotation. Now, Network Observability setup is simplified by adding a namespace field for certificates in the FlowCollector resource. As a result, users can now install Loki or Kafka in different namespaces without needing to manually copy their certificates in the Network Observability namespace. The original certificates are watched so that the copies are automatically updated when needed. ( NETOBSERV-773 ) Previously, the SCTP, ICMPv4 and ICMPv6 protocols were not covered by the Network Observability agents, resulting in a less comprehensive network flows coverage. These protocols are now recognized to improve the flows coverage. ( NETOBSERV-934 ) 1.4.5. Known issues When processor.metrics.tls is set to PROVIDED in the FlowCollector , the flowlogs-pipeline servicemonitor is not adapted to the TLS scheme. ( NETOBSERV-1087 ) Since the 1.2.0 release of the Network Observability Operator, using Loki Operator 5.6, a Loki certificate change periodically affects the flowlogs-pipeline pods and results in dropped flows rather than flows written to Loki. The problem self-corrects after some time, but it still causes temporary flow data loss during the Loki certificate change. This issue has only been observed in large-scale environments of 120 nodes or greater.( NETOBSERV-980 ) 1.5. Network Observability Operator 1.2.0 The following advisory is available for the Network Observability Operator 1.2.0: RHSA-2023:1817 Network Observability Operator 1.2.0 1.5.1. Preparing for the update The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. Until the 1.2 release of the Network Observability Operator, the only channel available was v1.0.x . The 1.2 release of the Network Observability Operator introduces the stable update channel for tracking and receiving updates. You must switch your channel from v1.0.x to stable to receive future Operator updates. The v1.0.x channel is deprecated and planned for removal in a following release. 1.5.2. New features and enhancements 1.5.2.1. Histogram in Traffic Flows view You can now choose to show a histogram bar chart of flows over time. The histogram enables you to visualize the history of flows without hitting the Loki query limit. For more information, see Using the histogram . 1.5.2.2. Conversation tracking You can now query flows by Log Type , which enables grouping network flows that are part of the same conversation. For more information, see Working with conversations . 1.5.2.3. Network Observability health alerts The Network Observability Operator now creates automatic alerts if the flowlogs-pipeline is dropping flows because of errors at the write stage or if the Loki ingestion rate limit has been reached. For more information, see Viewing health information . 1.5.3. Bug fixes Previously, after changing the namespace value in the FlowCollector spec, eBPF agent pods running in the namespace were not appropriately deleted. Now, the pods running in the namespace are appropriately deleted. ( NETOBSERV-774 ) Previously, after changing the caCert.name value in the FlowCollector spec (such as in Loki section), FlowLogs-Pipeline pods and Console plug-in pods were not restarted, therefore they were unaware of the configuration change. Now, the pods are restarted, so they get the configuration change. ( NETOBSERV-772 ) Previously, network flows between pods running on different nodes were sometimes not correctly identified as being duplicates because they are captured by different network interfaces. This resulted in over-estimated metrics displayed in the console plug-in. Now, flows are correctly identified as duplicates, and the console plug-in displays accurate metrics. ( NETOBSERV-755 ) The "reporter" option in the console plug-in is used to filter flows based on the observation point of either source node or destination node. Previously, this option mixed the flows regardless of the node observation point. This was due to network flows being incorrectly reported as Ingress or Egress at the node level. Now, the network flow direction reporting is correct. The "reporter" option filters for source observation point, or destination observation point, as expected. ( NETOBSERV-696 ) Previously, for agents configured to send flows directly to the processor as gRPC+protobuf requests, the submitted payload could be too large and is rejected by the processors' GRPC server. This occurred under very-high-load scenarios and with only some configurations of the agent. The agent logged an error message, such as: grpc: received message larger than max . As a consequence, there was information loss about those flows. Now, the gRPC payload is split into several messages when the size exceeds a threshold. As a result, the server maintains connectivity. ( NETOBSERV-617 ) 1.5.4. Known issue In the 1.2.0 release of the Network Observability Operator, using Loki Operator 5.6, a Loki certificate transition periodically affects the flowlogs-pipeline pods and results in dropped flows rather than flows written to Loki. The problem self-corrects after some time, but it still causes temporary flow data loss during the Loki certificate transition. ( NETOBSERV-980 ) 1.5.5. Notable technical changes Previously, you could install the Network Observability Operator using a custom namespace. This release introduces the conversion webhook which changes the ClusterServiceVersion . Because of this change, all the available namespaces are no longer listed. Additionally, to enable Operator metrics collection, namespaces that are shared with other Operators, like the openshift-operators namespace, cannot be used. Now, the Operator must be installed in the openshift-netobserv-operator namespace. You cannot automatically upgrade to the new Operator version if you previously installed the Network Observability Operator using a custom namespace. If you previously installed the Operator using a custom namespace, you must delete the instance of the Operator that was installed and re-install your operator in the openshift-netobserv-operator namespace. It is important to note that custom namespaces, such as the commonly used netobserv namespace, are still possible for the FlowCollector , Loki, Kafka, and other plug-ins. ( NETOBSERV-907 )( NETOBSERV-956 ) 1.6. Network Observability Operator 1.1.0 The following advisory is available for the Network Observability Operator 1.1.0: RHSA-2023:0786 Network Observability Operator Security Advisory Update The Network Observability Operator is now stable and the release channel is upgraded to v1.1.0 . 1.6.1. Bug fix Previously, unless the Loki authToken configuration was set to FORWARD mode, authentication was no longer enforced, allowing any user who could connect to the OpenShift Container Platform console in an OpenShift Container Platform cluster to retrieve flows without authentication. Now, regardless of the Loki authToken mode, only cluster administrators can retrieve flows. ( BZ#2169468 ) | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/network_observability/network-observability-operator-release-notes |
8.150. perl-Test-MockObject | 8.150. perl-Test-MockObject 8.150.1. RHBA-2013:0836 - perl-Test-MockObject bug fix update Updated perl-Test-MockObject packages that fix one bug are now available for Red Hat Enterprise Linux 6. Test::MockObject is a highly polymorphic testing object, capable of looking like all sorts of objects. This makes white-box testing much easier, as you can concentrate on what the code being tested sends to and receives from the mocked object, instead of worrying about making up your own data. Bug Fix BZ# 661804 Building a perl-Test-MockObject source RPM package without an installed perl-CGI package failed on test execution. To fix this bug, build-time dependencies on the CGI, Test::Builder, and Test::More Perl modules have been declared in the RPM package. As a result, it is possible to rebuild the perl-Test-MockObject source RPM package in a minimal environment. Users of perl-Test-MockObject are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/perl-test-mockobject |
Chapter 174. Jetty 9 Component | Chapter 174. Jetty 9 Component Available as of Camel version 1.2 Warning The producer is deprecated - do not use. We only recommend using jetty as consumer (eg from jetty) The jetty component provides HTTP-based endpoints for consuming and producing HTTP requests. That is, the Jetty component behaves as a simple Web server. Jetty can also be used as a http client which mean you can also use it with Camel as a producer. Stream The assert call appears in this example, because the code is part of an unit test.Jetty is stream based, which means the input it receives is submitted to Camel as a stream. That means you will only be able to read the content of the stream once . If you find a situation where the message body appears to be empty or you need to access the Exchange.HTTP_RESPONSE_CODE data multiple times (e.g.: doing multicasting, or redelivery error handling), you should use Stream caching or convert the message body to a String which is safe to be re-read multiple times. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jetty</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 174.1. URI format jetty:http://hostname[:port][/resourceUri][?options] You can append query options to the URI in the following format, ?option=value&option=value&... 174.2. Options The Jetty 9 component supports 33 options, which are listed below. Name Description Default Type sslKeyPassword (security) The key password, which is used to access the certificate's key entry in the keystore (this is the same password that is supplied to the keystore command's -keypass option). String sslPassword (security) The ssl password, which is required to access the keystore file (this is the same password that is supplied to the keystore command's -storepass option). String keystore (security) Specifies the location of the Java keystore file, which contains the Jetty server's own X.509 certificate in a key entry. String errorHandler (advanced) This option is used to set the ErrorHandler that Jetty server uses. ErrorHandler sslSocketConnectors (security) A map which contains per port number specific SSL connectors. Map socketConnectors (security) A map which contains per port number specific HTTP connectors. Uses the same principle as sslSocketConnectors. Map httpClientMinThreads (producer) To set a value for minimum number of threads in HttpClient thread pool. Notice that both a min and max size must be configured. Integer httpClientMaxThreads (producer) To set a value for maximum number of threads in HttpClient thread pool. Notice that both a min and max size must be configured. Integer minThreads (consumer) To set a value for minimum number of threads in server thread pool. Notice that both a min and max size must be configured. Integer maxThreads (consumer) To set a value for maximum number of threads in server thread pool. Notice that both a min and max size must be configured. Integer threadPool (consumer) To use a custom thread pool for the server. This option should only be used in special circumstances. ThreadPool enableJmx (common) If this option is true, Jetty JMX support will be enabled for this endpoint. false boolean jettyHttpBinding (advanced) To use a custom org.apache.camel.component.jetty.JettyHttpBinding, which are used to customize how a response should be written for the producer. JettyHttpBinding httpBinding (advanced) Not to be used - use JettyHttpBinding instead. HttpBinding httpConfiguration (advanced) Jetty component does not use HttpConfiguration. HttpConfiguration mbContainer (advanced) To use a existing configured org.eclipse.jetty.jmx.MBeanContainer if JMX is enabled that Jetty uses for registering mbeans. MBeanContainer sslSocketConnector Properties (security) A map which contains general SSL connector properties. Map socketConnector Properties (security) A map which contains general HTTP connector properties. Uses the same principle as sslSocketConnectorProperties. Map continuationTimeout (consumer) Allows to set a timeout in millis when using Jetty as consumer (server). By default Jetty uses 30000. You can use a value of = 0 to never expire. If a timeout occurs then the request will be expired and Jetty will return back a http error 503 to the client. This option is only in use when using Jetty with the Asynchronous Routing Engine. 30000 Long useContinuation (consumer) Whether or not to use Jetty continuations for the Jetty Server. true boolean sslContextParameters (security) To configure security using SSLContextParameters SSLContextParameters useGlobalSslContext Parameters (security) Enable usage of global SSL context parameters false boolean responseBufferSize (common) Allows to configure a custom value of the response buffer size on the Jetty connectors. Integer requestBufferSize (common) Allows to configure a custom value of the request buffer size on the Jetty connectors. Integer requestHeaderSize (common) Allows to configure a custom value of the request header size on the Jetty connectors. Integer responseHeaderSize (common) Allows to configure a custom value of the response header size on the Jetty connectors. Integer proxyHost (proxy) To use a http proxy to configure the hostname. String proxyPort (proxy) To use a http proxy to configure the port number. Integer useXForwardedFor Header (common) To use the X-Forwarded-For header in HttpServletRequest.getRemoteAddr. false boolean sendServerVersion (consumer) If the option is true, jetty server will send the date header to the client which sends the request. NOTE please make sure there is no any other camel-jetty endpoint is share the same port, otherwise this option may not work as expected. true boolean allowJavaSerialized Object (advanced) Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Jetty 9 endpoint is configured using URI syntax: with the following path and query parameters: 174.2.1. Path Parameters (1 parameters): Name Description Default Type httpUri Required The url of the HTTP endpoint to call. URI 174.2.2. Query Parameters (54 parameters): Name Description Default Type chunked (common) If this option is false the Servlet will disable the HTTP streaming and set the content-length header on the response true boolean disableStreamCache (common) Determines whether or not the raw input stream from Servlet is cached or not (Camel will read the stream into a in memory/overflow to file, Stream caching) cache. By default Camel will cache the Servlet input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http/http4 producer will by default cache the response body stream. If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is as the message body. false boolean enableMultipartFilter (common) Whether Jetty org.eclipse.jetty.servlets.MultiPartFilter is enabled or not. You should set this value to false when bridging endpoints, to ensure multipart requests is proxied/bridged as well. false boolean headerFilterStrategy (common) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy transferException (common) If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was send back serialized in the response as a application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean httpBinding (common) To use a custom HttpBinding to control the mapping between Camel message and HttpClient. HttpBinding async (consumer) Configure the consumer to work in async mode false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean continuationTimeout (consumer) Allows to set a timeout in millis when using Jetty as consumer (server). By default Jetty uses 30000. You can use a value of = 0 to never expire. If a timeout occurs then the request will be expired and Jetty will return back a http error 503 to the client. This option is only in use when using Jetty with the Asynchronous Routing Engine. 30000 Long enableCORS (consumer) If the option is true, Jetty server will setup the CrossOriginFilter which supports the CORS out of box. false boolean enableJmx (consumer) If this option is true, Jetty JMX support will be enabled for this endpoint. See Jetty JMX support for more details. false boolean httpMethodRestrict (consumer) Used to only allow consuming if the HttpMethod matches, such as GET/POST/PUT etc. Multiple methods can be specified separated by comma. String matchOnUriPrefix (consumer) Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found. false boolean responseBufferSize (consumer) To use a custom buffer size on the javax.servlet.ServletResponse. Integer sendDateHeader (consumer) If the option is true, jetty server will send the date header to the client which sends the request. NOTE please make sure there is no any other camel-jetty endpoint is share the same port, otherwise this option may not work as expected. false boolean sendServerVersion (consumer) If the option is true, jetty will send the server header with the jetty version information to the client which sends the request. NOTE please make sure there is no any other camel-jetty endpoint is share the same port, otherwise this option may not work as expected. true boolean sessionSupport (consumer) Specifies whether to enable the session manager on the server side of Jetty. false boolean useContinuation (consumer) Whether or not to use Jetty continuations for the Jetty Server. Boolean eagerCheckContentAvailable (consumer) Whether to eager check whether the HTTP requests has content if the content-length header is 0 or not present. This can be turned on in case HTTP clients do not send streamed data. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern filterInitParameters (consumer) Configuration of the filter init parameters. These parameters will be applied to the filter list before starting the jetty server. Map filtersRef (consumer) Allows using a custom filters which is putted into a list and can be find in the Registry. Multiple values can be separated by comma. String handlers (consumer) Specifies a comma-delimited set of Handler instances to lookup in your Registry. These handlers are added to the Jetty servlet context (for example, to add security). Important: You can not use different handlers with different Jetty endpoints using the same port number. The handlers is associated to the port number. If you need different handlers, then use different port numbers. String httpBindingRef (consumer) Deprecated Option to disable throwing the HttpOperationFailedException in case of failed responses from the remote server. This allows you to get all responses regardless of the HTTP status code. String multipartFilter (consumer) Allows using a custom multipart filter. Note: setting multipartFilterRef forces the value of enableMultipartFilter to true. Filter multipartFilterRef (consumer) Deprecated Allows using a custom multipart filter. Note: setting multipartFilterRef forces the value of enableMultipartFilter to true. String optionsEnabled (consumer) Specifies whether to enable HTTP OPTIONS for this Servlet consumer. By default OPTIONS is turned off. false boolean traceEnabled (consumer) Specifies whether to enable HTTP TRACE for this Servlet consumer. By default TRACE is turned off. false boolean bridgeEndpoint (producer) If the option is true, HttpProducer will ignore the Exchange.HTTP_URI header, and use the endpoint's URI for request. You may also set the option throwExceptionOnFailure to be false to let the HttpProducer send all the fault response back. false boolean connectionClose (producer) Specifies whether a Connection Close header must be added to HTTP Request. By default connectionClose is false. false boolean cookieHandler (producer) Configure a cookie handler to maintain a HTTP session CookieHandler copyHeaders (producer) If this option is true then IN exchange headers will be copied to OUT exchange headers according to copy strategy. Setting this to false, allows to only include the headers from the HTTP response (not propagating IN headers). true boolean httpClientMaxThreads (producer) To set a value for maximum number of threads in HttpClient thread pool. This setting override any setting configured on component level. Notice that both a min and max size must be configured. If not set it default to max 254 threads used in Jettys thread pool. 254 Integer httpClientMinThreads (producer) To set a value for minimum number of threads in HttpClient thread pool. This setting override any setting configured on component level. Notice that both a min and max size must be configured. If not set it default to min 8 threads used in Jettys thread pool. 8 Integer httpMethod (producer) Configure the HTTP method to use. The HttpMethod header cannot override this option if set. HttpMethods ignoreResponseBody (producer) If this option is true, The http producer won't read response body and cache the input stream false boolean preserveHostHeader (producer) If the option is true, HttpProducer will set the Host header to the value contained in the current exchange Host header, useful in reverse proxy applications where you want the Host header received by the downstream server to reflect the URL called by the upstream client, this allows applications which use the Host header to generate accurate URL's for a proxied service false boolean throwExceptionOnFailure (producer) Option to disable throwing the HttpOperationFailedException in case of failed responses from the remote server. This allows you to get all responses regardless of the HTTP status code. true boolean httpClient (producer) Sets a shared HttpClient to use for all producers created by this endpoint. By default each producer will use a new http client, and not share. Important: Make sure to handle the lifecycle of the shared client, such as stopping the client, when it is no longer in use. Camel will call the start method on the client to ensure its started when this endpoint creates a producer. This options should only be used in special circumstances. HttpClient httpClientParameters (producer) Configuration of Jetty's HttpClient. For example, setting httpClient.idleTimeout=30000 sets the idle timeout to 30 seconds. And httpClient.timeout=30000 sets the request timeout to 30 seconds, in case you want to timeout sooner if you have long running request/response calls. Map jettyBinding (producer) To use a custom JettyHttpBinding which be used to customize how a response should be written for the producer. JettyHttpBinding jettyBindingRef (producer) Deprecated To use a custom JettyHttpBinding which be used to customize how a response should be written for the producer. String okStatusCodeRange (producer) The status codes which are considered a success response. The values are inclusive. Multiple ranges can be defined, separated by comma, e.g. 200-204,209,301-304. Each range must be a single number or from-to with the dash included. 200-299 String urlRewrite (producer) Deprecated Refers to a custom org.apache.camel.component.http.UrlRewrite which allows you to rewrite urls when you bridge/proxy endpoints. See more details at http://camel.apache.org/urlrewrite.html UrlRewrite mapHttpMessageBody (advanced) If this option is true then IN exchange Body of the exchange will be mapped to HTTP body. Setting this to false will avoid the HTTP mapping. true boolean mapHttpMessageFormUrl EncodedBody (advanced) If this option is true then IN exchange Form Encoded body of the exchange will be mapped to HTTP. Setting this to false will avoid the HTTP Form Encoded body mapping. true boolean mapHttpMessageHeaders (advanced) If this option is true then IN exchange Headers of the exchange will be mapped to HTTP headers. Setting this to false will avoid the HTTP Headers mapping. true boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean proxyAuthScheme (proxy) Proxy authentication scheme to use String proxyHost (proxy) Proxy hostname to use String proxyPort (proxy) Proxy port to use int authHost (security) Authentication host to use with NTML String sslContextParameters (security) To configure security using SSLContextParameters SSLContextParameters 174.3. Spring Boot Auto-Configuration The component supports 34 options, which are listed below. Name Description Default Type camel.component.jetty.allow-java-serialized-object Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false Boolean camel.component.jetty.continuation-timeout Allows to set a timeout in millis when using Jetty as consumer (server). By default Jetty uses 30000. You can use a value of = 0 to never expire. If a timeout occurs then the request will be expired and Jetty will return back a http error 503 to the client. This option is only in use when using Jetty with the Asynchronous Routing Engine. 30000 Long camel.component.jetty.enable-jmx If this option is true, Jetty JMX support will be enabled for this endpoint. false Boolean camel.component.jetty.enabled Enable jetty component true Boolean camel.component.jetty.error-handler This option is used to set the ErrorHandler that Jetty server uses. The option is a org.eclipse.jetty.server.handler.ErrorHandler type. String camel.component.jetty.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. String camel.component.jetty.http-binding Not to be used - use JettyHttpBinding instead. The option is a org.apache.camel.http.common.HttpBinding type. String camel.component.jetty.http-client-max-threads To set a value for maximum number of threads in HttpClient thread pool. Notice that both a min and max size must be configured. Integer camel.component.jetty.http-client-min-threads To set a value for minimum number of threads in HttpClient thread pool. Notice that both a min and max size must be configured. Integer camel.component.jetty.http-configuration Jetty component does not use HttpConfiguration. The option is a org.apache.camel.http.common.HttpConfiguration type. String camel.component.jetty.jetty-http-binding To use a custom org.apache.camel.component.jetty.JettyHttpBinding, which are used to customize how a response should be written for the producer. The option is a org.apache.camel.component.jetty.JettyHttpBinding type. String camel.component.jetty.keystore Specifies the location of the Java keystore file, which contains the Jetty server's own X.509 certificate in a key entry. String camel.component.jetty.max-threads To set a value for maximum number of threads in server thread pool. Notice that both a min and max size must be configured. Integer camel.component.jetty.mb-container To use a existing configured org.eclipse.jetty.jmx.MBeanContainer if JMX is enabled that Jetty uses for registering mbeans. The option is a org.eclipse.jetty.jmx.MBeanContainer type. String camel.component.jetty.min-threads To set a value for minimum number of threads in server thread pool. Notice that both a min and max size must be configured. Integer camel.component.jetty.proxy-host To use a http proxy to configure the hostname. String camel.component.jetty.proxy-port To use a http proxy to configure the port number. Integer camel.component.jetty.request-buffer-size Allows to configure a custom value of the request buffer size on the Jetty connectors. Integer camel.component.jetty.request-header-size Allows to configure a custom value of the request header size on the Jetty connectors. Integer camel.component.jetty.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.jetty.response-buffer-size Allows to configure a custom value of the response buffer size on the Jetty connectors. Integer camel.component.jetty.response-header-size Allows to configure a custom value of the response header size on the Jetty connectors. Integer camel.component.jetty.send-server-version If the option is true, jetty server will send the date header to the client which sends the request. NOTE please make sure there is no any other camel-jetty endpoint is share the same port, otherwise this option may not work as expected. true Boolean camel.component.jetty.socket-connector-properties A map which contains general HTTP connector properties. Uses the same principle as sslSocketConnectorProperties. The option is a java.util.Map<java.lang.String,java.lang.Object> type. String camel.component.jetty.socket-connectors A map which contains per port number specific HTTP connectors. Uses the same principle as sslSocketConnectors. The option is a java.util.Map<java.lang.Integer,org.eclipse.jetty.server.Connector> type. String camel.component.jetty.ssl-context-parameters To configure security using SSLContextParameters. The option is a org.apache.camel.util.jsse.SSLContextParameters type. String camel.component.jetty.ssl-key-password The key password, which is used to access the certificate's key entry in the keystore (this is the same password that is supplied to the keystore command's -keypass option). String camel.component.jetty.ssl-password The ssl password, which is required to access the keystore file (this is the same password that is supplied to the keystore command's -storepass option). String camel.component.jetty.ssl-socket-connector-properties A map which contains general SSL connector properties. The option is a java.util.Map<java.lang.String,java.lang.Object> type. String camel.component.jetty.ssl-socket-connectors A map which contains per port number specific SSL connectors. The option is a java.util.Map<java.lang.Integer,org.eclipse.jetty.server.Connector> type. String camel.component.jetty.thread-pool To use a custom thread pool for the server. This option should only be used in special circumstances. The option is a org.eclipse.jetty.util.thread.ThreadPool type. String camel.component.jetty.use-continuation Whether or not to use Jetty continuations for the Jetty Server. true Boolean camel.component.jetty.use-global-ssl-context-parameters Enable usage of global SSL context parameters false Boolean camel.component.jetty.use-x-forwarded-for-header To use the X-Forwarded-For header in HttpServletRequest.getRemoteAddr. false Boolean 174.4. Message Headers Camel uses the same message headers as the HTTP component. From Camel 2.2, it also uses (Exchange.HTTP_CHUNKED,CamelHttpChunked) header to turn on or turn off the chuched encoding on the camel-jetty consumer. Camel also populates all request.parameter and request.headers. For example, given a client request with the URL, http://myserver/myserver?orderid=123 , the exchange will contain a header named orderid with the value 123. Starting with Camel 2.2.0, you can get the request.parameter from the message header not only from Get Method, but also other HTTP method. 174.5. Usage The Jetty component supports both consumer and producer endpoints. Another option for producing to other HTTP endpoints, is to use the HTTP Component 174.6. Producer Example Warning The producer is deprecated - do not use. We only recommend using jetty as consumer (eg from jetty) The following is a basic example of how to send an HTTP request to an existing HTTP endpoint. in Java DSL from("direct:start") .to("jetty://http://www.google.com"); or in Spring XML <route> <from uri="direct:start"/> <to uri="jetty://http://www.google.com"/> <route> 174.7. Consumer Example In this sample we define a route that exposes a HTTP service at http://localhost:8080/myapp/myservice : Usage of localhost When you specify localhost in a URL, Camel exposes the endpoint only on the local TCP/IP network interface, so it cannot be accessed from outside the machine it operates on. If you need to expose a Jetty endpoint on a specific network interface, the numerical IP address of this interface should be used as the host. If you need to expose a Jetty endpoint on all network interfaces, the 0.0.0.0 address should be used. To listen across an entire URI prefix, see How do I let Jetty match wildcards . If you actually want to expose routes by HTTP and already have a Servlet, you should instead refer to the Servlet Transport . Our business logic is implemented in the MyBookService class, which accesses the HTTP request contents and then returns a response. Note: The assert call appears in this example, because the code is part of an unit test. The following sample shows a content-based route that routes all requests containing the URI parameter, one , to the endpoint, mock:one , and all others to mock:other . So if a client sends the HTTP request, http://serverUri?one=hello , the Jetty component will copy the HTTP request parameter, one to the exchange's in.header . We can then use the simple language to route exchanges that contain this header to a specific endpoint and all others to another. If we used a language more powerful than Simple (such as OGNL ) we could also test for the parameter value and do routing based on the header value as well. 174.8. Session Support The session support option, sessionSupport , can be used to enable a HttpSession object and access the session object while processing the exchange. For example, the following route enables sessions: <route> <from uri="jetty:http://0.0.0.0/myapp/myservice/?sessionSupport=true"/> <processRef ref="myCode"/> <route> The myCode Processor can be instantiated by a Spring bean element: <bean id="myCode"class="com.mycompany.MyCodeProcessor"/> Where the processor implementation can access the HttpSession as follows: public void process(Exchange exchange) throws Exception { HttpSession session = exchange.getIn(HttpMessage.class).getRequest().getSession(); ... } 174.9. SSL Support (HTTPS) Using the JSSE Configuration Utility As of Camel 2.8, the Jetty component supports SSL/TLS configuration through the Camel JSSE Configuration Utility . This utility greatly decreases the amount of component specific code you need to write and is configurable at the endpoint and component levels. The following examples demonstrate how to use the utility with the Jetty component. Programmatic configuration of the component KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource("/users/home/server/keystore.jks"); ksp.setPassword("keystorePassword"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword("keyPassword"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); JettyComponent jettyComponent = getContext().getComponent("jetty", JettyComponent.class); jettyComponent.setSslContextParameters(scp); Spring DSL based configuration of endpoint ... <camel:sslContextParameters id="sslContextParameters"> <camel:keyManagers keyPassword="keyPassword"> <camel:keyStore resource="/users/home/server/keystore.jks" password="keystorePassword"/> </camel:keyManagers> </camel:sslContextParameters>... ... <to uri="jetty:https://127.0.0.1/mail/?sslContextParameters=#sslContextParameters"/> ... Configuring Jetty Directly Jetty provides SSL support out of the box. To enable Jetty to run in SSL mode, simply format the URI with the https:// prefix---for example: <from uri="jetty:https://0.0.0.0/myapp/myservice/"/> Jetty also needs to know where to load your keystore from and what passwords to use in order to load the correct SSL certificate. Set the following JVM System Properties: until Camel 2.2 jetty.ssl.keystore specifies the location of the Java keystore file, which contains the Jetty server's own X.509 certificate in a key entry . A key entry stores the X.509 certificate (effectively, the public key ) and also its associated private key. jetty.ssl.password the store password, which is required to access the keystore file (this is the same password that is supplied to the keystore command's -storepass option). jetty.ssl.keypassword the key password, which is used to access the certificate's key entry in the keystore (this is the same password that is supplied to the keystore command's -keypass option). from Camel 2.3 onwards org.eclipse.jetty.ssl.keystore specifies the location of the Java keystore file, which contains the Jetty server's own X.509 certificate in a key entry . A key entry stores the X.509 certificate (effectively, the public key ) and also its associated private key. org.eclipse.jetty.ssl.password the store password, which is required to access the keystore file (this is the same password that is supplied to the keystore command's -storepass option). org.eclipse.jetty.ssl.keypassword the key password, which is used to access the certificate's key entry in the keystore (this is the same password that is supplied to the keystore command's -keypass option). For details of how to configure SSL on a Jetty endpoint, read the following documentation at the Jetty Site: http://docs.codehaus.org/display/JETTY/How+to+configure+SSL Some SSL properties aren't exposed directly by Camel, however Camel does expose the underlying SslSocketConnector, which will allow you to set properties like needClientAuth for mutual authentication requiring a client certificate or wantClientAuth for mutual authentication where a client doesn't need a certificate but can have one. There's a slight difference between the various Camel versions: Up to Camel 2.2 <bean id="jetty" class="org.apache.camel.component.jetty.JettyHttpComponent"> <property name="sslSocketConnectors"> <map> <entry key="8043"> <bean class="org.mortbay.jetty.security.SslSocketConnector"> <property name="password"value="..."/> <property name="keyPassword"value="..."/> <property name="keystore"value="..."/> <property name="needClientAuth"value="..."/> <property name="truststore"value="..."/> </bean> </entry> </map> </property> </bean> Camel 2.3, 2.4 <bean id="jetty" class="org.apache.camel.component.jetty.JettyHttpComponent"> <property name="sslSocketConnectors"> <map> <entry key="8043"> <bean class="org.eclipse.jetty.server.ssl.SslSocketConnector"> <property name="password"value="..."/> <property name="keyPassword"value="..."/> <property name="keystore"value="..."/> <property name="needClientAuth"value="..."/> <property name="truststore"value="..."/> </bean> </entry> </map> </property> </bean> *From Camel 2.5 we switch to use SslSelectChannelConnector * <bean id="jetty" class="org.apache.camel.component.jetty.JettyHttpComponent"> <property name="sslSocketConnectors"> <map> <entry key="8043"> <bean class="org.eclipse.jetty.server.ssl.SslSelectChannelConnector"> <property name="password"value="..."/> <property name="keyPassword"value="..."/> <property name="keystore"value="..."/> <property name="needClientAuth"value="..."/> <property name="truststore"value="..."/> </bean> </entry> </map> </property> </bean> The value you use as keys in the above map is the port you configure Jetty to listen on. 174.9.1. Configuring camel-jetty9 with TLS security on IBM Java The default TLS security settings in the camel-jetty9 component are not compatible with the IBM Java VM. All ciphers in IBM Java starts with prefix SSL_* , even ciphers for TLS protocol startes with SSL_* . camel-jetty9 supports only RFC Cipher Suite names and all SSL_* cipher are not secured and are excluded. Jetty excludes all SSL_* ciphers, so there is no negotiable cipher usable for TLS 1.2 and connection fails. As there is no way to change behavior of Jetty's ssl context, only workaround is to override the default TLS security configuration on the Jetty9 component. To achieve this, add the following code at the end of the method "sslContextParameters()" in the Application.java file. FilterParameters fp = new FilterParameters(); fp.getInclude().add(".*"); // Exclude weak / insecure ciphers fp.getExclude().add("^.*_(MD5|SHA|SHA1)USD"); // Exclude ciphers that don't support forward secrecy fp.getExclude().add("^TLS_RSA_.*USD"); // The following exclusions are present to cleanup known bad cipher // suites that may be accidentally included via include patterns. // The default enabled cipher list in Java will not include these // (but they are available in the supported list). /* SSL_ ciphers are not excluded fp.getExclude().add("^SSL_.*USD"); */ fp.getExclude().add("^.NULL.USD"); fp.getExclude().add("^.anon.USD"); p.setCipherSuitesFilter(fp); This code overrides excluded ciphers defined in Jetty by removing exclusion of all SSL_* ciphers. 174.9.2. Configuring general SSL properties Available as of Camel 2.5 Instead of a per port number specific SSL socket connector (as shown above) you can now configure general properties which applies for all SSL socket connectors (which is not explicit configured as above with the port number as entry). <bean id="jetty" class="org.apache.camel.component.jetty.JettyHttpComponent"> <property name="sslSocketConnectorProperties"> <map> <entry key="password"value="..."/> <entry key="keyPassword"value="..."/> <entry key="keystore"value="..."/> <entry key="needClientAuth"value="..."/> <entry key="truststore"value="..."/> </map> </property> </bean> 174.9.3. How to obtain reference to the X509Certificate Jetty stores a reference to the certificate in the HttpServletRequest which you can access from code as follows: HttpServletRequest req = exchange.getIn().getBody(HttpServletRequest.class); X509Certificate cert = (X509Certificate) req.getAttribute("javax.servlet.request.X509Certificate") 174.9.4. Configuring general HTTP properties Available as of Camel 2.5 Instead of a per port number specific HTTP socket connector (as shown above) you can now configure general properties which applies for all HTTP socket connectors (which is not explicit configured as above with the port number as entry). <bean id="jetty" class="org.apache.camel.component.jetty.JettyHttpComponent"> <property name="socketConnectorProperties"> <map> <entry key="acceptors" value="4"/> <entry key="maxIdleTime" value="300000"/> </map> </property> </bean> 174.9.5. Obtaining X-Forwarded-For header with HttpServletRequest.getRemoteAddr() If the HTTP requests are handled by an Apache server and forwarded to jetty with mod_proxy, the original client IP address is in the X-Forwarded-For header and the HttpServletRequest.getRemoteAddr() will return the address of the Apache proxy. Jetty has a forwarded property which takes the value from X-Forwarded-For and places it in the HttpServletRequest remoteAddr property. This property is not available directly through the endpoint configuration but it can be easily added using the socketConnectors property: <bean id="jetty" class="org.apache.camel.component.jetty.JettyHttpComponent"> <property name="socketConnectors"> <map> <entry key="8080"> <bean class="org.eclipse.jetty.server.nio.SelectChannelConnector"> <property name="forwarded" value="true"/> </bean> </entry> </map> </property> </bean> This is particularly useful when an existing Apache server handles TLS connections for a domain and proxies them to application servers internally. 174.10. Default behavior for returning HTTP status codes The default behavior of HTTP status codes is defined by the org.apache.camel.component.http.DefaultHttpBinding class, which handles how a response is written and also sets the HTTP status code. If the exchange was processed successfully, the 200 HTTP status code is returned. If the exchange failed with an exception, the 500 HTTP status code is returned, and the stacktrace is returned in the body. If you want to specify which HTTP status code to return, set the code in the Exchange.HTTP_RESPONSE_CODE header of the OUT message. 174.11. Customizing HttpBinding By default, Camel uses the org.apache.camel.component.http.DefaultHttpBinding to handle how a response is written. If you like, you can customize this behavior either by implementing your own HttpBinding class or by extending DefaultHttpBinding and overriding the appropriate methods. The following example shows how to customize the DefaultHttpBinding in order to change how exceptions are returned: We can then create an instance of our binding and register it in the Spring registry as follows: <bean id="mybinding"class="com.mycompany.MyHttpBinding"/> And then we can reference this binding when we define the route: <route> <from uri="jetty:http://0.0.0.0:8080/myapp/myservice?httpBindingRef=mybinding"/> <to uri="bean:doSomething"/> </route> 174.12. Jetty handlers and security configuration You can configure a list of Jetty handlers on the endpoint, which can be useful for enabling advanced Jetty security features. These handlers are configured in Spring XML as follows: <-- Jetty Security handling --> <bean id="userRealm" class="org.mortbay.jetty.plus.jaas.JAASUserRealm"> <property name="name" value="tracker-users"/> <property name="loginModuleName" value="ldaploginmodule"/> </bean> <bean id="constraint" class="org.mortbay.jetty.security.Constraint"> <property name="name" value="BASIC"/> <property name="roles" value="tracker-users"/> <property name="authenticate" value="true"/> </bean> <bean id="constraintMapping" class="org.mortbay.jetty.security.ConstraintMapping"> <property name="constraint" ref="constraint"/> <property name="pathSpec" value="/*"/> </bean> <bean id="securityHandler" class="org.mortbay.jetty.security.SecurityHandler"> <property name="userRealm" ref="userRealm"/> <property name="constraintMappings" ref="constraintMapping"/> </bean> And from Camel 2.3 onwards you can configure a list of Jetty handlers as follows: <-- Jetty Security handling --> <bean id="constraint" class="org.eclipse.jetty.http.security.Constraint"> <property name="name" value="BASIC"/> <property name="roles" value="tracker-users"/> <property name="authenticate" value="true"/> </bean> <bean id="constraintMapping" class="org.eclipse.jetty.security.ConstraintMapping"> <property name="constraint" ref="constraint"/> <property name="pathSpec" value="/*"/> </bean> <bean id="securityHandler" class="org.eclipse.jetty.security.ConstraintSecurityHandler"> <property name="authenticator"> <bean class="org.eclipse.jetty.security.authentication.BasicAuthenticator"/> </property> <property name="constraintMappings"> <list> <ref bean="constraintMapping"/> </list> </property> </bean> You can then define the endpoint as: from("jetty:http://0.0.0.0:9080/myservice?handlers=securityHandler") If you need more handlers, set the handlers option equal to a comma-separated list of bean IDs. 174.13. How to return a custom HTTP 500 reply message You may want to return a custom reply message when something goes wrong, instead of the default reply message Camel Jetty replies with. You could use a custom HttpBinding to be in control of the message mapping, but often it may be easier to use Camel's Exception Clause to construct the custom reply message. For example as show here, where we return Dude something went wrong with HTTP error code 500: 174.14. Multi-part Form support From Camel 2.3.0, camel-jetty support to multipart form post out of box. The submitted form-data are mapped into the message header. Camel-jetty creates an attachment for each uploaded file. The file name is mapped to the name of the attachment. The content type is set as the content type of the attachment file name. You can find the example here. Note: getName() functions as shown below in versions 2.5 and higher. In earlier versions you receive the temporary file name for the attachment instead 174.15. Jetty JMX support From Camel 2.3.0, camel-jetty supports the enabling of Jetty's JMX capabilities at the component and endpoint level with the endpoint configuration taking priority. Note that JMX must be enabled within the Camel context in order to enable JMX support in this component as the component provides Jetty with a reference to the MBeanServer registered with the Camel context. Because the camel-jetty component caches and reuses Jetty resources for a given protocol/host/port pairing, this configuration option will only be evaluated during the creation of the first endpoint to use a protocol/host/port pairing. For example, given two routes created from the following XML fragments, JMX support would remain enabled for all endpoints listening on "https://0.0.0.0". <from uri="jetty:https://0.0.0.0/myapp/myservice1/?enableJmx=true"/> <from uri="jetty:https://0.0.0.0/myapp/myservice2/?enableJmx=false"/> The camel-jetty component also provides for direct configuration of the Jetty MBeanContainer. Jetty creates MBean names dynamically. If you are running another instance of Jetty outside of the Camel context and sharing the same MBeanServer between the instances, you can provide both instances with a reference to the same MBeanContainer in order to avoid name collisions when registering Jetty MBeans. | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jetty</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"jetty:http://hostname[:port][/resourceUri][?options]",
"jetty:httpUri",
"from(\"direct:start\") .to(\"jetty://http://www.google.com\");",
"<route> <from uri=\"direct:start\"/> <to uri=\"jetty://http://www.google.com\"/> <route>",
"<route> <from uri=\"jetty:http://0.0.0.0/myapp/myservice/?sessionSupport=true\"/> <processRef ref=\"myCode\"/> <route>",
"<bean id=\"myCode\"class=\"com.mycompany.MyCodeProcessor\"/>",
"public void process(Exchange exchange) throws Exception { HttpSession session = exchange.getIn(HttpMessage.class).getRequest().getSession(); }",
"KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource(\"/users/home/server/keystore.jks\"); ksp.setPassword(\"keystorePassword\"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword(\"keyPassword\"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); JettyComponent jettyComponent = getContext().getComponent(\"jetty\", JettyComponent.class); jettyComponent.setSslContextParameters(scp);",
"<camel:sslContextParameters id=\"sslContextParameters\"> <camel:keyManagers keyPassword=\"keyPassword\"> <camel:keyStore resource=\"/users/home/server/keystore.jks\" password=\"keystorePassword\"/> </camel:keyManagers> </camel:sslContextParameters> <to uri=\"jetty:https://127.0.0.1/mail/?sslContextParameters=#sslContextParameters\"/>",
"<from uri=\"jetty:https://0.0.0.0/myapp/myservice/\"/>",
"<bean id=\"jetty\" class=\"org.apache.camel.component.jetty.JettyHttpComponent\"> <property name=\"sslSocketConnectors\"> <map> <entry key=\"8043\"> <bean class=\"org.mortbay.jetty.security.SslSocketConnector\"> <property name=\"password\"value=\"...\"/> <property name=\"keyPassword\"value=\"...\"/> <property name=\"keystore\"value=\"...\"/> <property name=\"needClientAuth\"value=\"...\"/> <property name=\"truststore\"value=\"...\"/> </bean> </entry> </map> </property> </bean>",
"<bean id=\"jetty\" class=\"org.apache.camel.component.jetty.JettyHttpComponent\"> <property name=\"sslSocketConnectors\"> <map> <entry key=\"8043\"> <bean class=\"org.eclipse.jetty.server.ssl.SslSocketConnector\"> <property name=\"password\"value=\"...\"/> <property name=\"keyPassword\"value=\"...\"/> <property name=\"keystore\"value=\"...\"/> <property name=\"needClientAuth\"value=\"...\"/> <property name=\"truststore\"value=\"...\"/> </bean> </entry> </map> </property> </bean>",
"<bean id=\"jetty\" class=\"org.apache.camel.component.jetty.JettyHttpComponent\"> <property name=\"sslSocketConnectors\"> <map> <entry key=\"8043\"> <bean class=\"org.eclipse.jetty.server.ssl.SslSelectChannelConnector\"> <property name=\"password\"value=\"...\"/> <property name=\"keyPassword\"value=\"...\"/> <property name=\"keystore\"value=\"...\"/> <property name=\"needClientAuth\"value=\"...\"/> <property name=\"truststore\"value=\"...\"/> </bean> </entry> </map> </property> </bean>",
"FilterParameters fp = new FilterParameters(); fp.getInclude().add(\".*\"); // Exclude weak / insecure ciphers fp.getExclude().add(\"^.*_(MD5|SHA|SHA1)USD\"); // Exclude ciphers that don't support forward secrecy fp.getExclude().add(\"^TLS_RSA_.*USD\"); // The following exclusions are present to cleanup known bad cipher // suites that may be accidentally included via include patterns. // The default enabled cipher list in Java will not include these // (but they are available in the supported list). /* SSL_ ciphers are not excluded fp.getExclude().add(\"^SSL_.*USD\"); */ fp.getExclude().add(\"^.NULL.USD\"); fp.getExclude().add(\"^.anon.USD\"); p.setCipherSuitesFilter(fp);",
"<bean id=\"jetty\" class=\"org.apache.camel.component.jetty.JettyHttpComponent\"> <property name=\"sslSocketConnectorProperties\"> <map> <entry key=\"password\"value=\"...\"/> <entry key=\"keyPassword\"value=\"...\"/> <entry key=\"keystore\"value=\"...\"/> <entry key=\"needClientAuth\"value=\"...\"/> <entry key=\"truststore\"value=\"...\"/> </map> </property> </bean>",
"HttpServletRequest req = exchange.getIn().getBody(HttpServletRequest.class); X509Certificate cert = (X509Certificate) req.getAttribute(\"javax.servlet.request.X509Certificate\")",
"<bean id=\"jetty\" class=\"org.apache.camel.component.jetty.JettyHttpComponent\"> <property name=\"socketConnectorProperties\"> <map> <entry key=\"acceptors\" value=\"4\"/> <entry key=\"maxIdleTime\" value=\"300000\"/> </map> </property> </bean>",
"<bean id=\"jetty\" class=\"org.apache.camel.component.jetty.JettyHttpComponent\"> <property name=\"socketConnectors\"> <map> <entry key=\"8080\"> <bean class=\"org.eclipse.jetty.server.nio.SelectChannelConnector\"> <property name=\"forwarded\" value=\"true\"/> </bean> </entry> </map> </property> </bean>",
"<bean id=\"mybinding\"class=\"com.mycompany.MyHttpBinding\"/>",
"<route> <from uri=\"jetty:http://0.0.0.0:8080/myapp/myservice?httpBindingRef=mybinding\"/> <to uri=\"bean:doSomething\"/> </route>",
"<-- Jetty Security handling --> <bean id=\"userRealm\" class=\"org.mortbay.jetty.plus.jaas.JAASUserRealm\"> <property name=\"name\" value=\"tracker-users\"/> <property name=\"loginModuleName\" value=\"ldaploginmodule\"/> </bean> <bean id=\"constraint\" class=\"org.mortbay.jetty.security.Constraint\"> <property name=\"name\" value=\"BASIC\"/> <property name=\"roles\" value=\"tracker-users\"/> <property name=\"authenticate\" value=\"true\"/> </bean> <bean id=\"constraintMapping\" class=\"org.mortbay.jetty.security.ConstraintMapping\"> <property name=\"constraint\" ref=\"constraint\"/> <property name=\"pathSpec\" value=\"/*\"/> </bean> <bean id=\"securityHandler\" class=\"org.mortbay.jetty.security.SecurityHandler\"> <property name=\"userRealm\" ref=\"userRealm\"/> <property name=\"constraintMappings\" ref=\"constraintMapping\"/> </bean>",
"<-- Jetty Security handling --> <bean id=\"constraint\" class=\"org.eclipse.jetty.http.security.Constraint\"> <property name=\"name\" value=\"BASIC\"/> <property name=\"roles\" value=\"tracker-users\"/> <property name=\"authenticate\" value=\"true\"/> </bean> <bean id=\"constraintMapping\" class=\"org.eclipse.jetty.security.ConstraintMapping\"> <property name=\"constraint\" ref=\"constraint\"/> <property name=\"pathSpec\" value=\"/*\"/> </bean> <bean id=\"securityHandler\" class=\"org.eclipse.jetty.security.ConstraintSecurityHandler\"> <property name=\"authenticator\"> <bean class=\"org.eclipse.jetty.security.authentication.BasicAuthenticator\"/> </property> <property name=\"constraintMappings\"> <list> <ref bean=\"constraintMapping\"/> </list> </property> </bean>",
"from(\"jetty:http://0.0.0.0:9080/myservice?handlers=securityHandler\")",
"<from uri=\"jetty:https://0.0.0.0/myapp/myservice1/?enableJmx=true\"/>",
"<from uri=\"jetty:https://0.0.0.0/myapp/myservice2/?enableJmx=false\"/>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/jetty-component |
6.14.5. Configuring Redundant Ring Protocol | 6.14.5. Configuring Redundant Ring Protocol As of Red Hat Enterprise Linux 6.4, the Red Hat High Availability Add-On supports the configuration of redundant ring protocol. When using redundant ring protocol, there are a variety of considerations you must take into account, as described in Section 8.6, "Configuring Redundant Ring Protocol" . To specify a second network interface to use for redundant ring protocol, you add an alternate name for the node using the --addalt option of the ccs command: For example, the following command configures the alternate name clusternet-node1-eth2 for the cluster node clusternet-node1-eth1 : Optionally, you can manually specify a multicast address, a port, and a TTL for the second ring. If you specify a multicast for the second ring, either the alternate multicast address or the alternate port must be different from the multicast address for the first ring. If you specify an alternate port, the port numbers of the first ring and the second ring must differ by at least two, since the system itself uses port and port-1 to perform operations. If you do not specify an alternate multicast address, the system will automatically use a different multicast address for the second ring. To specify an alternate multicast address, port, or TTL for the second ring, you use the --setaltmulticast option of the ccs command: For example, the following command sets an alternate multicast address of 239.192.99.88, a port of 888, and a TTL of 3 for the cluster defined in the cluster.conf file on node clusternet-node1-eth1 : To remove an alternate multicast address, specify the --setaltmulticast option of the ccs command but do not specify a multicast address. Note that executing this command resets all other properties that you can set with the --setaltmulticast option to their default values, as described in Section 6.1.5, "Commands that Overwrite Settings" . When you have finished configuring all of the components of your cluster, you will need to sync the cluster configuration file to all of the nodes, as described in Section 6.15, "Propagating the Configuration File to the Cluster Nodes" . | [
"ccs -h host --addalt node_name alt_name",
"ccs -h clusternet-node1-eth1 --addalt clusternet-node1-eth1 clusternet-node1-eth2",
"ccs -h host --setaltmulticast [ alt_multicast_address ] [ alt_multicast_options ].",
"ccs -h clusternet-node1-eth1 --setaltmulticast 239.192.99.88 port=888 ttl=3"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-rrp-ccs-CA |
Chapter 6. Subscriptions | Chapter 6. Subscriptions 6.1. Subscription offerings Red Hat OpenShift Data Foundation subscription is based on "core-pairs," similar to Red Hat OpenShift Container Platform. The Red Hat OpenShift Data Foundation 2-core subscription is based on the number of logical cores on the CPUs in the system where OpenShift Container Platform runs. As with OpenShift Container Platform: OpenShift Data Foundation subscriptions are stackable to cover larger hosts. Cores can be distributed across as many virtual machines (VMs) as needed. For example, ten 2-core subscriptions will provide 20 cores and in case of IBM Power a 2-core subscription at SMT level of 8 will provide 2 cores or 16 vCPUs that can be used across any number of VMs. OpenShift Data Foundation subscriptions are available with Premium or Standard support. 6.2. Disaster recovery subscriptions Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription Any Red Hat OpenShift Data Foundation cluster containing PVs participating in active replication either as a source or destination requires OpenShift Data Foundation Advanced entitlement. This subscription should be active on both source and destination clusters. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 6.3. Cores versus vCPUs and hyperthreading Making a determination about whether or not a particular system consumes one or more cores is currently dependent on whether or not that system has hyperthreading available. Hyperthreading is only a feature of Intel CPUs. Visit the Red Hat Customer Portal to determine whether a particular system supports hyperthreading. For systems where hyperthreading is enabled and where one hyperthread equates to one visible system core, the calculation of cores is a ratio of 2 cores to 4 vCPUs. Therefore, a 2-core subscription covers 4 vCPUs in a hyperthreaded system. A large virtual machine (VM) might have 8 vCPUs, equating to 4 subscription cores. As subscriptions come in 2-core units, you will need two 2-core subscriptions to cover these 4 cores or 8 vCPUs. Where hyperthreading is not enabled, and where each visible system core correlates directly to an underlying physical core, the calculation of cores is a ratio of 2 cores to 2 vCPUs. 6.3.1. Cores versus vCPUs and simultaneous multithreading (SMT) for IBM Power Making a determination about whether or not a particular system consumes one or more cores is currently dependent on the level of simultaneous multithreading configured (SMT). IBM Power provide simultaneous multithreading levels of 1, 2, 4 or 8 for each core which correspond to the number of vCPUs as in the table below. Table 6.1. Different SMT levels and their corresponding vCPUs SMT level SMT=1 SMT=2 SMT=4 SMT=8 1 Core # vCPUs=1 # vCPUs=2 # vCPUs=4 # vCPUs=8 2 Cores # vCPUs=2 # vCPUs=4 # vCPUs=8 # vCPUs=16 4 Cores # vCPUs=4 # vCPUs=8 # vCPUs=16 # vCPUs=32 For systems where SMT is configured the calculation for the number of cores required for subscription purposes depends on the SMT level. Therefore, a 2-core subscription corresponds to 2 vCPUs on SMT level of 1, and to 4 vCPUs on SMT level of 2, and to 8 vCPUs on SMT level of 4 and to 16 vCPUs on SMT level of 8 as seen in the table above. A large virtual machine (VM) might have 16 vCPUs, which at a SMT level 8 will require a 2 core subscription based on dividing the # of vCPUs by the SMT level (16 vCPUs / 8 for SMT-8 = 2). As subscriptions come in 2-core units, you will need one 2-core subscription to cover these 2 cores or 16 vCPUs. 6.4. Splitting cores Systems that require an odd number of cores need to consume a full 2-core subscription. For example, a system that is calculated to require only 1 core will end up consuming a full 2-core subscription once it is registered and subscribed. When a single virtual machine (VM) with 2 vCPUs uses hyperthreading resulting in 1 calculated vCPU, a full 2-core subscription is required; a single 2-core subscription may not be split across two VMs with 2 vCPUs using hyperthreading. See section Cores versus vCPUs and hyperthreading for more information. It is recommended that virtual instances be sized so that they require an even number of cores. 6.4.1. Shared Processor Pools for IBM Power IBM Power has a notion of shared processor pools. The processors in a shared processor pool can be shared across the nodes in the cluster. The aggregate compute capacity required for a Red Hat OpenShift Data Foundation should be a multiple of core-pairs. 6.5. Subscription requirements Red Hat OpenShift Data Foundation components can run on either OpenShift Container Platform worker or infrastructure nodes, for which you can use either Red Hat CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.4 as the host operating system. RHEL 7 is now deprecated. OpenShift Data Foundation subscriptions are required for every OpenShift Container Platform subscribed core with a ratio of 1:1. When using infrastructure nodes, the rule to subscribe all OpenShift worker node cores for OpenShift Data Foundation applies even though they don't need any OpenShift Container Platform or any OpenShift Data Foundation subscriptions. You can use labels to state whether a node is a worker or an infrastructure node. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation in the Managing and Allocating Storage Resources guide. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/planning_your_deployment/subscriptions_rhodf |
Chapter 201. Kubernetes Namespaces Component | Chapter 201. Kubernetes Namespaces Component Available as of Camel version 2.17 The Kubernetes Namespaces component is one of Kubernetes Components which provides a producer to execute kubernetes namespace operations and a consumer to consume kubernetes namespace events. 201.1. Component Options The Kubernetes Namespaces component has no options. 201.2. Endpoint Options The Kubernetes Namespaces endpoint is configured using URI syntax: with the following path and query parameters: 201.2.1. Path Parameters (1 parameters): Name Description Default Type masterUrl Required Kubernetes API server URL String 201.2.2. Query Parameters (28 parameters): Name Description Default Type apiVersion (common) The Kubernetes API Version to use String dnsDomain (common) The dns domain, used for ServiceCall EIP String kubernetesClient (common) Default KubernetesClient to use if provided KubernetesClient portName (common) The port name, used for ServiceCall EIP String portProtocol (common) The port protocol, used for ServiceCall EIP tcp String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean labelKey (consumer) The Consumer Label key when watching at some resources String labelValue (consumer) The Consumer Label value when watching at some resources String namespace (consumer) The namespace String poolSize (consumer) The Consumer pool size 1 int resourceName (consumer) The Consumer Resource Name we would like to watch String exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern operation (producer) Producer operation to do on Kubernetes String connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean caCertData (security) The CA Cert Data String caCertFile (security) The CA Cert File String clientCertData (security) The Client Cert Data String clientCertFile (security) The Client Cert File String clientKeyAlgo (security) The Key Algorithm used by the client String clientKeyData (security) The Client Key data String clientKeyFile (security) The Client Key file String clientKeyPassphrase (security) The Client Key Passphrase String oauthToken (security) The Auth Token String password (security) Password to connect to Kubernetes String trustCerts (security) Define if the certs we used are trusted anyway or not Boolean username (security) Username to connect to Kubernetes String 201.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean | [
"kubernetes-namespaces:masterUrl"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/kubernetes-namespaces-component |
6.8. Dynamic VDB ZIP Deployment | 6.8. Dynamic VDB ZIP Deployment For more complicated scenarios you can deploy a VDB via a ZIP file similar. In a VDB ZIP deployment: The deployment must end with the extension .vdb . The VDB XML file must be named vdb.xml and placed in the ZIP under the META-INF directory. If a lib folder exists, any JARs found underneath will automatically be added to the VDB classpath. For backwards compatibility with Teiid Designer VDBs, if any .INDEX file exists, the default metadata repository will be assumed to be INDEX. Files within the VDB ZIP are accessible by a Custom Metadata Repository using the MetadataFactory.getVDBResources() method, which returns a map of all VDBResources in the VDB keyed by absolute path relative to the VDB root. See Red Hat JBoss Data Virtualization Development Guide: Server Development for more information about custom metadata repositories. The built-in DDL-FILE metadata repository type may be used to define DDL-based metadata in files outside of the vdb.xml . This improves the memory footprint of the VDB metadata and the maintainability of vdb.xml . Example 6.2. Example VDB Zip Structure In the above example the vdb.xml could use a DDL-FILE metadata type for schema1 : | [
"/META-INF vdb.xml /ddl schema1.ddl /lib some-udf.jar",
"<model name=\"schema1\" <metadata type=\"DDL-FILE\">/ddl/schema1.ddl<metadata> </model>"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/dynamic_vdb_zip_deployment |
Chapter 24. Scheduler [config.openshift.io/v1] | Chapter 24. Scheduler [config.openshift.io/v1] Description Scheduler holds cluster-wide config information to run the Kubernetes Scheduler and influence its placement decisions. The canonical name for this config is cluster . Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 24.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 24.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description defaultNodeSelector string defaultNodeSelector helps set the cluster-wide default node selector to restrict pod placement to specific nodes. This is applied to the pods created in all namespaces and creates an intersection with any existing nodeSelectors already set on a pod, additionally constraining that pod's selector. For example, defaultNodeSelector: "type=user-node,region=east" would set nodeSelector field in pod spec to "type=user-node,region=east" to all pods created in all namespaces. Namespaces having project-wide node selectors won't be impacted even if this field is set. This adds an annotation section to the namespace. For example, if a new namespace is created with node-selector='type=user-node,region=east', the annotation openshift.io/node-selector: type=user-node,region=east gets added to the project. When the openshift.io/node-selector annotation is set on the project the value is used in preference to the value we are setting for defaultNodeSelector field. For instance, openshift.io/node-selector: "type=user-node,region=west" means that the default of "type=user-node,region=east" set in defaultNodeSelector would not be applied. mastersSchedulable boolean MastersSchedulable allows masters nodes to be schedulable. When this flag is turned on, all the master nodes in the cluster will be made schedulable, so that workload pods can run on them. The default value for this field is false, meaning none of the master nodes are schedulable. Important Note: Once the workload pods start running on the master nodes, extreme care must be taken to ensure that cluster-critical control plane components are not impacted. Please turn on this field after doing due diligence. policy object DEPRECATED: the scheduler Policy API has been deprecated and will be removed in a future release. policy is a reference to a ConfigMap containing scheduler policy which has user specified predicates and priorities. If this ConfigMap is not available scheduler will default to use DefaultAlgorithmProvider. The namespace for this configmap is openshift-config. profile string profile sets which scheduling profile should be set in order to configure scheduling decisions for new pods. Valid values are "LowNodeUtilization", "HighNodeUtilization", "NoScoring" Defaults to "LowNodeUtilization" 24.1.2. .spec.policy Description DEPRECATED: the scheduler Policy API has been deprecated and will be removed in a future release. policy is a reference to a ConfigMap containing scheduler policy which has user specified predicates and priorities. If this ConfigMap is not available scheduler will default to use DefaultAlgorithmProvider. The namespace for this configmap is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 24.1.3. .status Description status holds observed values from the cluster. They may not be overridden. Type object 24.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/schedulers DELETE : delete collection of Scheduler GET : list objects of kind Scheduler POST : create a Scheduler /apis/config.openshift.io/v1/schedulers/{name} DELETE : delete a Scheduler GET : read the specified Scheduler PATCH : partially update the specified Scheduler PUT : replace the specified Scheduler /apis/config.openshift.io/v1/schedulers/{name}/status GET : read status of the specified Scheduler PATCH : partially update status of the specified Scheduler PUT : replace status of the specified Scheduler 24.2.1. /apis/config.openshift.io/v1/schedulers HTTP method DELETE Description delete collection of Scheduler Table 24.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Scheduler Table 24.2. HTTP responses HTTP code Reponse body 200 - OK SchedulerList schema 401 - Unauthorized Empty HTTP method POST Description create a Scheduler Table 24.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.4. Body parameters Parameter Type Description body Scheduler schema Table 24.5. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 201 - Created Scheduler schema 202 - Accepted Scheduler schema 401 - Unauthorized Empty 24.2.2. /apis/config.openshift.io/v1/schedulers/{name} Table 24.6. Global path parameters Parameter Type Description name string name of the Scheduler HTTP method DELETE Description delete a Scheduler Table 24.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 24.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Scheduler Table 24.9. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Scheduler Table 24.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.11. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Scheduler Table 24.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.13. Body parameters Parameter Type Description body Scheduler schema Table 24.14. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 201 - Created Scheduler schema 401 - Unauthorized Empty 24.2.3. /apis/config.openshift.io/v1/schedulers/{name}/status Table 24.15. Global path parameters Parameter Type Description name string name of the Scheduler HTTP method GET Description read status of the specified Scheduler Table 24.16. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Scheduler Table 24.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.18. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Scheduler Table 24.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.20. Body parameters Parameter Type Description body Scheduler schema Table 24.21. HTTP responses HTTP code Reponse body 200 - OK Scheduler schema 201 - Created Scheduler schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/config_apis/scheduler-config-openshift-io-v1 |
function::get_loadavg_index | function::get_loadavg_index Name function::get_loadavg_index - Get the load average for a specified interval Synopsis Arguments indx The load average interval to capture. Description This function returns the load average at a specified interval. The three load average values 1, 5 and 15 minute average corresponds to indexes 0, 1 and 2 of the avenrun array - see linux/sched.h. Please note that the truncated-integer portion of the load average is returned. If the specified index is out-of-bounds, then an error message and exception is thrown. | [
"get_loadavg_index:long(indx:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-get-loadavg-index |
Chapter 7. HostFirmwareComponents [metal3.io/v1alpha1] | Chapter 7. HostFirmwareComponents [metal3.io/v1alpha1] Description HostFirmwareComponents is the Schema for the hostfirmwarecomponents API. Type object 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HostFirmwareComponentsSpec defines the desired state of HostFirmwareComponents. status object HostFirmwareComponentsStatus defines the observed state of HostFirmwareComponents. 7.1.1. .spec Description HostFirmwareComponentsSpec defines the desired state of HostFirmwareComponents. Type object Required updates Property Type Description updates array updates[] object FirmwareUpdate defines a firmware update specification. 7.1.2. .spec.updates Description Type array 7.1.3. .spec.updates[] Description FirmwareUpdate defines a firmware update specification. Type object Required component url Property Type Description component string url string 7.1.4. .status Description HostFirmwareComponentsStatus defines the observed state of HostFirmwareComponents. Type object Property Type Description components array Components is the list of all available firmware components and their information. components[] object FirmwareComponentStatus defines the status of a firmware component. conditions array Track whether updates stored in the spec are valid based on the schema conditions[] object Condition contains details for one aspect of the current state of this API Resource. lastUpdated string Time that the status was last updated updates array Updates is the list of all firmware components that should be updated they are specified via name and url fields. updates[] object FirmwareUpdate defines a firmware update specification. 7.1.5. .status.components Description Components is the list of all available firmware components and their information. Type array 7.1.6. .status.components[] Description FirmwareComponentStatus defines the status of a firmware component. Type object Required component initialVersion Property Type Description component string currentVersion string initialVersion string lastVersionFlashed string updatedAt string 7.1.7. .status.conditions Description Track whether updates stored in the spec are valid based on the schema Type array 7.1.8. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 7.1.9. .status.updates Description Updates is the list of all firmware components that should be updated they are specified via name and url fields. Type array 7.1.10. .status.updates[] Description FirmwareUpdate defines a firmware update specification. Type object Required component url Property Type Description component string url string 7.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/hostfirmwarecomponents GET : list objects of kind HostFirmwareComponents /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwarecomponents DELETE : delete collection of HostFirmwareComponents GET : list objects of kind HostFirmwareComponents POST : create HostFirmwareComponents /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwarecomponents/{name} DELETE : delete HostFirmwareComponents GET : read the specified HostFirmwareComponents PATCH : partially update the specified HostFirmwareComponents PUT : replace the specified HostFirmwareComponents /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwarecomponents/{name}/status GET : read status of the specified HostFirmwareComponents PATCH : partially update status of the specified HostFirmwareComponents PUT : replace status of the specified HostFirmwareComponents 7.2.1. /apis/metal3.io/v1alpha1/hostfirmwarecomponents HTTP method GET Description list objects of kind HostFirmwareComponents Table 7.1. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponentsList schema 401 - Unauthorized Empty 7.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwarecomponents HTTP method DELETE Description delete collection of HostFirmwareComponents Table 7.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HostFirmwareComponents Table 7.3. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponentsList schema 401 - Unauthorized Empty HTTP method POST Description create HostFirmwareComponents Table 7.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.5. Body parameters Parameter Type Description body HostFirmwareComponents schema Table 7.6. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponents schema 201 - Created HostFirmwareComponents schema 202 - Accepted HostFirmwareComponents schema 401 - Unauthorized Empty 7.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwarecomponents/{name} Table 7.7. Global path parameters Parameter Type Description name string name of the HostFirmwareComponents HTTP method DELETE Description delete HostFirmwareComponents Table 7.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HostFirmwareComponents Table 7.10. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponents schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HostFirmwareComponents Table 7.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.12. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponents schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HostFirmwareComponents Table 7.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.14. Body parameters Parameter Type Description body HostFirmwareComponents schema Table 7.15. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponents schema 201 - Created HostFirmwareComponents schema 401 - Unauthorized Empty 7.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwarecomponents/{name}/status Table 7.16. Global path parameters Parameter Type Description name string name of the HostFirmwareComponents HTTP method GET Description read status of the specified HostFirmwareComponents Table 7.17. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponents schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HostFirmwareComponents Table 7.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.19. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponents schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HostFirmwareComponents Table 7.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.21. Body parameters Parameter Type Description body HostFirmwareComponents schema Table 7.22. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponents schema 201 - Created HostFirmwareComponents schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/provisioning_apis/hostfirmwarecomponents-metal3-io-v1alpha1 |
Chapter 19. Authenticating KIE Server through RH-SSO | Chapter 19. Authenticating KIE Server through RH-SSO KIE Server provides a REST API for third-party clients. If you integrate KIE Server with RH-SSO, you can delegate third-party client identity management to the RH-SSO server. After you create a realm client for Red Hat Decision Manager and set up the RH-SSO client adapter for Red Hat JBoss EAP, you can set up RH-SSO authentication for KIE Server. Prerequisites RH-SSO is installed as described in Chapter 16, Installing and configuring RH-SSO . At least one user with the kie-server role has been added to RH-SSO as described in Section 17.1, "Adding Red Hat Decision Manager users" . KIE Server is installed in a Red Hat JBoss EAP 7.4 instance, as described in Installing and configuring Red Hat Decision Manager on Red Hat JBoss EAP 7.4 . This chapter contains the following sections: Section 19.1, "Creating the KIE Server client on RH-SSO" Section 19.2, "Installing and configuring KIE Server with the client adapter" Section 19.3, "KIE Server token-based authentication" Note Except for Section 19.1, "Creating the KIE Server client on RH-SSO" , this section is intended for standalone installations. If you are integrating RH-SSO and Red Hat Decision Manager on Red Hat OpenShift Container Platform, complete the steps in Section 19.1, "Creating the KIE Server client on RH-SSO" and then deploy the Red Hat Decision Manager environment on Red Hat OpenShift Container Platform. For information about deploying Red Hat Decision Manager on Red Hat OpenShift Container Platform, see Deploying Red Hat Decision Manager on Red Hat OpenShift Container Platform . 19.1. Creating the KIE Server client on RH-SSO Use the RH-SSO Admin Console to create a KIE Server client in an existing realm. Prerequisites KIE Server is installed in a Red Hat JBoss EAP 7.4 server, as described in Installing and configuring Red Hat Decision Manager on Red Hat JBoss EAP 7.4 . RH-SSO is installed as described in Chapter 16, Installing and configuring RH-SSO . At least one user with the kie-server role has been added to RH-SSO as described in Section 17.1, "Adding Red Hat Decision Manager users" . Procedure In the RH-SSO Admin Console, open the security realm that you created in Chapter 16, Installing and configuring RH-SSO . Click Clients and click Create . The Add Client page opens. On the Add Client page, provide the required information to create a KIE Server client for your realm, then click Save . For example: Client ID : kie-execution-server Root URL : http:// localhost :8080/kie-server Client protocol : openid-connect Note If you are configuring RH-SSO with Red Hat OpenShift Container Platform, enter the URL that is exposed by the KIE Server routes. Your OpenShift administrator can provide this URL if necessary. The new client Access Type is set to public by default. Change it to confidential and click Save again. Navigate to the Credentials tab and copy the secret key. The secret key is required to configure the kie-execution-server client. Note The RH-SSO server client uses one URL to a single KIE Server deployment. The following error message might be displayed if there are two or more deployment configurations: We are sorry... Invalid parameter: redirect_uri To resolve this error, append /* to the Valid Redirect URIs field in the client configuration. On the Configure page, go to Clients > kie-execution-server > Settings , and append the Valid Redirect URIs field with /* , for example: 19.2. Installing and configuring KIE Server with the client adapter After you install RH-SSO, you must install the RH-SSO client adapter for Red Hat JBoss EAP and configure it for KIE Server. Prerequisites KIE Server is installed in a Red Hat JBoss EAP 7.4 server, as described in Installing and configuring Red Hat Decision Manager on Red Hat JBoss EAP 7.4 . RH-SSO is installed as described in Chapter 16, Installing and configuring RH-SSO . At least one user with the kie-server role has been added to RH-SSO as described in Section 17.1, "Adding Red Hat Decision Manager users" . Note If you deployed KIE Server to a different application server than Business Central, install and configure RH-SSO on your second server as well. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required) and then select the product and version from the drop-down options: Product: Red Hat Single Sign-On Version: 7.5 Download Red Hat Single Sign-On 7.5 Client Adapter for JBoss EAP 7 ( rh-sso-7.5.0-eap7-adapter.zip or the latest version). Extract and install the adapter zip file. For installation instructions, see the "JBoss EAP Adapter" section of the Red Hat Single Sign On Securing Applications and Services Guide . Go to EAP_HOME /standalone/configuration and open the standalone-full.xml file. Delete the <single-sign-on/> element from both of the files. Navigate to EAP_HOME /standalone/configuration directory in your Red Hat JBoss EAP installation and edit the standalone-full.xml file to add the RH-SSO subsystem configuration. For example: Navigate to EAP_HOME /standalone/configuration in your Red Hat JBoss EAP installation and edit the standalone-full.xml file to add the RH-SSO subsystem configuration. For example: <subsystem xmlns="urn:jboss:domain:keycloak:1.1"> <secure-deployment name="kie-server.war"> <realm>demo</realm> <realm-public-key>MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrVrCuTtArbgaZzL1hvh0xtL5mc7o0NqPVnYXkLvgcwiC3BjLGw1tGEGoJaXDuSaRllobm53JBhjx33UNv+5z/UMG4kytBWxheNVKnL6GgqlNabMaFfPLPCF8kAgKnsi79NMo+n6KnSY8YeUmec/p2vjO2NjsSAVcWEQMVhJ31LwIDAQAB</realm-public-key> <auth-server-url>http://localhost:8180/auth</auth-server-url> <ssl-required>external</ssl-required> <resource>kie-execution-server</resource> <enable-basic-auth>true</enable-basic-auth> <credential name="secret">03c2b267-7f64-4647-8566-572be673f5fa</credential> <principal-attribute>preferred_username</principal-attribute> </secure-deployment> </subsystem> <system-properties> <property name="org.kie.server.sync.deploy" value="false"/> </system-properties> In this example: secure-deployment name is the name of your application WAR file. realm is the name of the realm that you created for the applications to use. realm-public-key is the public key of the realm you created. You can find the key in the Keys tab in the Realm settings page of the realm you created in the RH-SSO Admin Console. If you do not provide a value for this public key, the server retrieves it automatically. auth-server-url is the URL for the RH-SSO authentication server. resource is the name for the server client that you created. enable-basic-auth is the setting to enable basic authentication mechanism, so that the clients can use both token-based and basic authentication approaches to perform the requests. credential name is the secret key of the server client you created. You can find the key in the Credentials tab on the Clients page of the RH-SSO Admin Console. principal-attribute is the attribute for displaying the user name in the application. If you do not provide this value, your User Id is displayed in the application instead of your user name. Save your configuration changes. Use the following command to restart the Red Hat JBoss EAP server and run KIE Server. For example: When KIE Server is running, enter the following command to check the server status, where <KIE_SERVER_USER> is a user with the kie-server role and <PASSWORD> is the password for that user: 19.3. KIE Server token-based authentication You can also use token-based authentication for communication between Red Hat Decision Manager and KIE Server. You can use the complete token as a system property of your application server, instead of the user name and password, for your applications. However, you must ensure that the token does not expire while the applications are interacting because the token is not automatically refreshed. To get the token, see Section 20.2, "Token-based authentication" . Procedure To configure Business Central to manage KIE Server using tokens: Set the org.kie.server.token property. Make sure that the org.kie.server.user and org.kie.server.pwd properties are not set. Red Hat Decision Manager will then use the Authorization: Bearer USDTOKEN authentication method. To use the REST API using the token-based authentication: Set the org.kie.server.controller.token property. Make sure that the org.kie.server.controller.user and org.kie.server.controller.pwd properties are not set. Note Because KIE Server is unable to refresh the token, use a high-lifespan token. A token's lifespan must not exceed January 19, 2038. Check with your security best practices to see whether this is a suitable solution for your environment. | [
"http://localhost:8080/kie-server/*",
"<subsystem xmlns=\"urn:jboss:domain:keycloak:1.1\"> <secure-deployment name=\"kie-server.war\"> <realm>demo</realm> <realm-public-key>MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrVrCuTtArbgaZzL1hvh0xtL5mc7o0NqPVnYXkLvgcwiC3BjLGw1tGEGoJaXDuSaRllobm53JBhjx33UNv+5z/UMG4kytBWxheNVKnL6GgqlNabMaFfPLPCF8kAgKnsi79NMo+n6KnSY8YeUmec/p2vjO2NjsSAVcWEQMVhJ31LwIDAQAB</realm-public-key> <auth-server-url>http://localhost:8180/auth</auth-server-url> <ssl-required>external</ssl-required> <resource>kie-execution-server</resource> <enable-basic-auth>true</enable-basic-auth> <credential name=\"secret\">03c2b267-7f64-4647-8566-572be673f5fa</credential> <principal-attribute>preferred_username</principal-attribute> </secure-deployment> </subsystem> <system-properties> <property name=\"org.kie.server.sync.deploy\" value=\"false\"/> </system-properties>",
"EXEC_SERVER_HOME/bin/standalone.sh -c standalone-full.xml -Dorg.kie.server.id=<ID> -Dorg.kie.server.user=<USER> -Dorg.kie.server.pwd=<PWD> -Dorg.kie.server.location=<LOCATION_URL> -Dorg.kie.server.controller=<CONTROLLER_URL> -Dorg.kie.server.controller.user=<CONTROLLER_USER> -Dorg.kie.server.controller.pwd=<CONTOLLER_PASSWORD>",
"EXEC_SERVER_HOME/bin/standalone.sh -c standalone-full.xml -Dorg.kie.server.id=kieserver1 -Dorg.kie.server.user=kieserver -Dorg.kie.server.pwd=password -Dorg.kie.server.location=http://localhost:8080/kie-server/services/rest/server -Dorg.kie.server.controller=http://localhost:8080/business-central/rest/controller -Dorg.kie.server.controller.user=kiecontroller -Dorg.kie.server.controller.pwd=password",
"curl http://<KIE_SERVER_USER>:<PASSWORD>@localhost:8080/kie-server/services/rest/server/"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/integrating_red_hat_decision_manager_with_other_products_and_components/sso-kie-server-con_integrate-sso |
Chapter 1. OpenShift Container Platform storage overview | Chapter 1. OpenShift Container Platform storage overview OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. 1.1. Glossary of common terms for OpenShift Container Platform storage This glossary defines common terms that are used in the storage content. Access modes Volume access modes describe volume capabilities. You can use access modes to match persistent volume claim (PVC) and persistent volume (PV). The following are the examples of access modes: ReadWriteOnce (RWO) ReadOnlyMany (ROX) ReadWriteMany (RWX) ReadWriteOncePod (RWOP) Cinder The Block Storage service for Red Hat OpenStack Platform (RHOSP) which manages the administration, security, and scheduling of all volumes. Config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. Container Storage Interface (CSI) An API specification for the management of container storage across different container orchestration (CO) systems. Dynamic Provisioning The framework allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision persistent storage. Ephemeral storage Pods and containers can require temporary or transient local storage for their operation. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods. Fiber channel A networking technology that is used to transfer data among data centers, computer servers, switches and storage. FlexVolume FlexVolume is an out-of-tree plugin interface that uses an exec-based model to interface with storage drivers. You must install the FlexVolume driver binaries in a pre-defined volume plugin path on each node and in some cases the control plane nodes. fsGroup The fsGroup defines a file system group ID of a pod. iSCSI Internet Small Computer Systems Interface (iSCSI) is an Internet Protocol-based storage networking standard for linking data storage facilities. An iSCSI volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod. hostPath A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node's filesystem into your pod. KMS key The Key Management Service (KMS) helps you achieve the required level of encryption of your data across different services. you can use the KMS key to encrypt, decrypt, and re-encrypt data. Local volumes A local volume represents a mounted local storage device such as a disk, partition or directory. NFS A Network File System (NFS) that allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network. OpenShift Data Foundation A provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds Persistent storage Pods and containers can require permanent storage for their operation. OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volumes (PV) OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volume claims (PVCs) You can use a PVC to mount a PersistentVolume into a Pod. You can access the storage without knowing the details of the cloud environment. Pod One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. Reclaim policy A policy that tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be Retain , Recycle , or Delete . Role-based access control (RBAC) Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. Stateless applications A stateless application is an application program that does not save client data generated in one session for use in the session with that client. Stateful applications A stateful application is an application program that saves data to persistent disk storage. A server, client, and applications can use a persistent disk storage. You can use the Statefulset object in OpenShift Container Platform to manage the deployment and scaling of a set of Pods, and provides guarantee about the ordering and uniqueness of these Pods. Static provisioning A cluster administrator creates a number of PVs. PVs contain the details of storage. PVs exist in the Kubernetes API and are available for consumption. Storage OpenShift Container Platform supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. Storage class A storage class provides a way for administrators to describe the classes of storage they offer. Different classes might map to quality of service levels, backup policies, arbitrary policies determined by the cluster administrators. VMware vSphere's Virtual Machine Disk (VMDK) volumes Virtual Machine Disk (VMDK) is a file format that describes containers for virtual hard disk drives that is used in virtual machines. 1.2. Storage types OpenShift Container Platform storage is broadly classified into two categories, namely ephemeral storage and persistent storage. 1.2.1. Ephemeral storage Pods and containers are ephemeral or transient in nature and designed for stateless applications. Ephemeral storage allows administrators and developers to better manage the local storage for some of their operations. For more information about ephemeral storage overview, types, and management, see Understanding ephemeral storage . 1.2.2. Persistent storage Stateful applications deployed in containers require persistent storage. OpenShift Container Platform uses a pre-provisioned storage framework called persistent volumes (PV) to allow cluster administrators to provision persistent storage. The data inside these volumes can exist beyond the lifecycle of an individual pod. Developers can use persistent volume claims (PVCs) to request storage requirements. For more information about persistent storage overview, configuration, and lifecycle, see Understanding persistent storage . 1.3. Container Storage Interface (CSI) CSI is an API specification for the management of container storage across different container orchestration (CO) systems. You can manage the storage volumes within the container native environments, without having specific knowledge of the underlying storage infrastructure. With the CSI, storage works uniformly across different container orchestration systems, regardless of the storage vendors you are using. For more information about CSI, see Using Container Storage Interface (CSI) . 1.4. Dynamic Provisioning Dynamic Provisioning allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage. For more information about dynamic provisioning, see Dynamic provisioning . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/storage/storage-overview |
30.3. Getting Started with VDO | 30.3. Getting Started with VDO 30.3.1. Introduction Virtual Data Optimizer (VDO) provides inline data reduction for Linux in the form of deduplication, compression, and thin provisioning. When you set up a VDO volume, you specify a block device on which to construct your VDO volume and the amount of logical storage you plan to present. When hosting active VMs or containers, Red Hat recommends provisioning storage at a 10:1 logical to physical ratio: that is, if you are utilizing 1 TB of physical storage, you would present it as 10 TB of logical storage. For object storage, such as the type provided by Ceph, Red Hat recommends using a 3:1 logical to physical ratio: that is, 1 TB of physical storage would present as 3 TB logical storage. In either case, you can simply put a file system on top of the logical device presented by VDO and then use it directly or as part of a distributed cloud storage architecture. This chapter describes the following use cases of VDO deployment: the direct-attached use case for virtualization servers, such as those built using Red Hat Virtualization, and the cloud storage use case for object-based distributed storage clusters, such as those built using Ceph Storage. Note VDO deployment with Ceph is currently not supported. This chapter provides examples for configuring VDO for use with a standard Linux file system that can be easily deployed for either use case; see the diagrams in Section 30.3.5, "Deployment Examples" . 30.3.2. Installing VDO VDO is deployed using the following RPM packages: vdo kmod-kvdo To install VDO, use the yum package manager to install the RPM packages: 30.3.3. Creating a VDO Volume Create a VDO volume for your block device. Note that multiple VDO volumes can be created for separate devices on the same machine. If you choose this approach, you must supply a different name and device for each instance of VDO on the system. Important Use expandable storage as the backing block device. For more information, see Section 30.2, "System Requirements" . In all the following steps, replace vdo_name with the identifier you want to use for your VDO volume; for example, vdo1 . Create the VDO volume using the VDO Manager: Replace block_device with the persistent name of the block device where you want to create the VDO volume. For example, /dev/disk/by-id/scsi-3600508b1001c264ad2af21e903ad031f . Important Use a persistent device name. If you use a non-persistent device name, then VDO might fail to start properly in the future if the device name changes. For more information on persistent names, see Section 25.8, "Persistent Naming" . Replace logical_size with the amount of logical storage that the VDO volume should present: For active VMs or container storage, use logical size that is ten times the physical size of your block device. For example, if your block device is 1 TB in size, use 10T here. For object storage, use logical size that is three times the physical size of your block device. For example, if your block device is 1 TB in size, use 3T here. If the block device is larger than 16 TiB, add the --vdoSlabSize=32G to increase the slab size on the volume to 32 GiB. Using the default slab size of 2 GiB on block devices larger than 16 TiB results in the vdo create command failing with the following error: For more information, see Section 30.1.3, "VDO Volume" . Example 30.1. Creating VDO for Container Storage For example, to create a VDO volume for container storage on a 1 TB block device, you might use: When a VDO volume is created, VDO adds an entry to the /etc/vdoconf.yml configuration file. The vdo.service systemd unit then uses the entry to start the volume by default. Important If a failure occurs when creating the VDO volume, remove the volume to clean up. See Section 30.4.3.1, "Removing an Unsuccessfully Created Volume" for details. Create a file system: For the XFS file system: For the ext4 file system: Mount the file system: To configure the file system to mount automatically, use either the /etc/fstab file or a systemd mount unit: If you decide to use the /etc/fstab configuration file, add one of the following lines to the file: For the XFS file system: For the ext4 file system: Alternatively, if you decide to use a systemd unit, create a systemd mount unit file with the appropriate filename. For the mount point of your VDO volume, create the /etc/systemd/system/mnt- vdo_name .mount file with the following content: An example systemd unit file is also installed at /usr/share/doc/vdo/examples/systemd/VDO.mount.example . Enable the discard feature for the file system on your VDO device. Both batch and online operations work with VDO. For information on how to set up the discard feature, see Section 2.4, "Discard Unused Blocks" . 30.3.4. Monitoring VDO Because VDO is thin provisioned, the file system and applications will only see the logical space in use and will not be aware of the actual physical space available. VDO space usage and efficiency can be monitored using the vdostats utility: When the physical storage capacity of a VDO volume is almost full, VDO reports a warning in the system log, similar to the following: Important Monitor physical space on your VDO volumes to prevent out-of-space situations. Running out of physical blocks might result in losing recently written, unacknowledged data on the VDO volume. 30.3.5. Deployment Examples The following examples illustrate how VDO might be used in KVM and other deployments. VDO Deployment with KVM To see how VDO can be deployed successfully on a KVM server configured with Direct Attached Storage, see Figure 30.2, "VDO Deployment with KVM" . Figure 30.2. VDO Deployment with KVM More Deployment Scenarios For more information on VDO deployment, see Section 30.5, "Deployment Scenarios" . | [
"yum install vdo kmod-kvdo",
"vdo create --name= vdo_name --device= block_device --vdoLogicalSize= logical_size [ --vdoSlabSize= slab_size ]",
"vdo: ERROR - vdoformat: formatVDO failed on '/dev/ device ': VDO Status: Exceeds maximum number of slabs supported",
"vdo create --name=vdo1 --device=/dev/disk/by-id/scsi-3600508b1001c264ad2af21e903ad031f --vdoLogicalSize=10T",
"mkfs.xfs -K /dev/mapper/ vdo_name",
"mkfs.ext4 -E nodiscard /dev/mapper/ vdo_name",
"mkdir -m 1777 /mnt/ vdo_name # mount /dev/mapper/ vdo_name /mnt/ vdo_name",
"/dev/mapper/ vdo_name /mnt/ vdo_name xfs defaults,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0",
"/dev/mapper/ vdo_name /mnt/ vdo_name ext4 defaults,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0",
"[Unit] Description = VDO unit file to mount file system name = vdo_name .mount Requires = vdo.service After = multi-user.target Conflicts = umount.target [Mount] What = /dev/mapper/ vdo_name Where = /mnt/ vdo_name Type = xfs [Install] WantedBy = multi-user.target",
"vdostats --human-readable Device 1K-blocks Used Available Use% Space saving% /dev/mapper/node1osd1 926.5G 21.0G 905.5G 2% 73% /dev/mapper/node1osd2 926.5G 28.2G 898.3G 3% 64%",
"Oct 2 17:13:39 system lvm[13863]: Monitoring VDO pool vdo_name. Oct 2 17:27:39 system lvm[13863]: WARNING: VDO pool vdo_name is now 80.69% full. Oct 2 17:28:19 system lvm[13863]: WARNING: VDO pool vdo_name is now 85.25% full. Oct 2 17:29:39 system lvm[13863]: WARNING: VDO pool vdo_name is now 90.64% full. Oct 2 17:30:29 system lvm[13863]: WARNING: VDO pool vdo_name is now 96.07% full."
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/vdo-quick-start |
Chapter 20. Managing self-service rules using the IdM Web UI | Chapter 20. Managing self-service rules using the IdM Web UI Learn about self-service rules in Identity Management (IdM) and how to create and edit self-service access rules in the web interface (IdM Web UI). 20.1. Self-service access control in IdM Self-service access control rules define which operations an Identity Management (IdM) entity can perform on its IdM Directory Server entry: for example, IdM users have the ability to update their own passwords. This method of control allows an authenticated IdM entity to edit specific attributes within its LDAP entry, but does not allow add or delete operations on the entire entry. Warning Be careful when working with self-service access control rules: configuring access control rules improperly can inadvertently elevate an entity's privileges. 20.2. Creating self-service rules using the IdM Web UI Follow this procedure to create self-service access rules in IdM using the web interface (IdM Web UI). Prerequisites Administrator privileges for managing IdM or the User Administrator role. You are logged-in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Procedure Open the Role-Based Access Control submenu in the IPA Server tab and select Self Service Permissions . Click Add at the upper-right of the list of the self-service access rules: The Add Self Service Permission window opens. Enter the name of the new self-service rule in the Self-service name field. Spaces are allowed: Select the check boxes to the attributes you want users to be able to edit. Optional: If an attribute you want to provide access to is not listed, you can add a listing for it: Click the Add button. Enter the attribute name in the Attribute text field of the following Add Custom Attribute window. Click the OK button to add the attribute Verify that the new attribute is selected Click the Add button at the bottom of the form to save the new self-service rule. Alternatively, you can save and continue editing the self-service rule by clicking the Add and Edit button, or save and add further rules by clicking the Add and Add another button. 20.3. Editing self-service rules using the IdM Web UI Follow this procedure to edit self-service access rules in IdM using the web interface (IdM Web UI). Prerequisites Administrator privileges for managing IdM or the User Administrator role. You are logged-in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Procedure Open the Role-Based Access Control submenu in the IPA Server tab and select Self Service Permissions . Click on the name of the self-service rule you want to modify. The edit page only allows you to edit the list of attributes to you want to add or remove to the self-service rule. Select or deselect the appropriate check boxes. Click the Save button to save your changes to the self-service rule. 20.4. Deleting self-service rules using the IdM Web UI Follow this procedure to delete self-service access rules in IdM using the web interface (IdM Web UI). Prerequisites Administrator privileges for managing IdM or the User Administrator role. You are logged-in to the IdM Web UI. For details, see Accessing the IdM Web UI in a web browser . Procedure Open the Role-Based Access Control submenu in the IPA Server tab and select Self Service Permissions . Select the check box to the rule you want to delete, then click on the Delete button on the right of the list. A dialog opens, click on Delete to confirm. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-self-service-rules-in-idm-using-the-idm-web-ui_managing-users-groups-hosts |
Chapter 4. Technology preview | Chapter 4. Technology preview This section describes Technology Preview features in AMQ Broker 7.9. Important Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them for production. For more information, see Red Hat Technology Preview Features Support Scope . Quorum voting improvements In versions of AMQ Broker you needed to configure at least three live-backup pairs to use quorum voting to avoid having two live brokers when using replication high availability (HA) policy. Starting in 7.9, you can configure failover to use Apache Curator and Apache ZooKeeper to provide quorum voting using two brokers. For information about using this feature, see High Availability and Failover in the Apache ActiveMQ Artemis documentation. Client connection balancing improvements In releases, there was no method to balance client connections server-side. Starting in 7.9, you can specify pools of brokers and policies for balancing client connections. For example, you can specify a LEAST_CONNECTIONS policy that ensures that clients are redirected to brokers with the fewest active connections. For information about using this feature, see Broker Balancers in the Apache ActiveMQ Artemis documentation. Viewing brokers in Fuse Console You can configure an Operator-based broker deployment to use Fuse Console for OpenShift instead of AMQ Management Console. When you have configured your broker deployment appropriately, Fuse Console discovers the brokers and displays them on a dedicated Artemis tab. For more information, see Viewing brokers in Fuse Console in Deploying AMQ Broker on OpenShift . Note Viewing brokers in Fuse Console is a Technology Preview feature for Fuse 7.8 . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/release_notes_for_red_hat_amq_broker_7.9/tech_preview |
Installing Satellite Server in a Disconnected Network Environment | Installing Satellite Server in a Disconnected Network Environment Red Hat Satellite 6.11 Install Red Hat Satellite Server that is deployed inside a network without an Internet connection Red Hat Satellite Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_satellite_server_in_a_disconnected_network_environment/index |
Installation overview | Installation overview OpenShift Container Platform 4.14 Overview content for installing OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installation_overview/index |
Preface | Preface These release notes contain important information related to Red Hat JBoss Enterprise Application Platform 8.0. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/release_notes_for_red_hat_jboss_enterprise_application_platform_8.0/pr01 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.