title | content | commands | url
---|---|---|---|
Chapter 4. Install Pacemaker | Chapter 4. Install Pacemaker Please refer to the following documentation to first set up a Pacemaker cluster. Reference Document for the High Availability Add-On for Red Hat Enterprise Linux 7 Configuring and Managing High Availability Clusters on RHEL 8 Please make sure to follow the guidelines in Support Policies for RHEL High Availability Clusters - General Requirements for Fencing/STONITH for the fencing/STONITH setup. Information about the fencing/STONITH agents supported for different platforms is available at Cluster Platforms and Architectures . This guide assumes that the following are working properly: The Pacemaker cluster is configured according to the documentation and has proper, working fencing Enqueue replication between the (A)SCS and ERS instances has been manually tested as explained in Setting up Enqueue Replication Server fail over The nodes are subscribed to the required channels as explained in RHEL for SAP Repositories and How to Enable Them 4.1. Configure general cluster properties To avoid unnecessary failovers of the resources during initial testing and in production, set the following default values for the resource-stickiness and migration-threshold parameters. Note that defaults do not apply to resources which override them with their own defined values. [root]# pcs resource defaults resource-stickiness=1 [root]# pcs resource defaults migration-threshold=3 Warning : As of RHEL 8.4 (pcs-0.10.8-1.el8), the commands above are deprecated. Use the commands below: [root]# pcs resource defaults update resource-stickiness=1 [root]# pcs resource defaults update migration-threshold=3 Notes : 1. It is sufficient to run the commands above on one node of the cluster. 2. Setting resource-stickiness=1 encourages the resource to stay running where it is, while migration-threshold=3 causes the resource to move to a new node after 3 failures; 3 is generally sufficient to prevent the resource from prematurely failing over to another node. This also ensures that the resource failover time stays within a controllable limit. 4.2. Install resource-agents-sap on all cluster nodes [root]# yum install resource-agents-sap 4.3. Configure cluster resources for shared filesystems Configure a shared filesystem to provide the following mount points on all the cluster nodes. /sapmnt /usr/sap/trans /usr/sap/S4H/SYS 4.3.1. Configure shared filesystems managed by the cluster A cloned Filesystem cluster resource can be used to mount the shares from an external NFS server on all cluster nodes as shown below. [root]# pcs resource create s4h_fs_sapmnt Filesystem \ device='<NFS_Server>:<sapmnt_nfs_share>' directory='/sapmnt' \ fstype='nfs' --clone interleave=true [root]# pcs resource create s4h_fs_sap_trans Filesystem \ device='<NFS_Server>:<sap_trans_nfs_share>' directory='/usr/sap/trans' \ fstype='nfs' --clone interleave=true [root]# pcs resource create s4h_fs_sap_sys Filesystem \ device='<NFS_Server>:<s4h_sys_nfs_share>' directory='/usr/sap/S4H/SYS' \ fstype='nfs' --clone interleave=true After creating the Filesystem resources, verify that they have started properly on all nodes. [root]# pcs status ... Clone Set: s4h_fs_sapmnt-clone [s4h_fs_sapmnt] Started: [ s4node1 s4node2 ] Clone Set: s4h_fs_sap_trans-clone [s4h_fs_sap_trans] Started: [ s4node1 s4node2 ] Clone Set: s4h_fs_sys-clone [s4h_fs_sys] Started: [ s4node1 s4node2 ] ... 4.3.2.
Configure shared filesystems managed outside of the cluster If the shared filesystems are NOT managed by the cluster, it must be ensured that they are available before the pacemaker service is started. In RHEL 7, due to systemd parallelization, you must ensure that the shared filesystems are started in the resource-agents-deps target. More details can be found in documentation section 9.6. Configuring Startup Order for Resource Dependencies not Managed by Pacemaker (Red Hat Enterprise Linux 7.4 and later) . 4.4. Configure ASCS resource group 4.4.1. Create resource for virtual IP address [root]# pcs resource create s4h_vip_ascs20 IPaddr2 ip=192.168.200.201 \ --group s4h_ASCS20_group 4.4.2. Create resource for ASCS filesystem Below is an example of creating a resource for an NFS filesystem: [root]# pcs resource create s4h_fs_ascs20 Filesystem \ device='<NFS_Server>:<s4h_ascs20_nfs_share>' \ directory=/usr/sap/S4H/ASCS20 fstype=nfs force_unmount=safe \ --group s4h_ASCS20_group op start interval=0 timeout=60 \ op stop interval=0 timeout=120 \ op monitor interval=200 timeout=40 Below is an example of creating resources for an HA-LVM filesystem: [root]# pcs resource create s4h_fs_ascs20_lvm LVM \ volgrpname='<ascs_volume_group>' exclusive=true \ --group s4h_ASCS20_group [root]# pcs resource create s4h_fs_ascs20 Filesystem \ device='/dev/mapper/<ascs_logical_volume>' \ directory=/usr/sap/S4H/ASCS20 fstype=ext4 \ --group s4h_ASCS20_group 4.4.3. Create resource for ASCS instance [root]# pcs resource create s4h_ascs20 SAPInstance \ InstanceName="S4H_ASCS20_s4ascs" \ START_PROFILE=/sapmnt/S4H/profile/S4H_ASCS20_s4ascs \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 \ --group s4h_ASCS20_group \ op monitor interval=20 on-fail=restart timeout=60 \ op start interval=0 timeout=600 \ op stop interval=0 timeout=600 Note : meta resource-stickiness=5000 is set to balance out the failover constraint with ERS, so the resource stays on the node where it started and does not migrate around the cluster uncontrollably. Add a resource stickiness to the group to ensure that the ASCS will stay on a node if possible: [root]# pcs resource meta s4h_ASCS20_group resource-stickiness=3000 4.5. Configure ERS resource group 4.5.1. Create resource for virtual IP address [root]# pcs resource create s4h_vip_ers29 IPaddr2 ip=192.168.200.202 \ --group s4h_ERS29_group 4.5.2. Create resource for ERS filesystem Below is an example of creating a resource for an NFS filesystem: [root]# pcs resource create s4h_fs_ers29 Filesystem \ device='<NFS_Server>:<s4h_ers29_nfs_share>' \ directory=/usr/sap/S4H/ERS29 fstype=nfs force_unmount=safe \ --group s4h_ERS29_group op start interval=0 timeout=60 \ op stop interval=0 timeout=120 op monitor interval=200 timeout=40 Below is an example of creating resources for an HA-LVM filesystem: [root]# pcs resource create s4h_fs_ers29_lvm LVM \ volgrpname='<ers_volume_group>' exclusive=true --group s4h_ERS29_group [root]# pcs resource create s4h_fs_ers29 Filesystem \ device='/dev/mapper/<ers_logical_volume>' directory=/usr/sap/S4H/ERS29 \ fstype=ext4 --group s4h_ERS29_group 4.5.3. Create resource for ERS instance Create an ERS instance cluster resource. Note : In ENSA2 deployments the IS_ERS attribute is optional. To learn more about IS_ERS , additional information can be found in How does the IS_ERS attribute work on a SAP NetWeaver cluster with Standalone Enqueue Server (ENSA1 and ENSA2)? .
[root]# pcs resource create s4h_ers29 SAPInstance \ InstanceName="S4H_ERS29_s4ers" \ START_PROFILE=/sapmnt/S4H/profile/S4H_ERS29_s4ers \ AUTOMATIC_RECOVER=false \ --group s4h_ERS29_group \ op monitor interval=20 on-fail=restart timeout=60 \ op start interval=0 timeout=600 \ op stop interval=0 timeout=600 4.6. Create constraints 4.6.1. Create colocation constraint for ASCS and ERS resource groups Resource groups s4h_ASCS20_group and s4h_ERS29_group should try to avoid running on the same node. The order of the groups matters. [root]# pcs constraint colocation add s4h_ERS29_group with s4h_ASCS20_group \ -5000 4.6.2. Create location constraint for ASCS resource The ASCS20 instance s4h_ascs20 prefers to run on the node where ERS was running before the failover. [root]# pcs constraint location s4h_ascs20 rule score=2000 runs_ers_S4H eq 1 4.6.3. Create order constraint for ASCS and ERS resource groups Prefer to start s4h_ASCS20_group before s4h_ERS29_group: [root]# pcs constraint order start s4h_ASCS20_group then start \ s4h_ERS29_group symmetrical=false kind=Optional [root]# pcs constraint order start s4h_ASCS20_group then stop \ s4h_ERS29_group symmetrical=false kind=Optional 4.6.4. Create order constraint for /sapmnt resource managed by cluster If the shared filesystem /sapmnt is managed by the cluster, then the following constraints ensure that resource groups with ASCS and ERS SAPInstance resources are started only once the filesystem is available. [root]# pcs constraint order s4h_fs_sapmnt-clone then s4h_ASCS20_group [root]# pcs constraint order s4h_fs_sapmnt-clone then s4h_ERS29_group | [
"pcs resource defaults resource-stickiness=1 pcs resource defaults migration-threshold=3",
"pcs resource defaults update resource-stickiness=1 pcs resource defaults update migration-threshold=3",
"yum install resource-agents-sap",
"pcs resource create s4h_fs_sapmnt Filesystem device='<NFS_Server>:<sapmnt_nfs_share>' directory='/sapmnt' fstype='nfs' --clone interleave=true pcs resource create s4h_fs_sap_trans Filesystem device='<NFS_Server>:<sap_trans_nfs_share>' directory='/usr/sap/trans' fstype='nfs' --clone interleave=true pcs resource create s4h_fs_sap_sys Filesystem device='<NFS_Server>:<s4h_sys_nfs_share>' directory='/usr/sap/S4H/SYS' fstype='nfs' --clone interleave=true",
"pcs status Clone Set: s4h_fs_sapmnt-clone [s4h_fs_sapmnt] Started: [ s4node1 s4node2 ] Clone Set: s4h_fs_sap_trans-clone [s4h_fs_sap_trans] Started: [ s4node1 s4node2 ] Clone Set: s4h_fs_sys-clone [s4h_fs_sys] Started: [ s4node1 s4node2 ]",
"pcs resource create s4h_vip_ascs20 IPaddr2 ip=192.168.200.201 --group s4h_ASCS20_group",
"pcs resource create s4h_fs_ascs20 Filesystem device='<NFS_Server>:<s4h_ascs20_nfs_share>' directory=/usr/sap/S4H/ASCS20 fstype=nfs force_unmount=safe --group s4h_ASCS20_group op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40",
"pcs resource create s4h_fs_ascs20_lvm LVM volgrpname='<ascs_volume_group>' exclusive=true --group s4h_ASCS20_group pcs resource create s4h_fs_ascs20 Filesystem device='/dev/mapper/<ascs_logical_volume>' directory=/usr/sap/S4H/ASCS20 fstype=ext4 --group s4h_ASCS20_group",
"pcs resource create s4h_ascs20 SAPInstance InstanceName=\"S4H_ASCS20_s4ascs\" START_PROFILE=/sapmnt/S4H/profile/S4H_ASCS20_s4ascs AUTOMATIC_RECOVER=false meta resource-stickiness=5000 --group s4h_ASCS20_group op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600",
"pcs resource meta s4h_ASCS20_group resource-stickiness=3000",
"pcs resource create s4h_vip_ers29 IPaddr2 ip=192.168.200.202 --group s4h_ERS29_group",
"pcs resource create s4h_fs_ers29 Filesystem device='<NFS_Server>:<s4h_ers29_nfs_share>' directory=/usr/sap/S4H/ERS29 fstype=nfs force_unmount=safe --group s4h_ERS29_group op start interval=0 timeout=60 op stop interval=0 timeout=120 op monitor interval=200 timeout=40",
"pcs resource create s4h_fs_ers29_lvm LVM volgrpname='<ers_volume_group>' exclusive=true --group s4h_ERS29_group pcs resource create s4h_fs_ers29 Filesystem device='/dev/mapper/<ers_logical_volume>' directory=/usr/sap/S4H/ERS29 fstype=ext4 --group s4h_ERS29_group",
"pcs resource create s4h_ers29 SAPInstance InstanceName=\"S4H_ERS29_s4ers\" START_PROFILE=/sapmnt/S4H/profile/S4H_ERS29_s4ers AUTOMATIC_RECOVER=false --group s4h_ERS29_group op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600",
"pcs constraint colocation add s4h_ERS29_group with s4h_ASCS20_group -5000",
"pcs constraint location rh2_ascs20 rule score=2000 runs_ers_RH2 eq 1",
"pcs constraint order start s4h_ASCS20_group then start s4h_ERS29_group symmetrical=false kind=Optional pcs constraint order start s4h_ASCS20_group then stop s4h_ERS29_group symmetrical=false kind=Optional",
"pcs constraint order s4h_fs_sapmnt-clone then s4h_ASCS20_group pcs constraint order s4h_fs_sapmnt-clone then s4h_ERS29_group"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/configuring_a_cost-optimized_sap_s4hana_ha_cluster_hana_system_replication_ensa2_using_the_rhel_ha_add-on/asmb_cco_install_pacemaker_configuring-cost-optimized-sap |
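As a follow-up to the chapter above, the following sketch shows one way to verify the finished configuration and to exercise a controlled failover. It is a minimal outline only: it reuses the resource and node names from the examples (s4h_ascs20, s4h_ASCS20_group, s4node1, s4node2), assumes pcs 0.10 on RHEL 8, and should only be run on a test cluster; adjust the names for your environment.

[root]# pcs status --full               # overall cluster state, including node attributes such as runs_ers_S4H
[root]# pcs constraint                  # review the colocation, location, and order constraints created above
[root]# pcs resource config s4h_ascs20  # inspect the ASCS SAPInstance resource definition
[root]# pcs node standby s4node1        # put one node into standby (assuming it currently runs ASCS) to simulate an outage
[root]# pcs status                      # confirm that s4h_ASCS20_group has moved to the other node
[root]# pcs node unstandby s4node1      # bring the node back into the cluster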
Chapter 4. Technology previews | Chapter 4. Technology previews This part provides a list of all Technology Previews available in Red Hat Satellite 6.15. For information on the Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope . OpenShift Virtualization plugin You can provision virtual machines by using the OpenShift Virtualization plugin. Jira:SAT-18663 OVAL / CVE Reporting Support Satellite now includes the ability to scan systems for vulnerabilities using the OVAL standard data feed provided by Red Hat. foreman_openscap contains the API to upload the OVAL content used to trigger the OVAL oscap scans. The results are parsed for CVEs and sent to Satellite, which then generates reports of managed hosts and the CVEs that affect them. Note that this feature will not be available in future releases. Jira:SAT-21011 Kernel execution template You can use the kernel execution (kexec) provisioning template for PXE-less boot methods. Jira:SAT-21012 | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/release_notes/technology-previews |
roxctl CLI | roxctl CLI Red Hat Advanced Cluster Security for Kubernetes 4.5 roxctl CLI Red Hat OpenShift Documentation Team | [
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctlUSD{arch}\"",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"",
"curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Darwin/roxctlUSD{arch}\"",
"xattr -c roxctl",
"chmod +x roxctl",
"echo USDPATH",
"roxctl version",
"curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Windows/roxctl.exe",
"roxctl version",
"docker login registry.redhat.io",
"docker pull registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.5.6",
"docker run -e ROX_API_TOKEN=USDROX_API_TOKEN -it registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.5.6 -e USDROX_CENTRAL_ADDRESS <command>",
"docker run -it registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.5.6 version",
"export ROX_ENDPOINT= <host:port> 1",
"roxctl central whoami",
"UserID: <redacted> User name: <redacted> Roles: APIToken creator, Admin, Analyst, Continuous Integration, Network Graph Viewer, None, Sensor Creator, Vulnerability Management Approver, Vulnerability Management Requester, Vulnerability Manager, Vulnerability Report Creator Access: rw Access rw Administration rw Alert rw CVE rw Cluster rw Compliance rw Deployment rw DeploymentExtension rw Detection rw Image rw Integration rw K8sRole rw K8sRoleBinding rw K8sSubject rw Namespace rw NetworkGraph rw NetworkPolicy rw Node rw Secret rw ServiceAccount rw VulnerabilityManagementApprovals rw VulnerabilityManagementRequests rw WatchedImage rw WorkflowAdministration",
"export ROX_API_TOKEN=<api_token>",
"roxctl central debug dump --token-file <token_file>",
"export ROX_ENDPOINT= <central_hostname:port>",
"roxctl central login",
"Please complete the authorization flow in the browser with an auth provider of your choice. If no browser window opens, please click on the following URL: http://127.0.0.1:xxxxx/login INFO: Received the following after the authorization flow from Central: INFO: Access token: <redacted> 1 INFO: Access token expiration: 2023-04-19 13:58:43 +0000 UTC 2 INFO: Refresh token: <redacted> 3 INFO: Storing these values under USDHOME/.roxctl/login... 4",
"export ROX_API_TOKEN=<api_token>",
"export ROX_ENDPOINT=<address>:<port_number>",
"export ROX_ENDPOINT= <host:port> 1",
"roxctl sensor generate k8s --name <cluster_name> --central \"USDROX_ENDPOINT\"",
"roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central \"USDROX_ENDPOINT\" 1",
"roxctl sensor generate k8s --central wss://stackrox-central.example.com:443",
"./sensor- <cluster_name> /sensor.sh",
"roxctl sensor get-bundle <cluster_name_or_id>",
"roxctl cluster delete --name= <cluster_name>",
"export ROX_ENDPOINT= <host:port> 1",
"roxctl deployment check --file = <yaml_filename> -o csv",
"roxctl deployment check --file= <yaml_filename> -o table --headers POLICY-NAME,SEVERITY --row-jsonpath-expressions=\"{results. .violatedPolicies. .name,results. .violatedPolicies. .severity}\"",
"roxctl deployment check --file=<yaml_filename> \\ 1 --namespace=<cluster_namespace> \\ 2 --cluster=<cluster_name_or_id> \\ 3 --verbose 4",
"roxctl image check --image= <image_name>",
"roxctl image scan --image <image_name>",
"export ROX_ENDPOINT= <host:port> 1",
"kubectl logs -n stackrox <central_pod>",
"oc logs -n stackrox <central_pod>",
"roxctl central debug log",
"roxctl central debug log --level= <log_level> 1",
"roxctl central debug dump",
"(http(s)?://)?<svc>(.<ns>(.svc.cluster.local)?)?(:<portNum>)? 1",
"roxctl netpol generate -h",
"roxctl netpol generate <folder-path> 1",
"roxctl image scan --image= <image_registry> / <image_name> \\ 1 --cluster= <cluster_detail> \\ 2 [flags] 3",
"{ \"Id\": \"sha256:3f439d7d71adb0a0c8e05257c091236ab00c6343bc44388d091450ff58664bf9\", 1 \"name\": { 2 \"registry\": \"image-registry.openshift-image-registry.svc:5000\", 3 \"remote\": \"default/image-stream\", 4 \"tag\": \"latest\", 5 \"fullName\": \"image-registry.openshift-image-registry.svc:5000/default/image-stream:latest\" 6 }, [...]",
"roxctl [command] [flags]",
"roxctl central [command] [flags]",
"roxctl central backup [flags]",
"roxctl central cert [flags]",
"roxctl central login [flags]",
"roxctl central whoami [flags]",
"roxctl central db [flags]",
"roxctl central db restore <file> [flags] 1",
"roxctl central db generate [flags]",
"roxctl central db generate k8s [flags]",
"roxctl central db restore cancel [flags]",
"roxctl central db restore status [flags]",
"roxctl central db generate k8s pvc [flags]",
"roxctl central db generate openshift [flags]",
"roxctl central db generate k8s hostpath [flags]",
"roxctl central db generate openshift pvc [flags]",
"roxctl central db generate openshift hostpath [flags]",
"roxctl central debug [flags]",
"roxctl central debug db [flags]",
"roxctl central debug log [flags]",
"roxctl central debug dump [flags]",
"roxctl central debug db stats [flags]",
"roxctl central debug authz-trace [flags]",
"roxctl central debug db stats reset [flags]",
"roxctl central debug download-diagnostics [flags]",
"roxctl central generate [flags]",
"roxctl central generate k8s [flags]",
"roxctl central generate k8s pvc [flags]",
"roxctl central generate openshift [flags]",
"roxctl central generate interactive [flags]",
"roxctl central generate k8s hostpath [flags]",
"roxctl central generate openshift pvc [flags]",
"roxctl central generate openshift hostpath [flags]",
"roxctl central init-bundles [flag]",
"roxctl central init-bundles list [flags]",
"roxctl central init-bundles revoke <init_bundle_ID or name> [<init_bundle_ID or name> ...] [flags] 1",
"roxctl central init-bundles fetch-ca [flags]",
"roxctl central init-bundles generate <init_bundle_name> [flags] 1",
"roxctl central userpki [flags]",
"roxctl central userpki list [flags]",
"roxctl central userpki create name [flags]",
"roxctl central userpki delete id|name [flags]",
"roxctl cluster [command] [flags]",
"roxctl cluster delete [flags]",
"roxctl collector [command] [flags]",
"roxctl collector support-packages [flags]",
"roxctl collector support-packages upload [flags]",
"roxctl completion [bash|zsh|fish|powershell]",
"roxctl declarative-config [command] [flags]",
"roxctl declarative-config lint [flags]",
"roxctl declarative-config create [flags]",
"roxctl declarative-config create role [flags]",
"roxctl declarative-config create notifier [flags]",
"roxctl declarative-config create access-scope [flags]",
"roxctl declarative-config create auth-provider [flags]",
"roxctl declarative-config create permission-set [flags]",
"roxctl declarative-config create notifier splunk [flags]",
"roxctl declarative-config create notifier generic [flags]",
"roxctl declarative-config create auth-provider iap [flags]",
"roxctl declarative-config create auth-provider oidc [flags]",
"roxctl declarative-config create auth-provider saml [flags]",
"roxctl declarative-config create auth-provider userpki [flags]",
"roxctl declarative-config create auth-provider openshift-auth [flags]",
"roxctl deployment [command] [flags]",
"roxctl deployment check [flags]",
"roxctl helm [command] [flags]",
"roxctl helm output <central_services or secured_cluster_services> [flags] 1",
"roxctl helm derive-local-values --output <path> \\ 1 <central_services> [flags] 2",
"roxctl image [command] [flags]",
"roxctl image scan [flags]",
"roxctl image check [flags]",
"roxctl netpol [command] [flags]",
"roxctl netpol generate <folder_path> [flags] 1",
"roxctl netpol connectivity [flags]",
"roxctl netpol connectivity map <folder_path> [flags] 1",
"roxctl netpol connectivity diff [flags]",
"roxctl scanner [command] [flags]",
"roxctl scanner generate [flags]",
"roxctl scanner upload-db [flags]",
"roxctl scanner download-db [flags]",
"roxctl sensor [command] [flags]",
"roxctl sensor generate [flags]",
"roxctl sensor generate k8s [flags]",
"roxctl sensor generate openshift [flags]",
"roxctl sensor get-bundle <cluster_details> [flags] 1",
"roxctl sensor generate-certs <cluster_details> [flags] 1",
"roxctl version [flags]"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html-single/roxctl_cli/index |
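The commands above cover installation and individual roxctl operations separately; the sketch below strings a few of them together into one possible end-to-end check on a Linux host. It is an illustration only: the Central endpoint central.example.com:443, the token file path, and the image name quay.io/myorg/myimage:latest are placeholder assumptions, and it assumes roxctl 4.5.6 downloaded from the documented mirror.

#!/bin/bash
set -euo pipefail

# Download the roxctl binary for this architecture (same steps as documented above).
arch="$(uname -m | sed "s/x86_64//")"; arch="${arch:+-$arch}"
curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.5.6/bin/Linux/roxctl${arch}"
chmod +x roxctl

# Point roxctl at Central and authenticate with an API token (placeholder values).
export ROX_ENDPOINT="central.example.com:443"
export ROX_API_TOKEN="$(cat /path/to/token_file)"

# Confirm connectivity, then run a policy check against a sample image.
./roxctl version
./roxctl central whoami
./roxctl image check --image=quay.io/myorg/myimage:latest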
Chapter 16. tags | Chapter 16. tags Optional. An operator-defined list of tags placed on each log by the collector or normalizer. The payload can be a string with whitespace-delimited string tokens or a JSON list of string tokens. Data type text file The path to the log file from which the collector reads this log entry. Normally, this is a path in the /var/log file system of a cluster node. Data type text offset The offset value. Can represent bytes to the start of the log line in the file (zero- or one-based), or log line numbers (zero- or one-based), so long as the values are strictly monotonically increasing in the context of a single log file. The values are allowed to wrap, representing a new version of the log file (rotation). Data type long | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/logging/tags |
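To make the field descriptions above concrete, the short sketch below pipes a hypothetical log record through jq to pull out the tags, file, and offset fields; the record itself is illustrative only and not taken from a real cluster.

echo '{"message":"example entry","tags":["kubernetes","varlog"],"file":"/var/log/pods/example.log","offset":1742}' | jq '{tags, file, offset}'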
Chapter 5. Configuring resources for managed components on OpenShift Container Platform | Chapter 5. Configuring resources for managed components on OpenShift Container Platform You can manually adjust the resources on Red Hat Quay on OpenShift Container Platform for the following components that have running pods: quay clair mirroring clairpostgres postgres This feature allows users to run smaller test clusters, or to request more resources upfront in order to avoid partially degraded Quay pods. Limitations and requests can be set in accordance with Kubernetes resource units . The following components should not be set lower than their minimum requirements, because doing so can cause issues with your deployment and, in some cases, result in failure of the pod's deployment. quay : Minimum of 6 GB, 2 vCPUs clair : Recommended 2 GB memory, 2 vCPUs clairpostgres : Minimum of 200 MB You can configure resource requests on the OpenShift Container Platform UI, or directly by updating the QuayRegistry YAML. Important The default values set for these components are the suggested values. Setting resource requests too high or too low might lead to inefficient resource utilization, or performance degradation, respectively. 5.1. Configuring resource requests by using the OpenShift Container Platform UI Use the following procedure to configure resources by using the OpenShift Container Platform UI. Procedure On the OpenShift Container Platform developer console, click Operators → Installed Operators → Red Hat Quay . Click QuayRegistry . Click the name of your registry, for example, example-registry . Click YAML . In the spec.components field, you can override the resources of the quay , clair , mirroring , clairpostgres , and postgres components by setting values for the .overrides.resources.limits and .overrides.resources.requests fields. For example: spec: components: - kind: clair managed: true overrides: resources: limits: cpu: "5" # Limiting to 5 CPU (equivalent to 5000m or 5000 millicpu) memory: "18Gi" # Limiting to 18 Gibibytes of memory requests: cpu: "4" # Requesting 4 CPU memory: "4Gi" # Requesting 4 Gibibytes of memory - kind: postgres managed: true overrides: resources: limits: {} 1 requests: cpu: "700m" # Requesting 700 millicpu or 0.7 CPU memory: "4Gi" # Requesting 4 Gibibytes of memory - kind: mirror managed: true overrides: resources: limits: 2 requests: cpu: "800m" # Requesting 800 millicpu or 0.8 CPU memory: "1Gi" # Requesting 1 Gibibyte of memory - kind: quay managed: true overrides: resources: limits: cpu: "4" # Limiting to 4 CPU memory: "10Gi" # Limiting to 10 Gibibytes of memory requests: cpu: "4" # Requesting 4 CPU memory: "10Gi" # Requesting 10 Gibibytes of memory - kind: clairpostgres managed: true overrides: resources: limits: cpu: "800m" # Limiting to 800 millicpu or 0.8 CPU memory: "3Gi" # Limiting to 3 Gibibytes of memory requests: {} 1 Setting the limits or requests fields to {} uses the default values for these resources. 2 Leaving the limits or requests field empty puts no limitations on these resources. 5.2. Configuring resource requests by editing the QuayRegistry YAML You can reconfigure resource requests after you have already deployed a registry by editing the QuayRegistry YAML file directly and then redeploying the registry.
Procedure Optional: If you do not have a local copy of the QuayRegistry YAML file, enter the following command to obtain it: $ oc get quayregistry <registry_name> -n <namespace> -o yaml > quayregistry.yaml Open the quayregistry.yaml created in Step 1 of this procedure and make the desired changes. For example: - kind: quay managed: true overrides: resources: limits: {} requests: cpu: "0.7" # Requesting 0.7 CPU (equivalent to 700m or 700 millicpu) memory: "512Mi" # Requesting 512 Mebibytes of memory Save the changes. Apply the Red Hat Quay registry using the updated configuration by running the following command: $ oc replace -f quayregistry.yaml Example output quayregistry.quay.redhat.com/example-registry replaced | [
"spec: components: - kind: clair managed: true overrides: resources: limits: cpu: \"5\" # Limiting to 5 CPU (equivalent to 5000m or 5000 millicpu) memory: \"18Gi\" # Limiting to 18 Gibibytes of memory requests: cpu: \"4\" # Requesting 4 CPU memory: \"4Gi\" # Requesting 4 Gibibytes of memory - kind: postgres managed: true overrides: resources: limits: {} 1 requests: cpu: \"700m\" # Requesting 700 millicpu or 0.7 CPU memory: \"4Gi\" # Requesting 4 Gibibytes of memory - kind: mirror managed: true overrides: resources: limits: 2 requests: cpu: \"800m\" # Requesting 800 millicpu or 0.8 CPU memory: \"1Gi\" # Requesting 1 Gibibyte of memory - kind: quay managed: true overrides: resources: limits: cpu: \"4\" # Limiting to 4 CPU memory: \"10Gi\" # Limiting to 10 Gibibytes of memory requests: cpu: \"4\" # Requesting 4 CPU memory: \"10Gi\" # Requesting 10 Gibi of memory - kind: clairpostgres managed: true overrides: resources: limits: cpu: \"800m\" # Limiting to 800 millicpu or 0.8 CPU memory: \"3Gi\" # Limiting to 3 Gibibytes of memory requests: {}",
"oc get quayregistry <registry_name> -n <namespace> -o yaml > quayregistry.yaml",
"- kind: quay managed: true overrides: resources: limits: {} requests: cpu: \"0.7\" # Requesting 0.7 CPU (equivalent to 500m or 500 millicpu) memory: \"512Mi\" # Requesting 512 Mebibytes of memory",
"oc replace -f quayregistry.yaml",
"quayregistry.quay.redhat.com/example-registry replaced"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/configuring-resources-managed-components |
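Building on the procedure above, the sketch below shows how the edit-and-replace flow might look in practice. The registry name example-registry and the namespace quay-enterprise are assumptions for illustration; the oc get and oc replace steps themselves are the documented ones.

$ oc get quayregistry example-registry -n quay-enterprise -o yaml > quayregistry.yaml
$ vi quayregistry.yaml                 # adjust spec.components[].overrides.resources as shown above
$ oc replace -f quayregistry.yaml
$ oc get pods -n quay-enterprise -w    # watch the affected pods restart with the new requests and limits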
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/getting_started_with_event-driven_ansible_guide/providing-feedback |
9.3.5. Tuning vCPU Pinning with virsh | 9.3.5. Tuning vCPU Pinning with virsh Important These are example commands only. You will need to substitute values according to your environment. The following example virsh command pins vcpu thread 1 of the guest rhel6u4 to physical CPU 2: You can also obtain the current vcpu pinning configuration with the virsh vcpupin command. For example: | [
"% virsh vcpupin rhel6u4 1 2",
"% virsh vcpupin rhel6u4"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-numa-numa_and_libvirt-vcpu_pinning_with_virsh |
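As a brief illustration of the commands above, the sketch below pins vcpu 1 of the example guest rhel6u4 to physical CPU 2 and then reads the pinning back; the domain name and CPU numbers come from the example and should be replaced for your environment, and the vcpuinfo check is an optional extra step.

% virsh vcpupin rhel6u4 1 2    # pin vcpu thread 1 of guest rhel6u4 to physical CPU 2
% virsh vcpupin rhel6u4        # display the current pinning for all vcpus of the guest
% virsh vcpuinfo rhel6u4       # optionally, confirm vcpu placement and CPU affinity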
3.6. Displaying the Full Cluster Configuration | 3.6. Displaying the Full Cluster Configuration Use the following command to display the full current cluster configuration. | [
"pcs config"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-pcsfullconfig-haar |
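As a small usage example for the command above, the sketch below saves the full cluster configuration to a timestamped file before making changes so the previous state can be compared later; the output path is an arbitrary choice.

pcs config > /root/cluster-config-$(date +%Y%m%d-%H%M%S).txt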
Chapter 8. Security | Chapter 8. Security As a storage administrator, securing the storage cluster environment is important. Red Hat Ceph Storage provides encryption and key management to secure the Ceph Object Gateway access point. Prerequisites A healthy running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. 8.1. Server-Side Encryption (SSE) The Ceph Object Gateway supports server-side encryption of uploaded objects for the S3 application programming interface (API). Server-side encryption means that the S3 client sends data over HTTP in its unencrypted form, and the Ceph Object Gateway stores that data in the Red Hat Ceph Storage cluster in encrypted form. Note Red Hat does NOT support S3 object encryption of Static Large Object (SLO) or Dynamic Large Object (DLO). Currently, none of the Server-Side Encryption (SSE) modes have implemented support for CopyObject . It is currently being developed [BZ#2149450 ]. Important Server-side encryption is not compatible with multi-site replication due to a known issue. This issue will be resolved in a future release. See Known issues- Mult-site Object Gateway for more details. Important To use encryption, client requests MUST send requests over an SSL connection. Red Hat does not support S3 encryption from a client unless the Ceph Object Gateway uses SSL. However, for testing purposes, administrators can disable SSL during testing by setting the rgw_crypt_require_ssl configuration setting to false at runtime, using the ceph config set client.rgw command, and then restarting the Ceph Object Gateway instance. In a production environment, it might not be possible to send encrypted requests over SSL. In such a case, send requests using HTTP with server-side encryption. For information about how to configure HTTP with server-side encryption, see the Additional Resources section below. There are three options for the management of encryption keys: Customer-provided Keys When using customer-provided keys, the S3 client passes an encryption key along with each request to read or write encrypted data. It is the customer's responsibility to manage those keys. Customers must remember which key the Ceph Object Gateway used to encrypt each object. Ceph Object Gateway implements the customer-provided key behavior in the S3 API according to the Amazon SSE-C specification. Since the customer handles the key management and the S3 client passes keys to the Ceph Object Gateway, the Ceph Object Gateway requires no special configuration to support this encryption mode. Key Management Service When using a key management service, the secure key management service stores the keys and the Ceph Object Gateway retrieves them on demand to serve requests to encrypt or decrypt data. Ceph Object Gateway implements the key management service behavior in the S3 API according to the Amazon SSE-KMS specification. Important Currently, the only tested key management implementations are HashiCorp Vault, and OpenStack Barbican. However, OpenStack Barbican is a Technology Preview and is not supported for use in production systems. SSE-S3 When using SSE-S3, the keys are stored in vault, but they are automatically created and deleted by Ceph and retrieved as required to serve requests to encrypt or decrypt data. Ceph Object Gateway implements the SSE-S3 behavior in the S3 API according to the Amazon SSE-S3 specification. Additional Resources Amazon SSE-C Amazon SSE-KMS Configuring server-side encryption The HashiCorp Vault 8.1.1. 
Setting the default encryption for an existing S3 bucket As a storage administrator, you can set the default encryption for an existing Amazon S3 bucket so that all objects are encrypted when they are stored in a bucket. You can use Bucket Encryption APIs to support server-side encryption with Amazon S3-managed keys (SSE-S3) or Amazon KMS customer master keys (SSE-KMS). Note SSE-KMS is supported only from Red Hat Ceph Storage 5.x, not for Red Hat Ceph Storage 4.x. You can manage default encryption for an existing Amazon S3 bucket using the PutBucketEncryption API. All files uploaded to this bucket will have this encryption by defining the default encryption at the bucket level. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway. An S3 bucket created. An S3 user created with user access. Access to a Ceph Object Gateway client with the AWS CLI package installed. Procedure Create a JSON file for the encryption configuration: Example Add the encryption configuration rules to the file: Example Set the default encryption for the bucket: Syntax Example Verification Retrieve the bucket encryption configuration for the bucket: Syntax Example Note If the bucket does not have a default encryption configuration, the get-bucket-encryption command returns ServerSideEncryptionConfigurationNotFoundError . 8.1.2. Deleting the default bucket encryption You can delete the default bucket encryption for a specified bucket using the s3api delete-bucket-encryption command. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway. An S3 bucket created. An S3 user created with user access. Access to a Ceph Object Gateway client with the AWS CLI package installed. Procedure Delete a bucket encryption configuration: Syntax Example Verification Retrieve the bucket encryption configuration for the bucket: Syntax Example 8.2. Server-side encryption requests In a production environment, clients often contact the Ceph Object Gateway through a proxy. This proxy is referred to as a load balancer because it connects to multiple Ceph Object Gateways. When the client sends requests to the Ceph Object Gateway, the load balancer routes those requests to the multiple Ceph Object Gateways, thus distributing the workload. In this type of configuration, it is possible that SSL terminations occur both at a load balancer and between the load balancer and the multiple Ceph Object Gateways. Communication occurs using HTTP only. To set up the Ceph Object Gateways to accept the server-side encryption requests, see Configuring server-side encryption . 8.3. Configuring server-side encryption You can set up server-side encryption to send requests to the Ceph Object Gateway using HTTP, in cases where it might not be possible to send encrypted requests over SSL. This procedure uses HAProxy as proxy and load balancer. Prerequisites Root-level access to all nodes in the storage cluster. A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. Installation of the HAProxy software. Procedure Edit the haproxy.cfg file: Example Comment out the lines that allow access to the http front end and add instructions to direct HAProxy to use the https front end instead: Example Set the rgw_trust_forwarded_https option to true : Example Enable and start HAProxy: Additional Resources See the High availability service section in the Red Hat Ceph Storage Object Gateway Guide for additional details. 
See the Red Hat Ceph Storage installation chapter in the Red Hat Ceph Storage Installation Guide for additional details. 8.4. The HashiCorp Vault As a storage administrator, you can securely store keys, passwords, and certificates in the HashiCorp Vault for use with the Ceph Object Gateway. The HashiCorp Vault provides a secure key management service for server-side encryption used by the Ceph Object Gateway. The basic workflow: The client requests the creation of a secret key from the Vault based on an object's key ID. The client uploads an object with the object's key ID to the Ceph Object Gateway. The Ceph Object Gateway then requests the newly created secret key from the Vault. The Vault replies to the request by returning the secret key to the Ceph Object Gateway. Now the Ceph Object Gateway can encrypt the object using the new secret key. After encryption is done the object is then stored on the Ceph OSD. Important Red Hat works with our technology partners to provide this documentation as a service to our customers. However, Red Hat does not provide support for this product. If you need technical assistance for this product, then contact Hashicorp for support. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. Installation of the HashiCorp Vault software. 8.4.1. Secret engines for Vault The HashiCorp Vault provides several secret engines to generate, store, or encrypt data. The application programming interface (API) sends data calls to the secret engine asking for action on that data, and the secret engine returns a result of that action request. The Ceph Object Gateway supports two of the HashiCorp Vault secret engines: Key/Value version 2 Transit Important The secret engines can be configured at any time but the engines are not supported simultaneously. Key/Value version 2 The Key/Value secret engine stores random secrets within the Vault, on disk. With version 2 of the kv engine, a key can have a configurable number of versions. The default number of versions is 10. Deleting a version does not delete the underlying data, but marks the data as deleted, allowing deleted versions to be undeleted. You can use the API endpoint or the destroy command to permanently remove a version's data. To delete all versions and metadata for a key, you can use the metadata command or the API endpoint. The key names must be strings, and the engine will convert non-string values into strings when using the command line interface. To preserve non-string values, provide a JSON file or use the HTTP application programming interface (API). Note For access control list (ACL) policies, the Key/Value secret engine recognizes the distinctions between the create and update capabilities. Transit The Transit secret engine performs cryptographic functions on in-transit data. The Transit secret engine can generate hashes, can be a source of random bytes, and can also sign and verify data. The Vault does not store data when using the Transit secret engine. The Transit secret engine supports key derivation, by allowing the same key to be used for multiple purposes. Also, the transit secret engine supports key versioning. 
The Transit secret engine supports these key types: aes128-gcm96 AES-GCM with a 128-bit AES key and a 96-bit nonce; supports encryption, decryption, key derivation, and convergent encryption aes256-gcm96 AES-GCM with a 256-bit AES key and a 96-bit nonce; supports encryption, decryption, key derivation, and convergent encryption (default) chacha20-poly1305 ChaCha20-Poly1305 with a 256-bit key; supports encryption, decryption, key derivation, and convergent encryption ed25519 Ed25519; supports signing, signature verification, and key derivation ecdsa-p256 ECDSA using curve P-256; supports signing and signature verification ecdsa-p384 ECDSA using curve P-384; supports signing and signature verification ecdsa-p521 ECDSA using curve P-521; supports signing and signature verification rsa-2048 2048-bit RSA key; supports encryption, decryption, signing, and signature verification rsa-3072 3072-bit RSA key; supports encryption, decryption, signing, and signature verification rsa-4096 4096-bit RSA key; supports encryption, decryption, signing, and signature verification Additional Resources See the KV Secrets Engine documentation on Vault's project site for more information. See the Transit Secrets Engine documentation on Vault's project site for more information. 8.4.2. Authentication for Vault The HashiCorp Vault supports several types of authentication mechanisms. The Ceph Object Gateway currently supports the Vault agent method. The Ceph Object Gateway uses the rgw_crypt_vault_auth , and rgw_crypt_vault_addr options to configure the use of the HashiCorp Vault. Important Red Hat supports the usage of Vault agent as the authentication method for containers and the usage of token authentication is not supported on containers. Vault Agent The Vault agent is a daemon that runs on a client node and provides client-side caching, along with token renewal. The Vault agent typically runs on the Ceph Object Gateway node. Run the Vault agent and refresh the token file. When the Vault agent is used in this mode, you can use file system permissions to restrict who has access to the usage of tokens. Also, the Vault agent can act as a proxy server, that is, Vault will add a token when required and add it to the requests passed to it before forwarding them to the actual server. The Vault agent can still handle token renewal just as it would when storing a token in the Filesystem. It is required to secure the network that Ceph Object Gateways uses to connect with the Vault agent, for example, the Vault agent listens to only the localhost. Additional Resources See the Vault Agent documentation on Vault's project site for more information. 8.4.3. Namespaces for Vault Using HashiCorp Vault as an enterprise service provides centralized management for isolated namespaces that teams within an organization can use. These isolated namespace environments are known as tenants , and teams within an organization can utilize these tenants to isolate their policies, secrets, and identities from other teams. The namespace features of Vault help support secure multi-tenancy from within a single infrastructure. Additional Resources See the Vault Enterprise Namespaces documentation on Vault's project site for more information. 8.4.4. Transit engine compatibility support There is compatibility support for the versions of Ceph which used the Transit engine as a simple key store. You can use the compat option in the Transit engine to configure the compatibility support. 
You can disable support with the following command: Example Note This is the default for future versions and you can use the current version for new installations. The normal default with the current version is: Example This enables the new engine for newly created objects and still allows the old engine to be used for the old objects. To access old and new objects, the Vault token must have both the old and new transit policies. You can force use only the old engine with the following command: Example This mode is selected by default if the Vault ends in export/encryption-key . Important After configuring the client.rgw options, you need to restart the Ceph Object Gateway daemons for the new values to take effect. Additional Resources See the Vault Agent documentation on Vault's project site for more information. 8.4.5. Creating token policies for Vault A token policy specifies the powers that all Vault tokens have. One token can have multiple policies. You should use the required policy for the configuration. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the HashiCorp Vault software. Root-level access to the HashiCorp Vault node. Procedure Create a token policy: For the Key/Value secret engine: Example For the Transit engine: Example Note If you have used the Transit secret engine on an older version of Ceph, the token policy is: Example If you are using both SSE-KMS and SSE-S3, you should point each to separate containers. You could either use separate Vault instances or separately mount transit instances or different branches under a common transit point. If you are not using separate Vault instances, you can point SSE-KMS or SSE-S3 to serparate containers using rgw_crypt_vault_prefix and rgw_crypt_sse_s3_vault_prefix . When granting Vault permissions to SSE-KMS bucket owners, you should not give them permission to SSE-S3 keys; only Ceph should have access to the SSE-S3 keys. 8.4.6. Configuring the Ceph Object Gateway to use SSE-KMS with Vault To configure the Ceph Object Gateway to use the HashiCorp Vault with SSE-KMS for key management, it must be set as the encryption key store. Currently, the Ceph Object Gateway supports two different secret engines, and two different authentication methods. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. Root-level access to a Ceph Object Gateway node. Procedure Use the ceph config set client.rgw OPTION VALUE command to enable Vault as the encryption key store: Syntax Add the following options and values: Syntax Customize the policy as per the use case. Get the role-id: Syntax Get the secret-id: Syntax Create the configuration for the Vault agent: Example Use systemctl to run the persistent daemon: Example A token file is populated with a valid token when the Vault agent runs. Select a Vault secret engine, either Key/Value or Transit. If using Key/Value , then add the following line: Example If using Transit , then add the following line: Example Use the ceph config set client.rgw OPTION VALUE command to set the Vault namespace to retrieve the encryption keys: Example Restrict where the Ceph Object Gateway retrieves the encryption keys from the Vault by setting a path prefix: Example For exportable Transit keys, set the prefix path as follows: Example Assuming the domain name of the Vault server is vault-server , the Ceph Object Gateway will fetch encrypted transit keys from the following URL: Example Restart the Ceph Object Gateway daemons. 
To restart the Ceph Object Gateway on an individual node in the storage cluster: Syntax Example To restart the Ceph Object Gateways on all nodes in the storage cluster: Syntax Example Additional Resources See the Secret engines for Vault section of the Red Hat Ceph Storage Object Gateway Guide for more details. See the Authentication for Vault section of the Red Hat Ceph Storage Object Gateway Guide for more details. 8.4.7. Configuring the Ceph Object Gateway to use SSE-S3 with Vault To configure the Ceph Object Gateway to use the HashiCorp Vault with SSE-S3 for key management, it must be set as the encryption key store. Currently, the Ceph Object Gateway only uses agent authentication method. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. Root-level access to a Ceph Object Gateway node. Procedure Log into the Cephadm shell Example Enable Vault as the secrets engine to retrieve SSE-S3 encryption keys: Syntax To set the authentication method to use with SSE-S3 and Vault, configure the following settings: Syntax Example Customize the policy as per your use case to set up a Vault agent. Get the role-id: Syntax Get the secret-id: Syntax Create the configuration for the Vault agent: Example Use systemctl to run the persistent daemon: Example A token file is populated with a valid token when the Vault agent runs. Set the Vault secret engine to use to retrieve encryption keys, either Key/Value or Transit. If using Key/Value , set the following: Example If using Transit , set the following: Example Optional: Configure the Ceph Object Gateway to access Vault within a particular namespace to retrieve the encryption keys: Example Note Vault namespaces allow teams to operate within isolated environments known as tenants. The Vault namespaces feature is only available in the Vault Enterprise version. Optional: Restrict access to a particular subset of the Vault secret space by setting a URL path prefix, where the Ceph Object Gateway retrieves the encryption keys from: If using Key/Value , set the following: Example If using Transit , set the following: Example Assuming the domain name of the Vault server is vaultserver , the Ceph Object Gateway will fetch encrypted transit keys from the following URL: Example Optional: To use custom SSL certification to authenticate with Vault, configure the following settings: Syntax Example Restart the Ceph Object Gateway daemons. To restart the Ceph Object Gateway on an individual node in the storage cluster: Syntax Example To restart the Ceph Object Gateways on all nodes in the storage cluster: Syntax Example Additional Resources See the Secret engines for Vault section of the Red Hat Ceph Storage Object Gateway Guide for more details. See the Authentication for Vault section of the Red Hat Ceph Storage Object Gateway Guide for more details. 8.4.8. Creating a key using the kv engine Configure the HashiCorp Vault Key/Value secret engine ( kv ) so you can create a key for use with the Ceph Object Gateway. Secrets are stored as key-value pairs in the kv secret engine. Important Keys for server-side encryption must be 256-bits long and encoded using base64 . Prerequisites A running Red Hat Ceph Storage cluster. Installation of the HashiCorp Vault software. Root-level access to the HashiCorp Vault node. Procedure Enable the Key/Value version 2 secret engine: Example Create a new key: Syntax Example 8.4.9. 
Creating a key using the transit engine Configure the HashiCorp Vault Transit secret engine ( transit ) so you can create a key for use with the Ceph Object Gateway. Creating keys with the Transit secret engine must be exportable in order to be used for server-side encryption with the Ceph Object Gateway. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the HashiCorp Vault software. Root-level access to the HashiCorp Vault node. Procedure Enable the Transit secret engine: Create a new exportable key: Syntax Example Note By default the above command creates a aes256-gcm96 type key. Enable key rotation: Syntax Example Specify the duration for key rotation: Syntax Example In this example, 30d specifies that the key is rotated after 30 days. To specify the key rotation duration in hours, use auto_rotate_period=1h . 1h specifies that the key rotates every 1 hour. Verify key rotation is successful by ensuring that the latest_version value has incremented: Syntax Example Verify the creation of the key: Syntax Example Note Providing the full key path, including the key version, is required. 8.4.10. Uploading an object using AWS and the Vault When uploading an object to the Ceph Object Gateway, the Ceph Object Gateway will fetch the key from the Vault, and then encrypt and store the object in a bucket. When a request is made to download the object, the Ceph Object Gateway will automatically retrieve the corresponding key from the Vault and decrypt the object. To upload an object, the Ceph object Gateway fetches the key from the Vault and then encrypts the object and stores it in the bucket. The Ceph Object Gateway retrieves the corresponding key from the Vault and decrypts the object when there is a request to download the object. Note The URL is constructed using the base address, set by the rgw_crypt_vault_addr option, and the path prefix, set by the rgw_crypt_vault_prefix option. Prerequisites A running Red Hat Ceph Storage cluster. Installation of the Ceph Object Gateway software. Installation of the HashiCorp Vault software. Access to a Ceph Object Gateway client node. Access to Amazon Web Services (AWS). Procedure Upload an object using the AWS command line client and provide the Secure Side Encryption (SSE) key ID in the request: For the Key/Value secret engine: Example (with SSE-KMS) Example (with SSE-S3) Note In the example, the Ceph Object Gateway would fetch the secret from http://vault-server:8200/v1/secret/data/myproject/mybucketkey For the Transit engine: Example (with SSE-KMS) Example (with SSE-S3) Note In the example, the Ceph Object Gateway would fetch the secret from http://vaultserver:8200/v1/transit/mybucketkey Additional Resources See the Install Vault documentation on Vault's project site for more information. 8.5. The Ceph Object Gateway and multi-factor authentication As a storage administrator, you can manage time-based one time password (TOTP) tokens for Ceph Object Gateway users. 8.5.1. Multi-factor authentication When a bucket is configured for object versioning, a developer can optionally configure the bucket to require multi-factor authentication (MFA) for delete requests. Using MFA, a time-based one time password (TOTP) token is passed as a key to the x-amz-mfa header. The tokens are generated with virtual MFA devices like Google Authenticator, or a hardware MFA device like those provided by Gemalto. Use radosgw-admin to assign time-based one time password tokens to a user. You must set a secret seed and a serial ID. 
You can also use radosgw-admin to list, remove, and resynchronize tokens. Important In a multi-site environment it is advisable to use different tokens for different zones, because, while MFA IDs are set on the user's metadata, the actual MFA one time password configuration resides on the local zone's OSDs. Table 8.1. Terminology Term Description TOTP Time-based One Time Password. Token serial A string that represents the ID of a TOTP token. Token seed The secret seed that is used to calculate the TOTP. It can be hexadecimal or base32. TOTP seconds The time resolution used for TOTP generation. TOTP window The number of TOTP tokens that are checked before and after the current token when validating tokens. TOTP pin The valid value of a TOTP token at a certain time. 8.5.2. Creating a seed for multi-factor authentication To set up multi-factor authentication (MFA), you must create a secret seed for use by the one-time password generator and the back-end MFA system. Prerequisites A Linux system. Access to the command line shell. Procedure Generate a 30 character seed from the urandom Linux device file and store it in the shell variable SEED : Example Print the seed by running echo on the SEED variable: Example Configure the one-time password generator and the back-end MFA system to use the same seed. Additional Resources For more information, see the solution Unable to create RGW MFA token for bucket . For more information, see The Ceph Object Gateway and multi-factor authentication . 8.5.3. Creating a new multi-factor authentication TOTP token Create a new multi-factor authentication (MFA) time-based one time password (TOTP) token. Prerequisites A running Red Hat Ceph Storage cluster. Ceph Object Gateway is installed. You have root access on a Ceph Monitor node. A secret seed for the one-time password generator and Ceph Object Gateway MFA was generated. Procedure Create a new MFA TOTP token: Syntax Set USERID to the user name to set up MFA on, set SERIAL to a string that represents the ID for the TOTP token, and set SEED to a hexadecimal or base32 value that is used to calculate the TOTP. The following settings are optional: Set the SEED_TYPE to hex or base32 , set TOTP_SECONDS to the timeout in seconds, or set TOTP_WINDOW to the number of TOTP tokens to check before and after the current token when validating tokens. Example Additional Resources For more information, see Creating a seed for multi-factor authentication . For more information, See Resynchronizing a multi-factor authentication token . 8.5.4. Test a multi-factor authentication TOTP token Test a multi-factor authentication (MFA) time-based one time password (TOTP) token. Prerequisites A running Red Hat Ceph Storage cluster. Ceph Object Gateway is installed. You have root access on a Ceph Monitor node. An MFA TOTP token was created using radosgw-admin mfa create . Procedure Test the TOTP token PIN to verify that TOTP functions correctly: Syntax Set USERID to the user name MFA is set up on, set SERIAL to the string that represents the ID for the TOTP token, and set PIN to the latest PIN from the one-time password generator. Example If this is the first time you have tested the PIN, it may fail. If it fails, resynchronize the token. See Resynchronizing a multi-factor authentication token in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide . Additional Resources For more information, see Creating a seed for multi-factor authentication . 
For more information, see Resynchronizing a multi-factor authentication token . 8.5.5. Resynchronizing a multi-factor authentication TOTP token Resynchronize a multi-factor authentication (MFA) time-based one time password token. Prerequisites A running Red Hat Ceph Storage cluster. Ceph Object Gateway is installed. You have root access on a Ceph Monitor node. An MFA TOTP token was created using radosgw-admin mfa create . Procedure Resynchronize a multi-factor authentication TOTP token in case of time skew or failed checks. This requires passing in two consecutive PINs: the previous PIN and the current PIN. Syntax Set USERID to the user name MFA is set up on, set SERIAL to the string that represents the ID for the TOTP token, set PREVIOUS_PIN to the user's previous PIN, and set CURRENT_PIN to the user's current PIN. Example Verify the token was successfully resynchronized by testing a new PIN: Syntax Set USERID to the user name MFA is set up on, set SERIAL to the string that represents the ID for the TOTP token, and set PIN to the user's PIN. Example Additional Resources For more information, see Creating a new multi-factor authentication TOTP token . 8.5.6. Listing multi-factor authentication TOTP tokens List all multi-factor authentication (MFA) time-based one time password (TOTP) tokens that a particular user has. Prerequisites A running Red Hat Ceph Storage cluster. Ceph Object Gateway is installed. You have root access on a Ceph Monitor node. An MFA TOTP token was created using radosgw-admin mfa create . Procedure List MFA TOTP tokens: Syntax Set USERID to the user name MFA is set up on. Example Additional Resources For more information, see Creating a new multi-factor authentication TOTP token . 8.5.7. Display a multi-factor authentication TOTP token Display a specific multi-factor authentication (MFA) time-based one time password (TOTP) token by specifying its serial. Prerequisites A running Red Hat Ceph Storage cluster. Ceph Object Gateway is installed. You have root access on a Ceph Monitor node. An MFA TOTP token was created using radosgw-admin mfa create . Procedure Show the MFA TOTP token: Syntax Set USERID to the user name MFA is set up on and set SERIAL to the string that represents the ID for the TOTP token. Additional Resources For more information, see Creating a new multi-factor authentication TOTP token . 8.5.8. Deleting a multi-factor authentication TOTP token Delete a multi-factor authentication (MFA) time-based one time password (TOTP) token. Prerequisites A running Red Hat Ceph Storage cluster. Ceph Object Gateway is installed. You have root access on a Ceph Monitor node. An MFA TOTP token was created using radosgw-admin mfa create . Procedure Delete an MFA TOTP token: Syntax Set USERID to the user name MFA is set up on and set SERIAL to the string that represents the ID for the TOTP token. Example Verify the MFA TOTP token was deleted: Syntax Set USERID to the user name MFA is set up on and set SERIAL to the string that represents the ID for the TOTP token. Example Additional Resources For more information, see The Ceph Object Gateway and multi-factor authentication . 8.5.9. Deleting an MFA-enabled versioned object Delete an MFA-enabled versioned object. Prerequisites A running Red Hat Ceph Storage cluster. Ceph Object Gateway is installed. You have root access on the Ceph Monitor node. An MFA TOTP token was created using radosgw-admin mfa create . The root user must be authenticated with MFA to delete objects from a bucket that has versioning enabled.
The MFA-enabled object that you want to delete is in a bucket that has versioning enabled. Procedure Delete an MFA-enabled versioned object: Syntax Set <bucket_name> to the bucket that contains the object, set <object_name> and <object_version> to the object and object version to delete, and set <totp_token> to the current PIN from the one-time password generator. Example Verify the MFA-enabled versioned object was deleted: Syntax | [
"[user@client ~]USD vi bucket-encryption.json",
"{ \"Rules\": [ { \"ApplyServerSideEncryptionByDefault\": { \"SSEAlgorithm\": \"AES256\" } } ] }",
"aws --endpoint-url=pass:q[_RADOSGW_ENDPOINT_URL_]:pass:q[_PORT_] s3api put-bucket-encryption --bucket pass:q[_BUCKET_NAME_] --server-side-encryption-configuration pass:q[_file://PATH_TO_BUCKET_ENCRYPTION_CONFIGURATION_FILE/BUCKET_ENCRYPTION_CONFIGURATION_FILE.json_]",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api put-bucket-encryption --bucket testbucket --server-side-encryption-configuration file://bucket-encryption.json",
"aws --endpoint-url=pass:q[_RADOSGW_ENDPOINT_URL_]:pass:q[_PORT_] s3api get-bucket-encryption --bucket BUCKET_NAME",
"[user@client ~]USD aws --profile ceph --endpoint=http://host01:80 s3api get-bucket-encryption --bucket testbucket { \"ServerSideEncryptionConfiguration\": { \"Rules\": [ { \"ApplyServerSideEncryptionByDefault\": { \"SSEAlgorithm\": \"AES256\" } } ] } }",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api delete-bucket-encryption --bucket BUCKET_NAME",
"[user@client ~]USD aws --endpoint-url=http://host01:80 s3api delete-bucket-encryption --bucket testbucket",
"aws --endpoint-url= RADOSGW_ENDPOINT_URL : PORT s3api get-bucket-encryption --bucket BUCKET_NAME",
"[user@client ~]USD aws --endpoint=http://host01:80 s3api get-bucket-encryption --bucket testbucket An error occurred (ServerSideEncryptionConfigurationNotFoundError) when calling the GetBucketEncryption operation: The server side encryption configuration was not found",
"frontend http_web *:80 mode http default_backend rgw frontend rgw\\u00ad-https bind *:443 ssl crt /etc/ssl/private/example.com.pem default_backend rgw backend rgw balance roundrobin mode http server rgw1 10.0.0.71:8080 check server rgw2 10.0.0.80:8080 check",
"frontend http_web *:80 mode http default_backend rgw frontend rgw\\u00ad-https bind *:443 ssl crt /etc/ssl/private/example.com.pem http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto https here we set the incoming HTTPS port on the load balancer (eg : 443) http-request set-header X-Forwarded-Port 443 default_backend rgw backend rgw balance roundrobin mode http server rgw1 10.0.0.71:8080 check server rgw2 10.0.0.80:8080 check",
"ceph config set client.rgw rgw_trust_forwarded_https true",
"systemctl enable haproxy systemctl start haproxy",
"ceph config set client.rgw rgw_crypt_vault_secret_engine transit compat=0",
"ceph config set client.rgw rgw_crypt_vault_secret_engine transit compat=1",
"ceph config set client.rgw rgw_crypt_vault_secret_engine transit compat=2",
"vault policy write rgw-kv-policy -<<EOF path \"secret/data/*\" { capabilities = [\"read\"] } EOF",
"vault policy write rgw-transit-policy -<<EOF path \"transit/keys/*\" { capabilities = [ \"create\", \"update\" ] denied_parameters = {\"exportable\" = [], \"allow_plaintext_backup\" = [] } } path \"transit/keys/*\" { capabilities = [\"read\", \"delete\"] } path \"transit/keys/\" { capabilities = [\"list\"] } path \"transit/keys/+/rotate\" { capabilities = [ \"update\" ] } path \"transit/*\" { capabilities = [ \"update\" ] } EOF",
"vault policy write old-rgw-transit-policy -<<EOF path \"transit/export/encryption-key/*\" { capabilities = [\"read\"] } EOF",
"ceph config set client.rgw rgw_crypt_s3_kms_backend vault",
"ceph config set client.rgw rgw_crypt_vault_auth agent ceph config set client.rgw rgw_crypt_vault_addr http:// VAULT_SERVER :8100",
"vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .data.role_id > PATH_TO_FILE",
"vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .data.secret_id > PATH_TO_FILE",
"pid_file = \"/run/kv-vault-agent-pid\" auto_auth { method \"AppRole\" { mount_path = \"auth/approle\" config = { role_id_file_path =\"/root/vault_configs/kv-agent-role-id\" secret_id_file_path =\"/root/vault_configs/kv-agent-secret-id\" remove_secret_id_file_after_reading =\"false\" } } } cache { use_auto_auth_token = true } listener \"tcp\" { address = \"127.0.0.1:8100\" tls_disable = true } vault { address = \"http://10.8.128.9:8200\" }",
"/usr/local/bin/vault agent -config=/usr/local/etc/vault/rgw-agent.hcl",
"ceph config set client.rgw rgw_crypt_vault_secret_engine kv",
"ceph config set client.rgw rgw_crypt_vault_secret_engine transit",
"ceph config set client.rgw rgw_crypt_vault_namespace testnamespace1",
"ceph config set client.rgw rgw_crypt_vault_prefix /v1/secret/data",
"ceph config set client.rgw rgw_crypt_vault_prefix /v1/transit/export/encryption-key",
"http://vault-server:8200/v1/transit/export/encryption-key",
"systemctl restart ceph- CLUSTER_ID@SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"cephadm shell",
"ceph config set client.rgw rgw_crypt_sse_s3_backend vault",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_auth agent ceph config set client.rgw rgw_crypt_sse_s3_vault_addr http:// VAULT_AGENT : VAULT_AGENT_PORT",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_auth agent ceph config set client.rgw rgw_crypt_sse_s3_vault_addr http://vaultagent:8100",
"vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .rgw-ap-role-id > PATH_TO_FILE",
"vault read auth/approle/role/rgw-ap/role-id -format=json | \\ jq -r .rgw-ap-secret-id > PATH_TO_FILE",
"pid_file = \"/run/rgw-vault-agent-pid\" auto_auth { method \"AppRole\" { mount_path = \"auth/approle\" config = { role_id_file_path =\"/usr/local/etc/vault/.rgw-ap-role-id\" secret_id_file_path =\"/usr/local/etc/vault/.rgw-ap-secret-id\" remove_secret_id_file_after_reading =\"false\" } } } cache { use_auto_auth_token = true } listener \"tcp\" { address = \"127.0.0.1:8100\" tls_disable = true } vault { address = \"https://vaultserver:8200\" }",
"/usr/local/bin/vault agent -config=/usr/local/etc/vault/rgw-agent.hcl",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_secret_engine kv",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_secret_engine transit",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_namespace company/testnamespace1",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_prefix /v1/secret/data",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_prefix /v1/transit",
"http://vaultserver:8200/v1/transit",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_verify_ssl true ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_cacert PATH_TO_CA_CERTIFICATE ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientcert PATH_TO_CLIENT_CERTIFICATE ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientkey PATH_TO_PRIVATE_KEY",
"ceph config set client.rgw rgw_crypt_sse_s3_vault_verify_ssl true ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_cacert /etc/ceph/vault.ca ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientcert /etc/ceph/vault.crt ceph config set client.rgw rgw_crypt_sse_s3_vault_ssl_clientkey /etc/ceph/vault.key",
"systemctl restart ceph- CLUSTER_ID@SERVICE_TYPE . ID .service",
"systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service",
"ceph orch restart SERVICE_TYPE",
"ceph orch restart rgw",
"vault secrets enable -path secret kv-v2",
"vault kv put secret/ PROJECT_NAME / BUCKET_NAME key=USD(openssl rand -base64 32)",
"vault kv put secret/myproject/mybucketkey key=USD(openssl rand -base64 32) ====== Metadata ====== Key Value --- ----- created_time 2020-02-21T17:01:09.095824999Z deletion_time n/a destroyed false version 1",
"vault secrets enable transit",
"vault write -f transit/keys/ BUCKET_NAME exportable=true",
"vault write -f transit/keys/mybucketkey exportable=true",
"vault write -f transit/keys/BUCKET_NAME/rotate exportable=true",
"vault write -f transit/keys/mybucketkey/rotate exportable=true",
"vault write -f transit/keys/BUCKET_NAME/config auto_rotate_period=DURATION",
"vault write -f transit/keys/mybucketkey/config auto_rotate_period=30d",
"vault read transit/export/encryption-key/BUCKET_NAME",
"vault read transit/export/encryption-key/mybucketkey",
"vault read transit/export/encryption-key/ BUCKET_NAME / VERSION_NUMBER",
"vault read transit/export/encryption-key/mybucketkey/1 Key Value --- ----- keys map[1:-gbTI9lNpqv/V/2lDcmH2Nq1xKn6FPDWarCmFM2aNsQ=] name mybucketkey type aes256-gcm96",
"[user@client ~]USD aws --endpoint=http://radosgw:8000 s3 cp plaintext.txt s3://mybucket/encrypted.txt --sse=aws:kms --sse-kms-key-id myproject/mybucketkey",
"[user@client ~]USD aws s3api --endpoint http://rgw_host:8080 put-object --bucket my-bucket --key obj1 --body test_file_to_upload --server-side-encryption AES256",
"[user@client ~]USD aws --endpoint=http://radosgw:8000 s3 cp plaintext.txt s3://mybucket/encrypted.txt --sse=aws:kms --sse-kms-key-id mybucketkey",
"[user@client ~]USD aws s3api --endpoint http://rgw_host:8080 put-object --bucket my-bucket --key obj1 --body test_file_to_upload --server-side-encryption AES256",
"[user@host01 ~]USD SEED=USD(head -10 /dev/urandom | sha512sum | cut -b 1-30)",
"[user@host01 ~]USD echo USDSEED 492dedb20cf51d1405ef6a1316017e",
"radosgw-admin mfa create --uid= USERID --totp-serial= SERIAL --totp-seed= SEED --totp-seed-type= SEED_TYPE --totp-seconds= TOTP_SECONDS --totp-window= TOTP_WINDOW",
"radosgw-admin mfa create --uid=johndoe --totp-serial=MFAtest --totp-seed=492dedb20cf51d1405ef6a1316017e",
"radosgw-admin mfa check --uid= USERID --totp-serial= SERIAL --totp-pin= PIN",
"radosgw-admin mfa check --uid=johndoe --totp-serial=MFAtest --totp-pin=870305 ok",
"radosgw-admin mfa resync --uid= USERID --totp-serial= SERIAL --totp-pin= PREVIOUS_PIN --totp=pin= CURRENT_PIN",
"radosgw-admin mfa resync --uid=johndoe --totp-serial=MFAtest --totp-pin=802021 --totp-pin=439996",
"radosgw-admin mfa check --uid= USERID --totp-serial= SERIAL --totp-pin= PIN",
"radosgw-admin mfa check --uid=johndoe --totp-serial=MFAtest --totp-pin=870305 ok",
"radosgw-admin mfa list --uid= USERID",
"radosgw-admin mfa list --uid=johndoe { \"entries\": [ { \"type\": 2, \"id\": \"MFAtest\", \"seed\": \"492dedb20cf51d1405ef6a1316017e\", \"seed_type\": \"hex\", \"time_ofs\": 0, \"step_size\": 30, \"window\": 2 } ] }",
"radosgw-admin mfa get --uid= USERID --totp-serial= SERIAL",
"radosgw-admin mfa remove --uid= USERID --totp-serial= SERIAL",
"radosgw-admin mfa remove --uid=johndoe --totp-serial=MFAtest",
"radosgw-admin mfa get --uid= USERID --totp-serial= SERIAL",
"radosgw-admin mfa get --uid=johndoe --totp-serial=MFAtest MFA serial id not found",
"radosgw-admin object rm --bucket <bucket_name> --object <object_name> --object-version <object_version> --totp-pin <totp_token>",
"[radosgw-admin object rm --bucket test-mfa --object obj1 --object-version 4SdKdSmpVAarLLdsFukjUVEwr-2oWfC --totp-pin 562813",
"radosgw-admin bucket list --bucket <bucket_name>"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/object_gateway_guide/security |
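For illustration, the MFA check and resync procedures above need a current PIN from a one-time password generator, but the guide does not show how to produce one on the command line. The following is a hedged sketch only: it assumes the oathtool utility is available (one possible generator, not mentioned in the guide), that SEED holds the hexadecimal seed created earlier, and it reuses the johndoe and MFAtest names from the examples above.
PIN=$(oathtool --totp "$SEED")    # 6 digits and a 30-second step, matching the defaults shown in the mfa list output
radosgw-admin mfa check --uid=johndoe --totp-serial=MFAtest --totp-pin="$PIN"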
Chapter 3. Tuning the Number of Locks | Chapter 3. Tuning the Number of Locks Lock mechanisms in Directory Server control how many copies of Directory Server processes can run at the same time. For example, during an import job, Directory Server sets a lock in the /run/lock/dirsrv/slapd- instance_name /imports/ directory to prevent the ns-slapd (Directory Server) process, another import, or export operations from running. If the server runs out of available locks, the following error is logged in the /var/log/dirsrv/slapd- instance_name /errors file: If error messages indicate that the lock table is out of available locks, double the number of locks. If the problem persists, double the value again. 3.1. Manually Monitoring the Number of Locks To monitor the number of locks using the command line, enter: For details about the monitoring attributes, see the descriptions in the Directory Server Configuration, Command, and File Reference . | [
"libdb: Lock table is out of available locks",
"ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x -s sub -b \"cn=database,cn=monitor,cn=ldbm database,cn=plugins,cn=config\" nsslapd-db-current-locks nsslapd-db-max-locks"
]
| https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/performance_tuning_guide/locks |
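As a supplement to the tuning advice above, the size of the lock table is controlled by the nsslapd-db-locks attribute on the ldbm database plug-in configuration entry. The following is a hedged sketch only, assuming the default of 10000 locks is being doubled; the instance typically has to be restarted for the new value to take effect.
ldapmodify -D "cn=Directory Manager" -W -p 389 -h server.example.com -x <<EOF
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-db-locks
nsslapd-db-locks: 20000
EOF
# Restart the instance afterwards, for example: systemctl restart dirsrv@instance_name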
Chapter 8. Quota management architecture | Chapter 8. Quota management architecture With the quota management feature enabled, individual blob sizes are summed at the repository and namespace level. For example, if two tags in the same repository reference the same blob, the size of that blob is only counted once towards the repository total. Additionally, manifest list totals are counted toward the repository total. Important Because manifest list totals are counted toward the repository total, the total quota consumed when upgrading from a previous version of Red Hat Quay might be reported differently in Red Hat Quay 3.9. In some cases, the new total might go over a repository's previously-set limit. Red Hat Quay administrators might have to adjust the allotted quota of a repository to account for these changes. The quota management feature works by calculating the size of existing repositories and namespaces with a backfill worker, and then adding or subtracting from the total for every image that is pushed or garbage collected afterwards. Additionally, the subtraction from the total happens when the manifest is garbage collected. Note Because subtraction occurs from the total when the manifest is garbage collected, there is a delay in the size calculation until it is able to be garbage collected. For more information about garbage collection, see Red Hat Quay garbage collection . The following database tables hold the quota repository size, quota namespace size, and quota registry size, in bytes, of a Red Hat Quay repository within an organization: QuotaRepositorySize QuotaNameSpaceSize QuotaRegistrySize The organization size is calculated by the backfill worker to ensure that it is not duplicated. When an image push is initialized, the user's organization storage is validated to check if it is beyond the configured quota limits. If an image push exceeds defined quota limitations, a soft or hard check occurs: For a soft check, users are notified. For a hard check, the push is stopped. If storage consumption is within configured quota limits, the push is allowed to proceed. Image manifest deletion follows a similar flow, whereby the links between associated image tags and the manifest are deleted. Additionally, after the image manifest is deleted, the repository size is recalculated and updated in the QuotaRepositorySize , QuotaNameSpaceSize , and QuotaRegistrySize tables. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_architecture/quota-management-arch |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/release_notes_for_amq_interconnect_1.10/making-open-source-more-inclusive |
Migrating the Networking Service to the ML2/OVN Mechanism Driver | Migrating the Networking Service to the ML2/OVN Mechanism Driver Red Hat OpenStack Platform 16.2 Migrate the Networking service (neutron) from the ML2/OVS mechanism driver to the ML2/OVN mechanism driver OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/migrating_the_networking_service_to_the_ml2ovn_mechanism_driver/index |
Chapter 5. Managing user-owned OAuth access tokens | Chapter 5. Managing user-owned OAuth access tokens Users can review their own OAuth access tokens and delete any that are no longer needed. 5.1. Listing user-owned OAuth access tokens You can list your user-owned OAuth access tokens. Token names are not sensitive and cannot be used to log in. Procedure List all user-owned OAuth access tokens: USD oc get useroauthaccesstokens Example output NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full List user-owned OAuth access tokens for a particular OAuth client: USD oc get useroauthaccesstokens --field-selector=clientName="console" Example output NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full 5.2. Viewing the details of a user-owned OAuth access token You can view the details of a user-owned OAuth access token. Procedure Describe the details of a user-owned OAuth access token: USD oc describe useroauthaccesstokens <token_name> Example output Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none> 1 The token name, which is the sha256 hash of the token. Token names are not sensitive and cannot be used to log in. 2 The client name, which describes where the token originated from. 3 The value in seconds from the creation time before this token expires. 4 If there is a token inactivity timeout set for the OAuth server, this is the value in seconds from the creation time before this token can no longer be used. 5 The scopes for this token. 6 The user name associated with this token. 5.3. Deleting user-owned OAuth access tokens The oc logout command only invalidates the OAuth token for the active session. You can use the following procedure to delete any user-owned OAuth tokens that are no longer needed. Deleting an OAuth access token logs out the user from all sessions that use the token. Procedure Delete the user-owned OAuth access token: USD oc delete useroauthaccesstokens <token_name> Example output useroauthaccesstoken.oauth.openshift.io "<token_name>" deleted 5.4. 
Adding unauthenticated groups to cluster roles As a cluster administrator, you can add unauthenticated users to the following cluster roles in OpenShift Container Platform by creating a cluster role binding. Unauthenticated users do not have access to non-public cluster roles. This should only be done in specific use cases when necessary. You can add unauthenticated users to the following cluster roles: system:scope-impersonation system:webhook system:oauth-token-deleter self-access-reviewer Important Always verify compliance with your organization's security standards when modifying unauthenticated access. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file named add-<cluster_role>-unauth.yaml and add the following content: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated Apply the configuration by running the following command: USD oc apply -f add-<cluster_role>-unauth.yaml | [
"oc get useroauthaccesstokens",
"NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token1> openshift-challenging-client 2021-01-11T19:25:35Z 2021-01-12 19:25:35 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/implicit user:full <token2> openshift-browser-client 2021-01-11T19:27:06Z 2021-01-12 19:27:06 +0000 UTC https://oauth-openshift.apps.example.com/oauth/token/display user:full <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full",
"oc get useroauthaccesstokens --field-selector=clientName=\"console\"",
"NAME CLIENT NAME CREATED EXPIRES REDIRECT URI SCOPES <token3> console 2021-01-11T19:26:29Z 2021-01-12 19:26:29 +0000 UTC https://console-openshift-console.apps.example.com/auth/callback user:full",
"oc describe useroauthaccesstokens <token_name>",
"Name: <token_name> 1 Namespace: Labels: <none> Annotations: <none> API Version: oauth.openshift.io/v1 Authorize Token: sha256~Ksckkug-9Fg_RWn_AUysPoIg-_HqmFI9zUL_CgD8wr8 Client Name: openshift-browser-client 2 Expires In: 86400 3 Inactivity Timeout Seconds: 317 4 Kind: UserOAuthAccessToken Metadata: Creation Timestamp: 2021-01-11T19:27:06Z Managed Fields: API Version: oauth.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:authorizeToken: f:clientName: f:expiresIn: f:redirectURI: f:scopes: f:userName: f:userUID: Manager: oauth-server Operation: Update Time: 2021-01-11T19:27:06Z Resource Version: 30535 Self Link: /apis/oauth.openshift.io/v1/useroauthaccesstokens/<token_name> UID: f9d00b67-ab65-489b-8080-e427fa3c6181 Redirect URI: https://oauth-openshift.apps.example.com/oauth/token/display Scopes: user:full 5 User Name: <user_name> 6 User UID: 82356ab0-95f9-4fb3-9bc0-10f1d6a6a345 Events: <none>",
"oc delete useroauthaccesstokens <token_name>",
"useroauthaccesstoken.oauth.openshift.io \"<token_name>\" deleted",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated",
"oc apply -f add-<cluster_role>.yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authentication_and_authorization/managing-oauth-access-tokens |
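Because useroauthaccesstokens supports the clientName field selector, as shown in the listing procedure above, the same selector can be combined with oc delete to remove every token issued to a single OAuth client in one command. This is a hedged example; the console client name is taken from the listings above, and each deleted token logs you out of the sessions that used it.
oc delete useroauthaccesstokens --field-selector=clientName="console"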
12.2. Customizing Default Favorite Applications | 12.2. Customizing Default Favorite Applications Favorite applications are those visible on the GNOME Shell dash in the Activities Overview . You can use dconf to set the favorite applications for an individual user, or to set the same favorite applications for all users. 12.2.1. Setting Different Favorite Applications for Individual Users You can set the default favorite applications for an individual user by modifying their user database file found in ~/.config/dconf/user . The following sample uses dconf to set gedit , Terminal , and Nautilus as the default favorites for a user. The example code allows users to modify the list later, if they wish to do so. Example 12.3. Contents of /etc/dconf/profile : Example 12.4. Contents of ~/.config/dconf/user : Note You can also lock down the above settings to prevent users from changing them. See Section 9.5.1, "Locking Down Specific Settings" for more information. 12.2.2. Setting the Same Favorite Applications for All Users In order to have the same favorites for all users, you must modify system database files using dconf keyfiles. The following sample edits the dconf profile and then creates a keyfile to set the default favorite applications for all employees on the first floor of an organization. Example 12.5. Contents of /etc/dconf/profile : Note Settings from the user database file will take precedence over the settings in the first_floor database file, but locks introduced in the first_floor database file will take priority over those present in user . For more information about locks, see Section 9.5.1, "Locking Down Specific Settings" . Example 12.6. Contents of /etc/dconf/db/first_floor.d/00_floor1_settings : Incorporate your changes into the system databases by running the dconf update command. Users must log out and back in again before the system-wide settings take effect. | [
"This line allows the user to change the default favorites later user-db:user",
"Set gedit, terminal and nautilus as default favorites [org/gnome/shell] favorite-apps = [ 'gedit.desktop' , 'gnome-terminal.desktop' , 'nautilus.desktop' ]",
"user-db:user This line defines a system database called first_floor system-db:first_floor",
"This sample sets gedit, terminal and nautilus as default favorites for all users in the first floor [org/gnome/shell] favorite-apps = [ 'gedit.desktop' , 'gnome-terminal.desktop' , 'nautilus.desktop' ]"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/default-favorites |
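The note above about locking down the favorite-apps setting can be illustrated with a locks file. This is a hedged sketch: the file name 00_favorite_apps is arbitrary, and the lock is simply a plain-text list of the keys that users must not override, followed by a database update.
# Lock the favorite-apps key for users of the first_floor database
mkdir -p /etc/dconf/db/first_floor.d/locks
cat > /etc/dconf/db/first_floor.d/locks/00_favorite_apps <<EOF
/org/gnome/shell/favorite-apps
EOF
dconf update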
Chapter 2. Adding a single system to a group | Chapter 2. Adding a single system to a group In the Insights Workspaces application, you can add a single system to a group to manage it more easily. For example, you can easily mitigate vulnerabilities and update systems that are alike. With the Insights inventory group application, you can add a system to only one group. Prerequisites You have a Red Hat Hybrid Cloud Console account. You have registered the systems you plan to group with the Insights Inventory application. Procedure Access the Red Hat Hybrid Cloud Console platform and log in. From the console dashboard, navigate to Red Hat Insights > RHEL > Inventory > Systems . On the System page, add a single system to a group: Click the options icon (...) near the image and click Add to group . The Add to group window opens: Select one of the options: Add to an existing group: Select an existing group and click Add . Create a new group: Click the Create group button. Click Add . Verification If adding systems to the group was successful, you can see the systems added to the group on the page for your group. | null | https://docs.redhat.com/en/documentation/edge_management/1-latest/html/working_with_systems_in_the_insights_inventory_application/adding-a-single-system-to-a-group |
Installation configuration | Installation configuration OpenShift Container Platform 4.16 Cluster-wide configuration during installations Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installation_configuration/index |
Chapter 8. Viewing the status of the QuayRegistry object | Chapter 8. Viewing the status of the QuayRegistry object Lifecycle observability for a given Red Hat Quay deployment is reported in the status section of the corresponding QuayRegistry object. The Red Hat Quay Operator constantly updates this section, and this should be the first place to look for any problems or state changes in Red Hat Quay or its managed dependencies. 8.1. Viewing the registry endpoint Once Red Hat Quay is ready to be used, the status.registryEndpoint field will be populated with the publicly available hostname of the registry. 8.2. Viewing the version of Red Hat Quay in use The current version of Red Hat Quay that is running will be reported in status.currentVersion . 8.3. Viewing the conditions of your Red Hat Quay deployment Certain conditions will be reported in status.conditions . | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-quayregistry-status |
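For convenience, the fields described above can be read directly from the QuayRegistry object with oc and JSONPath. This is a hedged sketch; the registry name and namespace are placeholders for your own deployment.
# Registry endpoint and running version of an example QuayRegistry
oc get quayregistry example-registry -n quay-enterprise -o jsonpath='{.status.registryEndpoint}{"\n"}{.status.currentVersion}{"\n"}'
# Conditions, one per line, as type=status pairs
oc get quayregistry example-registry -n quay-enterprise -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'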
Block Storage Backup Guide | Block Storage Backup Guide Red Hat OpenStack Platform 16.0 Understanding, using, and managing the Block Storage backup service in OpenStack OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/block_storage_backup_guide/index |
3.2. Logical Volume Creation Overview | 3.2. Logical Volume Creation Overview The following is a summary of the steps to perform to create an LVM logical volume. Initialize the partitions you will use for the LVM volume as physical volumes (this labels them). Create a volume group. Create a logical volume. After creating the logical volume you can create and mount the file system. The examples in this document use GFS file systems. Create a GFS file system on the logical volume with the gfs_mkfs command. Create a new mount point with the mkdir command. In a clustered system, create the mount point on all nodes in the cluster. Mount the file system. You may want to add a line to the fstab file for each node in the system. Alternately, you can create and mount the GFS file system with the LVM GUI. Creating the LVM volume is machine independent, since the storage area for LVM setup information is on the physical volumes and not the machine where the volume was created. Servers that use the storage have local copies, but can recreate that from what is on the physical volumes. You can attach physical volumes to a different server if the LVM versions are compatible. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/creation_overview |
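The summary above can also be expressed as a command sequence. This is a hedged sketch only: the device, volume names, cluster name, journal count, and sizes are placeholders, and the gfs_mkfs options must match your own cluster configuration.
pvcreate /dev/sdb1                                     # label the partition as a physical volume
vgcreate new_vol_group /dev/sdb1                       # create a volume group
lvcreate -L 10G -n new_logical_volume new_vol_group    # create a logical volume
gfs_mkfs -p lock_dlm -t alpha:gfs1 -j 8 /dev/new_vol_group/new_logical_volume
mkdir /mnt/gfs1                                        # create the mount point on every node in a cluster
mount /dev/new_vol_group/new_logical_volume /mnt/gfs1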
Chapter 2. Running Java applications with Shenandoah garbage collector | Chapter 2. Running Java applications with Shenandoah garbage collector You can run your Java application with the Shenandoah garbage collector (GC). Prerequisites Installed Red Hat build of OpenJDK. See Installing Red Hat build of OpenJDK 11 on Red Hat Enterprise Linux in the Installing and using Red Hat build of OpenJDK 11 on RHEL guide. Procedure Run your Java application with Shenandoah GC by using the -XX:+UseShenandoahGC JVM option. Note that JVM options such as -XX:+UseShenandoahGC must be placed before the application class or JAR file on the java command line. USD java -XX:+UseShenandoahGC <PATH_TO_YOUR_APPLICATION> | [
"java <PATH_TO_YOUR_APPLICATION> -XX:+UseShenandoahGC"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_shenandoah_garbage_collector_with_red_hat_build_of_openjdk_11/running-application-with-shenandoah-gc |
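As an illustrative variant of the command above, Shenandoah is often run with an explicit heap size and GC logging so that its behaviour can be observed. The heap values and the use of a JAR file are examples only, not recommendations from this guide.
java -XX:+UseShenandoahGC -Xms2g -Xmx2g -Xlog:gc -jar my-application.jar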
Chapter 1. Data Grid release information | Chapter 1. Data Grid release information Learn about new features and get the latest Data Grid release information. 1.1. What is new in Data Grid 8.5.2 Data Grid 8.5.2 improves usability, increases performance, and enhances security. Find out what is new. Simple cache metrics updates Simple cache mode now provides the same metrics as the other cache modes such as local, distributed, replicated, and invalidation. This simplifies observing cache metrics in monitoring and alerting systems. Support for JDBC_PING2 protocol Data Grid 8.5.2 provides the ability to use the JDBC_PING2 protocol for JGroups discovery. It is recommended to use the JDBC_PING2 protocol rather than the JDBC_PING protocol with Data Grid 8.5.2. For more information, see JDBC_PING2 . Data Grid 8.5.2 security update Data Grid 8.5.2 provides a security enhancement to address CVEs. You must upgrade any Data Grid 8.5.1 deployments to version 8.5.2 as soon as possible. For more information, see the advisory related to this release RHSA-2024:10214 . Important Vector search queries are not supported in Data Grid. 1.2. What's new in Data Grid 8.5.1 Data Grid 8.5.1 improves usability, increases performance, and enhances security. Find out what's new. Ability to automatically reload SSL/TLS certificates From Data Grid 8.5.1 onward, when certificates are renewed, Data Grid monitors keystore files for changes and automatically reloads these files, without requiring a server or client restart. Note To ensure seamless operations during certificate rotation, use certificates that are signed by a certificate authority (CA) and configure both server and client truststores with the CA certificate. For more information, see SSL/TLS Certificate rotation . Ability to index keys for indexed remote queries Data Grid 8.5.1 introduces an Indexed type for keys. You can index the keys in a cache for indexed remote queries by defining these keys as Indexed . This enhancement means that you can index the key fields as well as the value fields, which allows the keys to be used in Ickle queries. For more information, see Queries by keys . 1.3. What's new in Data Grid 8.5.0 Data Grid 8.5.0 improves usability, increases performance, and enhances security. Find out what's new. Data Grid 8.5.0 security update Data Grid 8.5.0 provides a security enhancement to address CVEs. You must upgrade any Data Grid 8.4 deployments to version 8.5.0 as soon as possible. For more information, see the advisory related to this release RHSA-2024:4460 . Support for RESP protocol endpoint The Redis serialization protocol (RESP) endpoint in RHDG, which was provided as a technology preview feature in previous releases, is now fully supported. Additionally, the 8.5 release provides more Redis commands that you can use. For more information, see Using the RESP protocol endpoint with Data Grid . getAndSet REST operation for strong counters This release introduces a new getAndSet REST (Representational State Transfer) operation for strong counters. The getAndSet operation atomically sets values for strong counters with POST requests. If the operation is successful, Data Grid returns the value in the payload. For more information, see Performing getAndSet atomic operations on strong counters . Aggregate security realm This release introduces a new security realm called aggregate security realm. You can use an aggregate security realm to combine multiple security realms: one for authentication and the others for authorization.
For more information, see Aggregate security realm . New Memcached connector The RHDG 8.5 release replaces the old Memcached connector with a new connector. The new Memcached connector provides the following improvements: Support for both TEXT and BINARY protocols Ability to use security realms for authentication Support for TLS encryption Performance improvements Auto detection of protocol Note For RHDG to auto-detect the text protocol, clients must send a "fake" SET operation to authenticate on connection. If this is not possible for the applications, you must create a Memcached connector on a dedicated port without authentication. Thread dump on CacheBackpressureFullException The most likely cause of a CacheBackpressureFullException exception is either hung threads or server overload. Data Grid now creates periodic thread dumps on CacheBackpressureFullException so that you can analyze the cause. By default, the interval between two thread dumps is 60 seconds. Ability to set a stable topology By default, after a cluster shutdown, Data Grid waits for all nodes to join the cluster and restore the topology. However, you can now mark the current topology stable for a specific cache by using either the CLI or the REST API. CLI command For more information, see Setting a stable Topology . REST command For more information, see Set a Stable Topology . Enhancement to ProtoStream logging in MassIndexer MassIndexer now displays the protobuf message name instead of the class name in log messages for ProtoStream objects to improve the clarity of the messages. OpenTelemetry Tracing integration New spans have been introduced to add tracing capabilities to container, persistence, cluster, xsite, and security so that telemetry can be exported to and consumed by OpenTelemetry. Support for JBoss Marshalling JBoss Marshalling was deprecated in Data Grid 8.4.6 and earlier versions. It is fully supported in Data Grid 8.5.0. 1.4. Removal notice for Data Grid release 8.5.0 Data Grid release 8.5.0 removes the following features. RHDG clients The following HotRod clients are no longer provided with RHDG: .NET client C++ client node.js client However, you can continue using older clients with RHDG 8.5. Java EE dependencies Support for Java EE dependencies has been removed. All applications added to the RHDG server, and client HotRod applications must be updated to use Jakarta EE dependencies. JBoss EAP modules RHDG modules for Red Hat JBoss EAP applications are no longer distributed as a part of the RHDG release. JBoss EAP users can use the Infinispan subsystem that is integrated within the JBoss EAP product release without the need to separately install RHDG modules. For more information, see EAP 8 now supports full Infinispan functionality, including query, counters, locks, and CDI . JCache CDI support RHDG 8.5 removes support for JCache (JSR 107). As an alternative, use other caching API developments in the Jakarta EE ecosystem. Java 11 support RHDG 8.5 removes support for Java 11. The minimum supported Java version for RHDG 8.5 is Java 17. Client HotRod applications that require Java 11 can continue using older versions of client libraries. Tomcat session manager Tomcat session manager is not distributed with RHDG 8.5. RHDG server on Windows Deploying RHDG server on Windows Server 2019 is no longer supported. Spring support Using RHDG with Spring Boot 2.x and Spring 5.x is no longer supported. 1.5. Supported Java versions in Data Grid 8.5 Red Hat supports different Java versions, depending on how you install Data Grid.
Removal of Java 11 support In Data Grid 8.5, support for Java 11 is removed. Users of Data Grid 8.5 must upgrade their applications at least to Java 17. You can continue using older Hot Rod Java client versions in combination with the latest Data Grid Server version. However, if you continue using an older version of the client, you will miss fixes and enhancements. Supported Java versions in Data Grid 8.5 Embedded caches Red Hat supports Java 17 and Java 21 when using Data Grid for embedded caches in custom applications. Remote caches Red Hat supports Java 17 and Java 21 for Data Grid Server installations. For Hot Rod Java clients, Red Hat supports Java 17 and Java 21. In summary, Red Hat supports Java 17 and Java 21 for Data Grid Server, Hot Rod Java clients, and when using Data Grid for embedded caches in custom applications. Note When running Data Grid Server on bare metal installations, the JavaScript engine is not available with Java 17. Additional resources Supported Configurations for Data Grid 8.5 Data Grid Deprecated Features and Functionality | [
"topology set-stable",
"POST /rest/v2/caches/{cacheName}?action=initialize&force={FORCE}"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/red_hat_data_grid_8.5_release_notes/rhdg-releases |
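To make the stable-topology REST operation above concrete, it can be issued with curl. This is a hedged example; the host, port, credentials, and cache name are placeholders, with 11222 being the default single-port endpoint.
curl -X POST -u admin:changeme "http://localhost:11222/rest/v2/caches/mycache?action=initialize&force=true"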
Chapter 90. workbook | Chapter 90. workbook This chapter describes the commands under the workbook command. 90.1. workbook create Create new workbook. Usage: Table 90.1. Positional arguments Value Summary definition Workbook definition file Table 90.2. Command arguments Value Summary -h, --help Show this help message and exit --public With this flag workbook will be marked as "public". --namespace [NAMESPACE] Namespace to create the workbook within. Table 90.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 90.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 90.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.2. workbook definition show Show workbook definition. Usage: Table 90.7. Positional arguments Value Summary name Workbook name Table 90.8. Command arguments Value Summary -h, --help Show this help message and exit 90.3. workbook delete Delete workbook. Usage: Table 90.9. Positional arguments Value Summary workbook Name of workbook(s). Table 90.10. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to delete the workbook(s) from. 90.4. workbook list List all workbooks. Usage: Table 90.11. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 90.12. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 90.13. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 90.14. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 90.15. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. 
implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.5. workbook show Show specific workbook. Usage: Table 90.16. Positional arguments Value Summary workbook Workbook name Table 90.17. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to get the workbook from. Table 90.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 90.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 90.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.6. workbook update Update workbook. Usage: Table 90.22. Positional arguments Value Summary definition Workbook definition file Table 90.23. Command arguments Value Summary -h, --help Show this help message and exit --namespace [NAMESPACE] Namespace to update the workbook in. --public With this flag workbook will be marked as "public". Table 90.24. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 90.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 90.26. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 90.7. workbook validate Validate workbook. Usage: Table 90.28. Positional arguments Value Summary definition Workbook definition file Table 90.29. Command arguments Value Summary -h, --help Show this help message and exit Table 90.30. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 90.31. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 90.32. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 90.33. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack workbook create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--public] [--namespace [NAMESPACE]] definition",
"openstack workbook definition show [-h] name",
"openstack workbook delete [-h] [--namespace [NAMESPACE]] workbook [workbook ...]",
"openstack workbook list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS]",
"openstack workbook show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] workbook",
"openstack workbook update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--namespace [NAMESPACE]] [--public] definition",
"openstack workbook validate [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] definition"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/workbook |
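As a usage illustration for the create and definition show subcommands above, a minimal workbook definition can be written to a file and registered. The workbook name and workflow are placeholders, and the YAML is a hedged sketch of the Mistral v2 DSL.
cat > my_workbook.yaml <<EOF
---
version: '2.0'
name: my_workbook
workflows:
  greet:
    type: direct
    tasks:
      say_hello:
        action: std.echo output="Hello from my_workbook"
EOF
openstack workbook create my_workbook.yaml --public
openstack workbook definition show my_workbook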
function::task_open_file_handles | function::task_open_file_handles Name function::task_open_file_handles - The number of open files of the task Synopsis Arguments task task_struct pointer Description This function returns the number of open file handles for the given task. | [
"task_open_file_handles:long(task:long)"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-task-open-file-handles |
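A brief usage sketch: combined with the standard task tapset helpers, this function can report the open file handle count of a target process every few seconds. This is a hedged example; the PID is a placeholder, and pid2task and target are assumed to be available from the stock tapsets.
stap -x 1234 -e 'probe timer.s(5) {
  printf("open file handles: %d\n", task_open_file_handles(pid2task(target())))
}'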
Chapter 3. Installing a cluster on OpenStack with customizations | Chapter 3. Installing a cluster on OpenStack with customizations In OpenShift Container Platform version 4.13, you can install a customized cluster on Red Hat OpenStack Platform (RHOSP). To customize the installation, modify parameters in the install-config.yaml before you install the cluster. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.13 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have a storage service installed in RHOSP, such as block storage (Cinder) or object storage (Swift). Object storage is the recommended storage technology for OpenShift Container Platform registry cluster deployment. For more information, see Optimizing storage . You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . You have the metadata service enabled in RHOSP. 3.2. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 3.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 3.2.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.2.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 3.2.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. 
After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 3.2.4. Load balancing requirements for user-provisioned infrastructure Important Deployment with User-Managed Load Balancers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Before you install OpenShift Container Platform, you can provision your own API and application ingress load balancing infrastructure to use in place of the default, internal load balancing solution. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 3.2. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. 
This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 3.3. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 3.2.4.1. Example load balancer configuration for clusters that are deployed with user-managed load balancers This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters that are deployed with user-managed load balancers. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 3.1. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. 
During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.4. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 3.5. Configuring an image registry with custom storage on clusters that run on RHOSP After you install a cluster on Red Hat OpenStack Platform (RHOSP), you can use a Cinder volume that is in a specific availability zone for registry storage. Procedure Create a YAML file that specifies the storage class and availability zone to use. For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name> Note OpenShift Container Platform does not verify the existence of the availability zone you choose. Verify the name of the availability zone before you apply the configuration. From a command line, apply the configuration: USD oc apply -f <storage_class_file_name> Example output storageclass.storage.k8s.io/custom-csi-storageclass created Create a YAML file that specifies a persistent volume claim (PVC) that uses your storage class and the openshift-image-registry namespace. For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: "true" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3 1 Enter the namespace openshift-image-registry . This namespace allows the Cluster Image Registry Operator to consume the PVC. 2 Optional: Adjust the volume size. 3 Enter the name of the storage class that you created. 
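Note Before you apply the claim, you can verify that the storage class that you created earlier in this procedure is available, for example:
USD oc get storageclass custom-csi-storageclass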
From a command line, apply the configuration: USD oc apply -f <pvc_file_name> Example output persistentvolumeclaim/csi-pvc-imageregistry created Replace the original persistent volume claim in the image registry configuration with the new claim: USD oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{"op": "replace", "path": "/spec/storage/pvc/claim", "value": "csi-pvc-imageregistry"}]' Example output config.imageregistry.operator.openshift.io/cluster patched Over the next several minutes, the configuration is updated. Verification To confirm that the registry is using the resources that you defined: Verify that the PVC claim value is identical to the name that you provided in your PVC definition: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output ... status: ... managementState: Managed pvc: claim: csi-pvc-imageregistry ... Verify that the status of the PVC is Bound : USD oc get pvc -n openshift-image-registry csi-pvc-imageregistry Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m 3.6. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Important If the external network's CIDR range overlaps one of the default network ranges, you must change the matching network ranges in the install-config.yaml file before you start the installation process. The default network ranges are: Network Range machineNetwork 10.0.0.0/16 serviceNetwork 172.30.0.0/16 clusterNetwork 10.128.0.0/14 Warning If the installation program finds multiple networks with the same name, it sets one of them at random. To avoid this behavior, create unique names for resources in RHOSP. Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 3.7. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, login information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml .
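For example, to keep the password out of clouds.yaml , you can repeat only the sensitive keys for the matching cloud entry in a companion secure.yaml file. The following sketch assumes a cloud entry that is named shiftstack and tooling that supports the standard secure.yaml companion file, which is merged with clouds.yaml when the cloud configuration is read:
clouds:
  shiftstack:
    auth:
      password: <password>
Place secure.yaml in one of the locations that are typically searched for clouds.yaml and restrict its file permissions.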
If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 3.8. Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case for clusters that do not use Kuryr. For example: #... [LoadBalancer] use-octavia=true 1 lb-provider = "amphora" 2 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 3 create-monitor = True 4 monitor-delay = 10s 5 monitor-timeout = 10s 6 monitor-max-retries = 1 7 #... 1 This property enables Octavia integration. 2 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 3 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 4 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 5 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. 
This property is required if the value of the create-monitor property is True . 6 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 7 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Important For installations that use Kuryr, Kuryr handles relevant services. There is no need to configure Octavia load balancing in the cloud provider. Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 3.9. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 3.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). 
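Tip Before you create the installation configuration file, you can confirm that the installation program that you extracted runs on your host by printing its version, for example:
USD ./openshift-install version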
Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Installation configuration parameters section for more information about the available parameters. 3.10.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. 
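Tip You can check from the host that runs the installation program that the proxy accepts outbound HTTPS requests before you start the installation. This is a minimal sketch that assumes curl is available; substitute your own proxy URL and a site that your cluster requires:
USD curl -I -x http://<username>:<pswd>@<ip>:<port> https://quay.io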
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.11. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. 
If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 3.11.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 3.4. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 3.11.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 3.5. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. 
An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 3.11.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 3.6. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. 
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. 
String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 3.11.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 3.7. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 3.11.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 3.8. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. 
Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installation program selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. Additional networks that are attached to a control plane machine are also attached to the bootstrap node. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installation program selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. 
For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.loadbalancer Whether or not to use the default, internal load balancer. If the value is set to UserManaged , this default load balancer is disabled so that you can deploy a cluster that uses an external, user-managed load balancer. If the parameter is not set, or if the value is OpenShiftManagedDefault , the cluster uses the default load balancer. UserManaged or OpenShiftManagedDefault . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 3.11.6. RHOSP parameters for failure domains Important RHOSP failure domains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat OpenStack Platform (RHOSP) deployments do not have a single implementation of failure domains. Instead, availability zones are defined individually for each service, such as the compute service, Nova; the networking service, Neutron; and the storage service, Cinder. 
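Tip You can list the availability zones that each service exposes in your deployment before you map them to failure domains. A minimal sketch, assuming the python-openstackclient tool is installed:
USD openstack availability zone list --compute
USD openstack availability zone list --volume
USD openstack availability zone list --network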
Beginning with OpenShift Container Platform 4.13, there is a unified definition of failure domains for RHOSP deployments that covers all supported availability zone types. You can use failure domains to control related aspects of Nova, Neutron, and Cinder configurations from a single place. In RHOSP, a port describes a network connection and maps to an interface inside a compute machine. A port also: Is defined by a network or by one or more subnets Connects a machine to one or more subnets Failure domains group the services of your deployment by using ports. If you use failure domains, each machine connects to: The portTarget object with the ID control-plane while that object exists. All non-control-plane portTarget objects within its own failure domain. All networks in the machine pool's additionalNetworkIDs list. To configure failure domains for a machine pool, edit availability zone and port target parameters under controlPlane.platform.openstack.failureDomains . Table 3.9. RHOSP parameters for failure domains Parameter Description Values platform.openstack.failuredomains.computeAvailabilityZone An availability zone for the server. If not specified, the cluster default is used. The name of the availability zone. For example, nova-1 . platform.openstack.failuredomains.storageAvailabilityZone An availability zone for the root volume. If not specified, the cluster default is used. The name of the availability zone. For example, cinder-1 . platform.openstack.failuredomains.portTargets A list of portTarget objects, each of which defines a network connection to attach to machines within a failure domain. A list of portTarget objects. platform.openstack.failuredomains.portTargets.portTarget.id The ID of an individual port target. To select that port target as the first network for machines, set the value of this parameter to control-plane . If this parameter has a different value, it is ignored. control-plane or an arbitrary string. platform.openstack.failuredomains.portTargets.portTarget.network Required. The name or ID of the network to attach to machines in the failure domain. A network object that contains either a name or UUID. For example: network: id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6 or: network: name: my-network-1 platform.openstack.failuredomains.portTargets.portTarget.fixedIPs Subnets to allocate fixed IP addresses to. These subnets must exist within the same network as the port. A list of subnet objects. Note You cannot combine zone fields and failure domains. If you want to use failure domains, the controlPlane.zone and controlPlane.rootVolume.zone fields must be left unset. 3.11.7. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's UUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork .
The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 3.11.8. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the install-config.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . If your cluster runs on an RHOSP version that is more than 16.1.6 and less than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes. The RHOSP network supports both VM and bare metal server attachment. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an install-config.yaml file as part of the OpenShift Container Platform installation process. Procedure In the install-config.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of controlPlane.platform.openstack.type to a bare metal flavor. Change the value of compute.platform.openstack.type to a bare metal flavor. If you want to deploy your machines on a pre-existing network, change the value of platform.openstack.machinesSubnet to the RHOSP subnet UUID of the network. Control plane and compute machines must use the same subnet. An example bare metal install-config.yaml file controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 ... compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 ... platform: openstack: machinesSubnet: <subnet_UUID> 3 ... 
1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. 3 If you want to use a pre-existing network, change this value to the UUID of the RHOSP subnet. Use the updated install-config.yaml file to complete the installation process. The compute machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 3.11.9. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 3.11.9.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. 
Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 3.11.9.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 3.11.10. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 3.11.11. Example installation configuration section that uses failure domains Important RHOSP failure domains is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following section of an install-config.yaml file demonstrates the use of failure domains in a cluster to deploy on Red Hat OpenStack Platform (RHOSP): # ... controlPlane: name: master platform: openstack: type: m1.large failureDomains: - computeAvailabilityZone: 'nova-1' storageAvailabilityZone: 'cinder-1' portTargets: - id: control-plane network: id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6 - computeAvailabilityZone: 'nova-2' storageAvailabilityZone: 'cinder-2' portTargets: - id: control-plane network: id: 39a7b82a-a8a4-45a4-ba5a-288569a6edd1 - computeAvailabilityZone: 'nova-3' storageAvailabilityZone: 'cinder-3' portTargets: - id: control-plane network: id: 8e4b4e0d-3865-4a9b-a769-559270271242 featureSet: TechPreviewNoUpgrade # ... 3.11.12. Installation configuration for a cluster on OpenStack with a user-managed load balancer Important Deployment on OpenStack with User-Managed Load Balancers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following example install-config.yaml file demonstrates how to configure a cluster that uses an external, user-managed load balancer rather than the default internal load balancer. apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2 featureSet: TechPreviewNoUpgrade 3 1 Regardless of which load balancer you use, the load balancer is deployed to this subnet. 2 The UserManaged value indicates that you are using an user-managed load balancer. 3 Because user-managed load balancers are in Technology Preview, you must include the TechPreviewNoUpgrade value to deploy a cluster that uses a user-managed load balancer. 3.12. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. 
After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.13. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 3.13.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. 
IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 3.13.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 3.14. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. 
An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.15. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 3.16. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.18. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . | [
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"openstack role add --user <user> --project <project> swiftoperator",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>",
"oc apply -f <storage_class_file_name>",
"storageclass.storage.k8s.io/custom-csi-storageclass created",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3",
"oc apply -f <pvc_file_name>",
"persistentvolumeclaim/csi-pvc-imageregistry created",
"oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'",
"config.imageregistry.operator.openshift.io/cluster patched",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"status: managementState: Managed pvc: claim: csi-pvc-imageregistry",
"oc get pvc -n openshift-image-registry csi-pvc-imageregistry",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"openshift-install --dir <destination_directory> create manifests",
"vi openshift/manifests/cloud-provider-config.yaml",
"# [LoadBalancer] use-octavia=true 1 lb-provider = \"amphora\" 2 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 3 create-monitor = True 4 monitor-delay = 10s 5 monitor-timeout = 10s 6 monitor-max-retries = 1 7 #",
"oc edit configmap -n openshift-config cloud-provider-config",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }",
"network: id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6",
"network: name: my-network-1",
"controlPlane: platform: openstack: type: <bare_metal_control_plane_flavor> 1 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: openstack: type: <bare_metal_compute_flavor> 2 replicas: 3 platform: openstack: machinesSubnet: <subnet_UUID> 3",
"./openshift-install wait-for install-complete --log-level debug",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"controlPlane: name: master platform: openstack: type: m1.large failureDomains: - computeAvailabilityZone: 'nova-1' storageAvailabilityZone: 'cinder-1' portTargets: - id: control-plane network: id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6 - computeAvailabilityZone: 'nova-2' storageAvailabilityZone: 'cinder-2' portTargets: - id: control-plane network: id: 39a7b82a-a8a4-45a4-ba5a-288569a6edd1 - computeAvailabilityZone: 'nova-3' storageAvailabilityZone: 'cinder-3' portTargets: - id: control-plane network: id: 8e4b4e0d-3865-4a9b-a769-559270271242 featureSet: TechPreviewNoUpgrade",
"apiVersion: v1 baseDomain: mydomain.test compute: - name: worker platform: openstack: type: m1.xlarge replicas: 3 controlPlane: name: master platform: openstack: type: m1.xlarge replicas: 3 metadata: name: mycluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 192.168.10.0/24 platform: openstack: cloud: mycloud machinesSubnet: 8586bf1a-cc3c-4d40-bdf6-c243decc603a 1 apiVIPs: - 192.168.10.5 ingressVIPs: - 192.168.10.7 loadBalancer: type: UserManaged 2 featureSet: TechPreviewNoUpgrade 3",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_openstack/installing-openstack-installer-custom |
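A quick post-deployment sanity check, sketched here with placeholder values for the cluster name, base domain, and installation directory, is to confirm that the API record you created resolves to the API floating IP and that the cluster Operators have all become available:
$ dig +short api.<cluster_name>.<base_domain>
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
$ oc get clusteroperators
$ oc get nodes -o wide
If any Operator stays in a Progressing or Degraded state well after the installer reports success, oc describe clusteroperator <name> is usually the fastest way to find the blocking condition.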
Chapter 10. Planning your environment according to object maximums | Chapter 10. Planning your environment according to object maximums Consider the following tested object maximums when you plan your OpenShift Container Platform cluster. These guidelines are based on the largest possible cluster. For smaller clusters, the maximums are lower. There are many factors that influence the stated thresholds, including the etcd version or storage data format. Important These guidelines apply to OpenShift Container Platform with software-defined networking (SDN), not Open Virtual Network (OVN). In most cases, exceeding these numbers results in lower overall performance. It does not necessarily mean that the cluster will fail. 10.1. OpenShift Container Platform tested cluster maximums for major releases Tested Cloud Platforms for OpenShift Container Platform 3.x: Red Hat OpenStack Platform (RHOSP), Amazon Web Services and Microsoft Azure. Tested Cloud Platforms for OpenShift Container Platform 4.x: Amazon Web Services, Microsoft Azure and Google Cloud Platform. Maximum type 3.x tested maximum 4.x tested maximum Number of nodes 2,000 2,000 Number of pods [1] 150,000 150,000 Number of pods per node 250 500 [2] Number of pods per core There is no default value. There is no default value. Number of namespaces [3] 10,000 10,000 Number of builds 10,000 (Default pod RAM 512 Mi) - Pipeline Strategy 10,000 (Default pod RAM 512 Mi) - Source-to-Image (S2I) build strategy Number of pods per namespace [4] 25,000 25,000 Number of routes and back ends per Ingress Controller 2,000 per router 2,000 per router Number of secrets 80,000 80,000 Number of config maps 90,000 90,000 Number of services [5] 10,000 10,000 Number of services per namespace 5,000 5,000 Number of back-ends per service 5,000 5,000 Number of deployments per namespace [4] 2,000 2,000 Number of build configs 12,000 12,000 Number of secrets 40,000 40,000 Number of custom resource definitions (CRD) There is no default value. 512 [6] The pod count displayed here is the number of test pods. The actual number of pods depends on the application's memory, CPU, and storage requirements. This was tested on a cluster with 100 worker nodes with 500 pods per worker node. The default maxPods is still 250. To get to 500 maxPods , the cluster must be created with a maxPods set to 500 using a custom kubelet config. If you need 500 user pods, you need a hostPrefix of 22 because there are 10-15 system pods already running on the node. The maximum number of pods with attached persistent volume claims (PVC) depends on storage backend from where PVC are allocated. In our tests, only OpenShift Container Storage (OCS v4) was able to satisfy the number of pods per node discussed in this document. When there are a large number of active projects, etcd might suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, is highly recommended to free etcd storage. There are a number of control loops in the system that must iterate over all objects in a given namespace as a reaction to some changes in state. Having a large number of objects of a given type in a single namespace can make those loops expensive and slow down processing given state changes. The limit assumes that the system has enough CPU, memory, and disk to satisfy the application requirements. Each service port and each service back-end has a corresponding entry in iptables. 
The number of back-ends of a given service impact the size of the endpoints objects, which impacts the size of data that is being sent all over the system. OpenShift Container Platform has a limit of 512 total custom resource definitions (CRD), including those installed by OpenShift Container Platform, products integrating with OpenShift Container Platform and user created CRDs. If there are more than 512 CRDs created, then there is a possibility that oc commands requests may be throttled. Note Red Hat does not provide direct guidance on sizing your OpenShift Container Platform cluster. This is because determining whether your cluster is within the supported bounds of OpenShift Container Platform requires careful consideration of all the multidimensional factors that limit the cluster scale. 10.2. OpenShift Container Platform environment and configuration on which the cluster maximums are tested AWS cloud platform: Node Flavor vCPU RAM(GiB) Disk type Disk size(GiB)/IOS Count Region Master/etcd [1] r5.4xlarge 16 128 io1 220 / 3000 3 us-west-2 Infra [2] m5.12xlarge 48 192 gp2 100 3 us-west-2 Workload [3] m5.4xlarge 16 64 gp2 500 [4] 1 us-west-2 Worker m5.2xlarge 8 32 gp2 100 3/25/250/500 [5] us-west-2 io1 disks with 3000 IOPS are used for master/etcd nodes as etcd is I/O intensive and latency sensitive. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale. Workload node is dedicated to run performance and scalability workload generators. Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run. Cluster is scaled in iterations and performance and scalability tests are executed at the specified node counts. 10.3. How to plan your environment according to tested cluster maximums Important Oversubscribing the physical resources on a node affects resource guarantees the Kubernetes scheduler makes during pod placement. Learn what measures you can take to avoid memory swapping. Some of the tested maximums are stretched only in a single dimension. They will vary when many objects are running on the cluster. The numbers noted in this documentation are based on Red Hat's test methodology, setup, configuration, and tunings. These numbers can vary based on your own individual setup and environments. While planning your environment, determine how many pods are expected to fit per node: The current maximum number of pods per node is 250. However, the number of pods that fit on a node is dependent on the application itself. Consider the application's memory, CPU, and storage requirements, as described in How to plan your environment according to application requirements . Example scenario If you want to scope your cluster for 2200 pods per cluster, you would need at least five nodes, assuming that there are 500 maximum pods per node: If you increase the number of nodes to 20, then the pod distribution changes to 110 pods per node: Where: 10.4. How to plan your environment according to application requirements Consider an example application environment: Pod type Pod quantity Max memory CPU cores Persistent storage apache 100 500 MB 0.5 1 GB node.js 200 1 GB 1 1 GB postgresql 100 1 GB 2 10 GB JBoss EAP 100 1 GB 1 1 GB Extrapolated requirements: 550 CPU cores, 450GB RAM, and 1.4TB storage. Instance size for nodes can be modulated up or down, depending on your preference. Nodes are often resource overcommitted. 
In this deployment scenario, you can choose to run additional smaller nodes or fewer larger nodes to provide the same amount of resources. Factors such as operational agility and cost-per-instance should be considered. Node type Quantity CPUs RAM (GB) Nodes (option 1) 100 4 16 Nodes (option 2) 50 8 32 Nodes (option 3) 25 16 64 Some applications lend themselves well to overcommitted environments, and some do not. Most Java applications and applications that use huge pages are examples of applications that would not allow for overcommitment. That memory can not be used for other applications. In the example above, the environment would be roughly 30 percent overcommitted, a common ratio. The application pods can access a service either by using environment variables or DNS. If using environment variables, for each active service the variables are injected by the kubelet when a pod is run on a node. A cluster-aware DNS server watches the Kubernetes API for new services and creates a set of DNS records for each one. If DNS is enabled throughout your cluster, then all pods should automatically be able to resolve services by their DNS name. Service discovery using DNS can be used in case you must go beyond 5000 services. When using environment variables for service discovery, the argument list exceeds the allowed length after 5000 services in a namespace, then the pods and deployments will start failing. Disable the service links in the deployment's service specification file to overcome this: --- apiVersion: template.openshift.io/v1 kind: Template metadata: name: deployment-config-template creationTimestamp: annotations: description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service. tags: '' objects: - apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: deploymentconfigUSD{IDENTIFIER} spec: template: metadata: labels: name: replicationcontrollerUSD{IDENTIFIER} spec: enableServiceLinks: false containers: - name: pauseUSD{IDENTIFIER} image: "USD{IMAGE}" ports: - containerPort: 8080 protocol: TCP env: - name: ENVVAR1_USD{IDENTIFIER} value: "USD{ENV_VALUE}" - name: ENVVAR2_USD{IDENTIFIER} value: "USD{ENV_VALUE}" - name: ENVVAR3_USD{IDENTIFIER} value: "USD{ENV_VALUE}" - name: ENVVAR4_USD{IDENTIFIER} value: "USD{ENV_VALUE}" resources: {} imagePullPolicy: IfNotPresent capabilities: {} securityContext: capabilities: {} privileged: false restartPolicy: Always serviceAccount: '' replicas: 1 selector: name: replicationcontrollerUSD{IDENTIFIER} triggers: - type: ConfigChange strategy: type: Rolling - apiVersion: v1 kind: Service metadata: name: serviceUSD{IDENTIFIER} spec: selector: name: replicationcontrollerUSD{IDENTIFIER} ports: - name: serviceportUSD{IDENTIFIER} protocol: TCP port: 80 targetPort: 8080 portalIP: '' type: ClusterIP sessionAffinity: None status: loadBalancer: {} parameters: - name: IDENTIFIER description: Number to append to the name of resources value: '1' required: true - name: IMAGE description: Image to use for deploymentConfig value: gcr.io/google-containers/pause-amd64:3.0 required: false - name: ENV_VALUE description: Value to use for environment variables generate: expression from: "[A-Za-z0-9]{255}" required: false labels: template: deployment-config-template The number of application pods that can run in a namespace is dependent on the number of services and the length of the service name when the environment variables are used for service discovery. 
ARG_MAX on the system defines the maximum argument length for a new process and is set to 2097152 bytes (2 MiB) by default. The kubelet injects environment variables into each pod scheduled to run in the namespace, including: <SERVICE_NAME>_SERVICE_HOST=<IP> <SERVICE_NAME>_SERVICE_PORT=<PORT> <SERVICE_NAME>_PORT=tcp://<IP>:<PORT> <SERVICE_NAME>_PORT_<PORT>_TCP=tcp://<IP>:<PORT> <SERVICE_NAME>_PORT_<PORT>_TCP_PROTO=tcp <SERVICE_NAME>_PORT_<PORT>_TCP_PORT=<PORT> <SERVICE_NAME>_PORT_<PORT>_TCP_ADDR=<ADDR> The pods in the namespace start to fail if the combined argument length exceeds this limit, and the number of characters in each service name determines how quickly the limit is reached. For example, in a namespace with 5000 services, the limit on the service name is 33 characters, which enables you to run 5000 pods in the namespace. | [
"required pods per cluster / pods per node = total number of nodes needed",
"2200 / 500 = 4.4",
"2200 / 20 = 110",
"required pods per cluster / total number of nodes = expected pods per node",
"--- apiVersion: template.openshift.io/v1 kind: Template metadata: name: deployment-config-template creationTimestamp: annotations: description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service. tags: '' objects: - apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: deploymentconfigUSD{IDENTIFIER} spec: template: metadata: labels: name: replicationcontrollerUSD{IDENTIFIER} spec: enableServiceLinks: false containers: - name: pauseUSD{IDENTIFIER} image: \"USD{IMAGE}\" ports: - containerPort: 8080 protocol: TCP env: - name: ENVVAR1_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR2_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR3_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR4_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" resources: {} imagePullPolicy: IfNotPresent capabilities: {} securityContext: capabilities: {} privileged: false restartPolicy: Always serviceAccount: '' replicas: 1 selector: name: replicationcontrollerUSD{IDENTIFIER} triggers: - type: ConfigChange strategy: type: Rolling - apiVersion: v1 kind: Service metadata: name: serviceUSD{IDENTIFIER} spec: selector: name: replicationcontrollerUSD{IDENTIFIER} ports: - name: serviceportUSD{IDENTIFIER} protocol: TCP port: 80 targetPort: 8080 portalIP: '' type: ClusterIP sessionAffinity: None status: loadBalancer: {} parameters: - name: IDENTIFIER description: Number to append to the name of resources value: '1' required: true - name: IMAGE description: Image to use for deploymentConfig value: gcr.io/google-containers/pause-amd64:3.0 required: false - name: ENV_VALUE description: Value to use for environment variables generate: expression from: \"[A-Za-z0-9]{255}\" required: false labels: template: deployment-config-template"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/scalability_and_performance/planning-your-environment-according-to-object-maximums |
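To make the node-count arithmetic above concrete, the following sketch rounds the division up instead of truncating it and then reads back what a running node actually advertises as its allocatable pod capacity; the pod counts and node name are placeholders:
$ required_pods=2200; pods_per_node=500
$ echo $(( (required_pods + pods_per_node - 1) / pods_per_node ))
5
$ oc get node <node_name> -o jsonpath='{.status.allocatable.pods}{"\n"}'
250
The second value reflects the maxPods setting that is in effect on the node, so it only reaches 500 if you applied the custom kubelet configuration described in the footnotes above.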
Appendix D. Ceph File System mirrors configuration reference | Appendix D. Ceph File System mirrors configuration reference This section lists configuration options for Ceph File System (CephFS) mirrors. cephfs_mirror_max_concurrent_directory_syncs Description Maximum number of directory snapshots that can be synchronized concurrently by cephfs-mirror daemon. Controls the number of synchronization threads. Type Integer Default 3 Min 1 cephfs_mirror_action_update_interval Description Interval in seconds to process pending mirror update actions. Type secs Default 2 Min 1 cephfs_mirror_restart_mirror_on_blocklist_interval Description Interval in seconds to restart blocklisted mirror instances. Setting to zero ( 0 ) disables restarting blocklisted instances. Type secs Default 30 Min 0 cephfs_mirror_max_snapshot_sync_per_cycle Description Maximum number of snapshots to mirror when a directory is picked up for mirroring by worker threads. Type Integer Default 3 Min 1 cephfs_mirror_directory_scan_interval Description Interval in seconds to scan configured directories for snapshot mirroring. Type Integer Default 10 Min 1 cephfs_mirror_max_consecutive_failures_per_directory Description Number of consecutive snapshot synchronization failures to mark a directory as "failed". Failed directories are retried for synchronization less frequently. Type Integer Default 10 Min 0 cephfs_mirror_retry_failed_directories_interval Description Interval in seconds to retry synchronization for failed directories. Type Integer Default 60 Min 1 cephfs_mirror_restart_mirror_on_failure_interval Description Interval in seconds to restart failed mirror instances. Setting to zero ( 0 ) disables restarting failed mirror instances. Type secs Default 20 Min 0 cephfs_mirror_mount_timeout Description Timeout in seconds for mounting primary or secondary CephFS by the cephfs-mirror daemon. Setting this to a higher value could result in the mirror daemon getting stalled when mounting a file system if the cluster is not reachable. This option is used to override the usual client_mount_timeout . Type secs Default 10 Min 0 | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/file_system_guide/ceph-file-system-mirrors-configuration-reference_fs |
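On deployments that use the centralized configuration database, these options can be adjusted at runtime with the ceph config command instead of editing a local configuration file. The sketch below applies the settings to the generic client section, which is an assumption — if your cephfs-mirror daemons run under a dedicated entity such as client.cephfs-mirror.<host>, target that name instead:
# ceph config set client cephfs_mirror_max_snapshot_sync_per_cycle 5
# ceph config set client cephfs_mirror_retry_failed_directories_interval 120
# ceph config dump | grep cephfs_mirror
Stay within the documented minimum values listed above when lowering any of these settings.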
Chapter 8. Runtime verification of the real-time kernel | Chapter 8. Runtime verification of the real-time kernel Runtime verification is a lightweight and rigorous method to check the behavioral equivalence between system events and their formal specifications. Runtime verification has monitors integrated in the kernel that attach to tracepoints . If a system state deviates from defined specifications, the runtime verification program activates reactors to inform or enable a reaction, such as capturing the event in log files or a system shutdown to prevent failure propagation in an extreme case. 8.1. Runtime monitors and reactors The runtime verification (RV) monitors are encapsulated inside the RV monitor abstraction and coordinate between the defined specifications and the kernel trace to capture runtime events in trace files. The RV monitor includes: Reference Model is a reference model of the system. Monitor Instance(s) is a set of instance for a monitor, such as a per-CPU monitor or a per-task monitor. Helper functions that connect the monitor to the system. In addition to verifying and monitoring a system at runtime, you can enable a response to an unexpected system event. The forms of reaction can vary from capturing an event in the trace file to initiating an extreme reaction, such as a shut-down to avoid a system failure on safety critical systems. Reactors are reaction methods available for RV monitors to define reactions to system events as required. By default, monitors provide a trace output of the actions. 8.2. Online runtime monitors Runtime verification (RV) monitors are classified into following types: Online monitors capture events in the trace while the system is running. Online monitors are synchronous if the event processing is attached to the system execution. This will block the system during the event monitoring. Online monitors are asynchronous, if the execution is detached from the system and is run on a different machine. This however requires saved execution log files. Offline monitors process traces that are generated after the events have occurred. Offline runtime verification capture information by reading the saved trace log files generally from a permanent storage. Offline monitors can work only if you have the events saved in a file. 8.3. The user interface The user interface is located at /sys/kernel/tracing/rv and resembles the tracing interface. The user interface includes the mentioned files and folders. Settings Description Example commands available_monitors Displays the available monitors one per line. # cat available_monitors available_reactors Display the available reactors one per line. # cat available_reactors enabled_monitors Displays enabled monitors one per line. You can enable more than one monitor at the same time. Writing a monitor name with a '!' prefix disables the monitor and truncating the file disables all enabled monitors. # cat enabled_monitors # echo wip > enabled_monitors # echo '!wip'>> enabled_monitors monitors/ The monitors/ directory resembles the events directory on the tracefs file system with each monitor having its own directory inside monitors/ . # cd monitors/wip/ monitors/MONITOR/reactors Lists available reactors with the select reaction for a specific MONITOR inside "[]". The default is the no operation ( nop ) reactor. Writing the name of a reactor integrates it to a specific MONITOR. # cat monitors/wip/reactors monitoring_on Initiates the tracing_on and the tracing_off switcher in the trace interface. 
Writing 0 stops the monitoring and 1 continues the monitoring. The switcher does not disable enabled monitors but stops the per-entity monitors from monitoring the events. reacting_on Enables reactors. Writing 0 disables reactions and 1 enables reactions. monitors/MONITOR/desc Displays the Monitor description monitors/MONITOR/enable Displays the current status of the Monitor. Writing 0 disables the Monitor and 1 enables the Monitor. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/understanding_rhel_for_real_time/runtime-verification-of-the-real-time-kernel_understanding-rhel-for-real-time-core-concepts |
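Putting the interface together, a typical session that turns on a monitor with a non-default reactor looks like the sketch below. The wip monitor and the printk reactor are only examples — check available_monitors and available_reactors first, because the names compiled into your kernel can differ:
# cd /sys/kernel/tracing/rv
# cat available_monitors
# cat available_reactors
# echo wip > enabled_monitors
# echo printk > monitors/wip/reactors
# echo 1 > reacting_on
# echo 1 > monitoring_on
Writing '!wip' to enabled_monitors, or truncating the file, disables the monitor again as described in the table above.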
Chapter 3. Getting started | Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites You must complete the installation procedure for your environment. You must have an AMQP 1.0 message broker listening for connections on interface localhost and port 5672. It must have anonymous access enabled. For more information, see Starting the broker. You must have a queue named examples. For more information, see Creating a queue. 3.2. Running Hello World on Red Hat Enterprise Linux The Hello World example creates a connection to the broker, sends a message containing a greeting to the examples queue, and receives it back. On success, it prints the received message to the console. Procedure Copy the examples to a location of your choosing. $ cp -r /usr/share/proton/examples/cpp cpp-examples Create a build directory and change to that directory: $ mkdir cpp-examples/bld $ cd cpp-examples/bld Use cmake to configure the build and use make to compile the examples. $ cmake .. $ make Run the helloworld program. $ ./helloworld Hello World! | [
"cp -r /usr/share/proton/examples/cpp cpp-examples",
"mkdir cpp-examples/bld cd cpp-examples/bld",
"cmake .. make",
"./helloworld Hello World!"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_cpp_client/getting_started |
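If the helloworld program exits with a connection error instead of printing the greeting, the prerequisite broker is usually not listening on localhost:5672. A quick check before rerunning the example:
$ ss -lnt | grep 5672
$ nc -zv localhost 5672
If neither command shows a listener on port 5672, start the broker and confirm that anonymous access is enabled, as described in the prerequisites.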
Part III. Managing assets in Business Central | Part III. Managing assets in Business Central As a process administrator, you can use Business Central in Red Hat Decision Manager to manage assets, such as rules, business processes, and decision tables. Prerequisites Red Hat JBoss Enterprise Application Platform 7.4 is installed. For details, see Red Hat JBoss Enterprise Application Platform 7.4 Installation Guide . Red Hat Process Automation Manager is installed and configured with KIE Server. For more information see Installing and configuring Red Hat Decision Manager on Red Hat JBoss EAP 7.4 . Red Hat Decision Manager is running and you can log in to Business Central with the developer role. For more information, see Planning a Red Hat Decision Manager installation . | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/assembly-managing-assets |
Chapter 6. Known and Resolved Issues | Chapter 6. Known and Resolved Issues 6.1. Known Issues BZ-1200822 - JSR-107 Support for clustered caches in HotRod implementation When creating a new cache (which is not defined in server configuration file) in HotRod implementation of JSR-107, the cache is created as local only in one of the servers. This behavior requires class org.jboss.as.controller.client.ModelControllerClient to be present on the classpath. As a workaround use a clustered cache defined in the server configuration file. This still requires cacheManager.createCache(cacheName, configuration) to be invoked before accessing the cache for the first time. BZ-1204813 - JSR-107 Support for cacheResolverFactory annotation property JCache annotations provides a way to define a custom CacheResolverFactory , used to produce CacheResolver ; this class's purpose is to decide which cache is used for storing results of annotated methods; however, the support for specifying a CacheResolver is not provided yet. As a workaround, define a CDI ManagedCacheResolver which will be used instead. BZ-1223290 - JPA Cache Store not working properly on Weblogic A JPA Cache Store deployed to WebLogic servers throws a NullPointerException after the following error message: This is a known issue in Red Hat JBoss Data Grid 6.6.0, and no workaround exists at this time. BZ-1158839 - Clustered cache with FileStore (shared=false) is inconsistent after restarting one node if entries are deleted during restart In Red Hat JBoss Data Grid, when a node restarts, it does not automatically purge entries from its local cache store. As a result, the Administrator starting the node must change the node configuration manually to set the cache store to be purged when the node is starting. If the configuration is not changed, the cache may be inconsistent (removed entries can appear to be present). This is a known issue in Red Hat JBoss Data Grid 6.6.0, and no workaround exists at this time. BZ-1114080 - HR client SASL MD5 against LDAP fails In Red Hat JBoss Data Grid, the server does not support pass-through MD5 authentication against LDAP. As a result, the Hot Rod client is unable to authenticate to the JBoss Data Grid server via MD5 is the authentication is backed by the LDAP server. This is a known issue in Red Hat JBoss Data Grid 6.6.0 and a workaround is to use the PLAIN authentication over end-to-end SSL encryption. BZ-1024373 - Default optimistic locking configuration leads to inconsistency In Red Hat JBoss Data Grid, transactional caches are configured with optimistic locking by default. Concurrent replace() calls can return true under contention and transactions might unexpectedly commit. Two concurrent commands, replace(key, A, B) and replace(key, A, C) may both overwrite the entry. The command which is finalized later wins, overwriting an unexpected value with new value. This is a known issue in Red Hat JBoss Data Grid 6.6.0. As a workaround, enable write skew check and the REPEATABLE_READ isolation level. This results in concurrent replace operations working as expected. BZ-1293575 - Rolling upgrade fails with keySet larger than 2 GB Rolling upgrades fail if the key set is larger than 2 GB of memory. The process fails when calling recordKnownGlobalKeyset because the keys cannot be dumped into a single byte array in the source cluster. This is a known issue in Red Hat JBoss Data Grid 6.6.0, and no workaround exists at this time. 
BZ-1300133 - JMX attribute evictions is always zero in Statistics and ClusterCacheStats MBeans The evictions attribute of Statistics and ClusterCacheStats components of the Cache MBean return zero even though some eviction operations have been successfully performed. This issue only affects statistics, not the actual eviction process. This is a known issue in Red Hat JBoss Data Grid 6.6.0, and no workaround exists at this time. BZ-1273411 - Cannot access cache with authorization enabled when using REST protocol When authorization is configured for a cache, then any access to the cache via REST endpoint results in a security exception. A user is not able to access the cache since the security Subject representing the user is not properly defined, and the user cannot be authorized to access the cache. This is a known issue in Red Hat JBoss Data Grid 6.6.0, and no workaround exists at this time. Report a bug | [
"Entity manager factory name (org.infinispan.persistence.jpa) is already registered"
]
| https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.0_release_notes/chap-known_and_resolved_issues |
11.3.2. Postfix | 11.3.2. Postfix Originally developed at IBM by security expert and programmer Wietse Venema, Postfix is a Sendmail-compatible MTA that is designed to be secure, fast, and easy to configure. To improve security, Postfix uses a modular design, where small processes with limited privileges are launched by a master daemon. The smaller, less privileged processes perform very specific tasks related to the various stages of mail delivery and run in a change rooted environment to limit the effects of attacks. Configuring Postfix to accept network connections from hosts other than the local computer takes only a few minor changes in its configuration file. Yet for those with more complex needs, Postfix provides a variety of configuration options, as well as third party add ons that make it a very versatile and full-featured MTA. The configuration files for Postfix are human readable and support upward of 250 directives. Unlike Sendmail, no macro processing is required for changes to take effect and the majority of the most commonly used options are described in the heavily commented files. Important Before using Postfix, the default MTA must be switched from Sendmail to Postfix. Refer to the chapter called Mail Transport Agent (MTA) Configuration in the System Administrators Guide for further details. 11.3.2.1. The Default Postfix Installation The Postfix executable is /usr/sbin/postfix . This daemon launches all related processes needed to handle mail delivery. Postfix stores its configuration files in the /etc/postfix/ directory. The following is a list of the more commonly used files: access - Used for access control, this file specifies which hosts are allowed to connect to Postfix. aliases - A configurable list required by the mail protocol. main.cf - The global Postfix configuration file. The majority of configuration options are specified in this file. master.cf - Specifies how Postfix interacts with various processes to accomplish mail delivery. transport - Maps email addresses to relay hosts. Important The default /etc/postfix/main.cf file does not allow Postfix to accept network connections from a host other than the local computer. For instructions on configuring Postfix as a server for other clients, refer to Section 11.3.2.2, "Basic Postfix Configuration" . When changing some options within files in the /etc/postfix/ directory, it may be necessary to restart the postfix service for the changes to take effect. The easiest way to do this is to type the following command: | [
"service postfix restart"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-email-mta-postfix |
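As a concrete and deliberately minimal sketch of the configuration work described above, the following hypothetical session switches the default MTA to Postfix, opens it to network connections, and restarts the service. The network values are examples only; follow the referenced guides for the supported procedure:
# alternatives --set mta /usr/sbin/sendmail.postfix
# postconf -e 'inet_interfaces = all'
# postconf -e 'mynetworks = 192.168.0.0/24, 127.0.0.0/8'
# service postfix restart
The postconf -e commands edit /etc/postfix/main.cf in place, which avoids hand-editing the file.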
5.7. Managing Nodes with Fence Devices | 5.7. Managing Nodes with Fence Devices You can fence a node manually with the following command. If you specify --off , this uses the off API call to stonith , which turns the node off instead of rebooting it. In a situation where no stonith device is able to fence a node even though it is no longer active, the cluster may not be able to recover the resources on that node. If this occurs, after manually ensuring that the node is powered down, you can enter the following command to confirm to the cluster that the node is powered down and to free its resources for recovery. Warning If the node you specify is not actually off, but is running the cluster software or services normally controlled by the cluster, data corruption or cluster failure will occur. | [
"pcs stonith fence node [--off]",
"pcs stonith confirm node"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-fencedevicemanage-haar |
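For example, with a hypothetical cluster node named node2.example.com, the two commands documented above would be used as follows: first to power the node off rather than reboot it, and then, only after manually verifying that the node really is powered down, to acknowledge the fencing so the cluster can recover its resources:
# pcs stonith fence node2.example.com --off
# pcs stonith confirm node2.example.com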
Chapter 1. Ansible Automation Platform Central Authentication for automation hub | Chapter 1. Ansible Automation Platform Central Authentication for automation hub To enable Ansible Automation Platform Central Authentication for your automation hub, start by downloading the Red Hat Ansible Automation Platform installer then proceed with the necessary set up procedures as detailed in this guide. Important The installer in this guide will install central authentication for a basic standalone deployment. Standalone mode only runs one central authentication server instance, and thus will not be usable for clustered deployments. Standalone mode can be useful to test drive and play with the features of central authentication, but it is not recommended that you use standalone mode in production as you will only have a single point of failure. To install central authentication in a different deployment mode, please see this guide for more deployment options. 1.1. System Requirements There are several minimum requirements to install and run Ansible Automation Platform Central Authentication: A supported RHEL8 based server that runs Java Java 8 JDK zip or gzip and tar At least 512mb of RAM At least 1gb of disk space A shared external database like PostgreSQL, MySQL, Oracle, etc. if you want to run central authentication in a cluster. See the Database Configuration section of the Red Hat Single Sign-On Server Installation and Configuration guide for more information. Network multicast support on your machine if you want to run in a cluster. central authentication can be clustered without multicast, but this requires some configuration changes. See the Clustering section of the Red Hat Single Sign-On Server Installation and Configuration guide for more information. On Linux, it is recommended to use /dev/urandom as a source of random data to prevent central authentication hanging due to lack of available entropy, unless /dev/random usage is mandated by your security policy. To achieve that on Oracle JDK 8 and OpenJDK 8, set the java.security.egd system property on startup to file:/dev/urandom . 1.2. Installing Ansible Automation Platform Central Authentication for use with automation hub The Ansible Automation Platform Central Authentication installation will be included with your Red Hat Ansible Automation Platform installer. Install the Ansible Automation Platform using the following procedures, then configure the necessary parameters in your inventory file to successfully install both the Ansible Automation Platform and central authentication. 1.2.1. Choosing and obtaining a Red Hat Ansible Automation Platform installer Choose the Red Hat Ansible Automation Platform installer you need based on your Red Hat Enterprise Linux environment internet connectivity. Review the following scenarios and decide on which Red Hat Ansible Automation Platform installer meets your needs. Note A valid Red Hat customer account is required to access Red Hat Ansible Automation Platform installer downloads on the Red Hat Customer Portal. Installing with internet access Choose the Red Hat Ansible Automation Platform installer if your Red Hat Enterprise Linux environment is connected to the internet. Installing with internet access retrieves the latest required repositories, packages, and dependencies. Choose one of the following ways to set up your Ansible Automation Platform installer. Tarball install Navigate to the Red Hat Ansible Automation Platform download page. 
Click Download Now for the Ansible Automation Platform <latest-version> Setup . Extract the files: $ tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz RPM install Install Ansible Automation Platform Installer Package v.2.4 for RHEL 8 for x86_64 $ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-automation-platform-installer v.2.4 for RHEL 9 for x86-64 $ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-automation-platform-installer Note dnf install enables the repo as the repo is disabled by default. When you use the RPM installer, the files are placed under the /opt/ansible-automation-platform/installer directory. Installing without internet access Use the Red Hat Ansible Automation Platform Bundle installer if you are unable to access the internet, or would prefer not to install separate components and dependencies from online repositories. Access to Red Hat Enterprise Linux repositories is still needed. All other dependencies are included in the tar archive. Navigate to the Red Hat Ansible Automation Platform download page. Click Download Now for the Ansible Automation Platform <latest-version> Setup Bundle . Extract the files: $ tar xvzf ansible-automation-platform-setup-bundle-<latest-version>.tar.gz 1.2.2. Configuring the Red Hat Ansible Automation Platform installer Before running the installer, edit the inventory file found in the installer package to configure the installation of automation hub and Ansible Automation Platform Central Authentication. Note Provide a reachable IP address for the [automationhub] host to ensure users can sync content from Private Automation Hub from a different node and push new images to the container registry. Navigate to the installer directory: Online installer: $ cd ansible-automation-platform-setup-<latest-version> Bundled installer: $ cd ansible-automation-platform-setup-bundle-<latest-version> Open the inventory file using a text editor (see the illustrative inventory fragment below). Edit the inventory file parameters under [automationhub] to specify an installation of automation hub host: Add group host information under [automationhub] using an IP address or FQDN for the automation hub location. Enter passwords for automationhub_admin_password , automationhub_pg_password , and any additional parameters based on your installation specifications. Enter a password in the sso_keystore_password field. Edit the inventory file parameters under [SSO] to specify a host on which to install central authentication: Enter a password in the sso_console_admin_password field, and any additional parameters based on your installation specifications. 1.2.3. Running the Red Hat Ansible Automation Platform installer With the inventory file updated, run the installer using the setup.sh playbook found in the installer package. Run the setup.sh playbook: $ ./setup.sh 1.2.4. Log in as a central authentication admin user With Red Hat Ansible Automation Platform installed, log in as an admin user to the central authentication server using the admin credentials that you specified in your inventory file. Navigate to your Ansible Automation Platform Central Authentication instance. Log in using the admin credentials you specified in your inventory file, in the sso_console_admin_username and sso_console_admin_password fields .
With Ansible Automation Platform Central Authentication successfully installed, and the admin user logged in, you can proceed by adding a user storage provider (such as LDAP) using the following procedures. | [
"tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz",
"sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-automation-platform-installer",
"sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-automation-platform-installer",
"tar xvzf ansible-automation-platform-setup-bundle-<latest-version>.tar.gz",
"cd ansible-automation-platform-setup-<latest-version>",
"cd ansible-automation-platform-setup-bundle-<latest-version>",
"./setup.sh"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/installing_and_configuring_central_authentication_for_the_ansible_automation_platform/assembly-central-auth-hub |
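The illustrative inventory fragment below shows the general shape of the edits described in this section. The host names are placeholders, the group written as [SSO] in the text above is typically lowercase [sso] in shipped inventories, and a real inventory contains many additional parameters:
[automationhub]
hub.example.com

[sso]
sso.example.com

[all:vars]
automationhub_admin_password='<hub-admin-password>'
automationhub_pg_password='<hub-database-password>'
sso_keystore_password='<keystore-password>'
sso_console_admin_password='<sso-admin-password>'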
1.5. Configuring Red Hat High Availability Add-On Software | 1.5. Configuring Red Hat High Availability Add-On Software Configuring Red Hat High Availability Add-On software consists of using configuration tools to specify the relationship among the cluster components. The following cluster configuration tools are available with Red Hat High Availability Add-On: Conga - This is a comprehensive user interface for installing, configuring, and managing Red Hat High Availability Add-On. Refer to Chapter 4, Configuring Red Hat High Availability Add-On With Conga and Chapter 5, Managing Red Hat High Availability Add-On With Conga for information about configuring and managing High Availability Add-On with Conga . The ccs command - This command configures and manages Red Hat High Availability Add-On. Refer to Chapter 6, Configuring Red Hat High Availability Add-On With the ccs Command and Chapter 7, Managing Red Hat High Availability Add-On With ccs for information about configuring and managing High Availability Add-On with the ccs command. Command-line tools - This is a set of command-line tools for configuring and managing Red Hat High Availability Add-On. Refer to Chapter 8, Configuring Red Hat High Availability Manually and Chapter 9, Managing Red Hat High Availability Add-On With Command Line Tools for information about configuring and managing a cluster with command-line tools. Refer to Appendix E, Command Line Tools Summary for a summary of preferred command-line tools. Note system-config-cluster is not available in Red Hat Enterprise Linux 6. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-cluster-CA |
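As a brief illustration of the ccs command-line approach listed above (host and cluster names are hypothetical; see the referenced chapters for the full procedure), creating a skeleton cluster configuration from one node looks like this:
# ccs -h node1.example.com --createcluster mycluster
# ccs -h node1.example.com --addnode node1.example.com
# ccs -h node1.example.com --addnode node2.example.com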
Chapter 12. Networking | Chapter 12. Networking NetworkManager-openswan now supports libreswan In Red Hat Enterprise Linux 6.8, the openswan IPsec implementation is considered obsolete and replaced by the libreswan implementation. The NetworkManager-openswan package now supports both openswan and libreswan in order to facilitate migration. (BZ#1267394) New package: chrony A new package, chrony , has been added to Red Hat Enterprise Linux 6. chrony is a versatile implementation of the Network Time Protocol (NTP), which can usually synchronize the system clock with a better accuracy than the ntpd daemon from the ntp package. It can be also used with the timemaster service from the linuxptp package to synchronize the clock to Precision Time Protocol (PTP) domains with sub-microsecond accuracy if hardware timestamping is available, and provide a fallback to other PTP domains or NTP sources. (BZ#1274811) New packages: ldns The ldns packages contain a library with the aim to simplify DNS programming in C. All low-level DNS/DNSSEC operations are supported. A higher level API has been defined which allows a programmer to, for instance, create or sign packets. (BZ#1284961) wpa_supplicant can now send logs into the syslog Previously, wpa_supplicant could only save log messages into the /var/log/wpa_supplicant.log file. This update adds the capability to save log messages into the system log, allowing you to use additional features provided by syslog such as remote logging. To activate this feature, add the new -s option into OTHER_ARGS in the /etc/sysconfig/wpa_supplicant configuration file. (BZ#822128) Enhancements in system-config-network The Network Configuration tool (the system-config-network package) has received multiple user interface improvements in this release. Notable enhancements include additional fields for the PEERDNS and ONBOOT settings and an added Delete button in the list of interfaces. (BZ#1214729) New packages: unbound Unbound is a validating, recursive, and caching DNS resolver. It is designed as a set of modular components that also support DNS Security Extensions (DNSSEC). (BZ#1284964) nm-connection-editor now allows a higher range of VLAN ids The VLAN id is no longer limited to the range 0-100 in nm-connection-editor . The new allowed range is between 0 and 4095. (BZ#1258218) NetworkManager supports locking Wi-Fi network connections to a specific radio frequency band NetworkManager now allows you to specify a certain frequency band such for a Wi-Fi connection. To lock a connection to a certain band, use the new BAND= option in the connection configuration file in the /etc/sysconfig/network-scripts/ directory. Values for this option are based on the IEEE 802.11 protocol specifications; to specify the 2.4 GHz band, use BAND=bg , and to specify the 5 GHz band, use BAND=a . (BZ#1254070) NetworkManager now supports iBFT A plug-in for iSCSI Boot Firmware Table (iBFT) configuration has been added to NetworkManager . This plug-in ensures that initial network configuration for hosts booting from iSCSI in a VLAN is correct. (BZ#1198325) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_release_notes/new_features_networking |
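Two of the items above are configuration-file changes that are easy to show in outline. The snippets below are illustrative fragments only (the interface name wlan0 is an example): the first sends wpa_supplicant log messages to syslog by adding -s to whatever OTHER_ARGS already contains, and the second locks a Wi-Fi profile to the 5 GHz band with BAND=a:
# excerpt from /etc/sysconfig/wpa_supplicant
OTHER_ARGS="<existing options> -s"
# excerpt from /etc/sysconfig/network-scripts/ifcfg-wlan0
BAND=a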
Deploying JBoss EAP on Amazon Web Services | Deploying JBoss EAP on Amazon Web Services Red Hat JBoss Enterprise Application Platform 8.0 For Use with Red Hat JBoss Enterprise Application Platform 8.0 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/deploying_jboss_eap_on_amazon_web_services/index |
6.6. Listing Fence Devices and Fence Device Options | 6.6. Listing Fence Devices and Fence Device Options You can use the ccs command to print a list of available fence devices and to print a list of options for each available fence type. You can also use the ccs command to print a list of fence devices currently configured for your cluster. To print a list of fence devices currently available for your cluster, execute the following command: For example, the following command lists the fence devices available on the cluster node node1 , showing sample output. To print a list of the options you can specify for a particular fence type, execute the following command: For example, the following command lists the fence options for the fence_wti fence agent. To print a list of fence devices currently configured for your cluster, execute the following command: | [
"ccs -h host --lsfenceopts",
"ccs -h node1 --lsfenceopts fence_rps10 - RPS10 Serial Switch fence_vixel - No description available fence_egenera - No description available fence_xcat - No description available fence_na - Node Assassin fence_apc - Fence agent for APC over telnet/ssh fence_apc_snmp - Fence agent for APC over SNMP fence_bladecenter - Fence agent for IBM BladeCenter fence_bladecenter_snmp - Fence agent for IBM BladeCenter over SNMP fence_cisco_mds - Fence agent for Cisco MDS fence_cisco_ucs - Fence agent for Cisco UCS fence_drac5 - Fence agent for Dell DRAC CMC/5 fence_eps - Fence agent for ePowerSwitch fence_ibmblade - Fence agent for IBM BladeCenter over SNMP fence_ifmib - Fence agent for IF MIB fence_ilo - Fence agent for HP iLO fence_ilo_mp - Fence agent for HP iLO MP fence_intelmodular - Fence agent for Intel Modular fence_ipmilan - Fence agent for IPMI over LAN fence_kdump - Fence agent for use with kdump fence_rhevm - Fence agent for RHEV-M REST API fence_rsa - Fence agent for IBM RSA fence_sanbox2 - Fence agent for QLogic SANBox2 FC switches fence_scsi - fence agent for SCSI-3 persistent reservations fence_virsh - Fence agent for virsh fence_virt - Fence agent for virtual machines fence_vmware - Fence agent for VMware fence_vmware_soap - Fence agent for VMware over SOAP API fence_wti - Fence agent for WTI fence_xvm - Fence agent for virtual machines",
"ccs -h host --lsfenceopts fence_type",
"ccs -h node1 --lsfenceopts fence_wti fence_wti - Fence agent for WTI Required Options: Optional Options: option: No description available action: Fencing Action ipaddr: IP Address or Hostname login: Login Name passwd: Login password or passphrase passwd_script: Script to retrieve password cmd_prompt: Force command prompt secure: SSH connection identity_file: Identity file for ssh port: Physical plug number or name of virtual machine inet4_only: Forces agent to use IPv4 addresses only inet6_only: Forces agent to use IPv6 addresses only ipport: TCP port to use for connection with device verbose: Verbose mode debug: Write debug information to given file version: Display version information and exit help: Display help and exit separator: Separator for CSV created by operation list power_timeout: Test X seconds for status change after ON/OFF shell_timeout: Wait X seconds for cmd prompt after issuing command login_timeout: Wait X seconds for cmd prompt after login power_wait: Wait X seconds after issuing ON/OFF delay: Wait X seconds before fencing is started retry_on: Count of attempts to retry power on",
"ccs -h host --lsfencedev"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-list-fence-devices-ccs-CA |
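In practice, the listing commands above are usually followed by adding one of the listed agents to the configuration and then confirming that it appears. The example below is hypothetical (device name, address, and credentials are placeholders) and uses the fence_wti options shown in the sample output:
# ccs -h node1 --addfencedev mywti agent=fence_wti ipaddr=wti.example.com login=root passwd=password_example
# ccs -h node1 --lsfencedev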
1.5. Network | 1.5. Network The Red Hat Virtualization network architecture facilitates connectivity between the different elements of the Red Hat Virtualization environment. The network architecture not only supports network connectivity, it also allows for network segregation. Figure 1.3. Network Architecture Networking is defined in Red Hat Virtualization in several layers. The underlying physical networking infrastructure must be in place and configured to allow connectivity between the hardware and the logical components of the Red Hat Virtualization environment. Networking Infrastructure Layer The Red Hat Virtualization network architecture relies on some common hardware and software devices: Network Interface Controllers (NICs) are physical network interface devices that connect a host to the network. Virtual NICs (vNICs) are logical NICs that operate using the host's physical NICs. They provide network connectivity to virtual machines. Bonds bind multiple NICs into a single interface. Bridges are a packet-forwarding technique for packet-switching networks. They form the basis of virtual machine logical networks. Logical Networks Logical networks allow segregation of network traffic based on environment requirements. The types of logical network are: logical networks that carry virtual machine network traffic, logical networks that do not carry virtual machine network traffic, optional logical networks, and required networks. All logical networks can either be required or optional. A logical network that carries virtual machine network traffic is implemented at the host level as a software bridge device. By default, one logical network is defined during the installation of the Red Hat Virtualization Manager: the ovirtmgmt management network. Other logical networks that can be added by an administrator are: a dedicated storage logical network, and a dedicated display logical network. Logical networks that do not carry virtual machine traffic do not have an associated bridge device on hosts. They are associated with host network interfaces directly. Red Hat Virtualization segregates management-related network traffic from migration-related network traffic. This makes it possible to use a dedicated network (without routing) for live migration, and ensures that the management network (ovirtmgmt) does not lose its connection to hypervisors during migrations. Explanation of logical networks on different layers Logical networks have different implications for each layer of the virtualization environment. Data Center Layer Logical networks are defined at the data center level. Each data center has the ovirtmgmt management network by default. Further logical networks are optional but recommended. Designation as a VM Network and a custom MTU can be set at the data center level. A logical network that is defined for a data center must also be added to the clusters that use the logical network. Cluster Layer Logical networks are made available from a data center, and must be added to the clusters that will use them. Each cluster is connected to the management network by default. You can optionally add to a cluster logical networks that have been defined for the cluster's parent data center. When a required logical network has been added to a cluster, it must be implemented for each host in the cluster. Optional logical networks can be added to hosts as needed. 
Host Layer Virtual machine logical networks are implemented for each host in a cluster as a software bridge device associated with a given network interface. Non-virtual machine logical networks do not have associated bridges, and are associated with host network interfaces directly. Each host has the management network implemented as a bridge using one of its network devices as a result of being included in a Red Hat Virtualization environment. Further required logical networks that have been added to a cluster must be associated with network interfaces on each host to become operational for the cluster. Virtual Machine Layer Logical networks can be made available to virtual machines in the same way that a network can be made available to a physical machine. A virtual machine can have its virtual NIC connected to any virtual machine logical network that has been implemented on the host that runs it. The virtual machine then gains connectivity to any other devices or destinations that are available on the logical network it is connected to. Example 1.1. Management Network The management logical network, named ovirtmgmt , is created automatically when the Red Hat Virtualization Manager is installed. The ovirtmgmt network is dedicated to management traffic between the Red Hat Virtualization Manager and hosts. If no other specifically purposed bridges are set up, ovirtmgmt is the default bridge for all traffic. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/network |
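Since a virtual machine logical network such as ovirtmgmt is implemented on each host as a software bridge, the mapping described above can be inspected directly on a hypervisor with ordinary read-only commands (output varies by environment; these are generic Linux tools, not RHV-specific):
# ip link show type bridge
# bridge link show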
Chapter 3. Configuring messaging protocols in network connections | Chapter 3. Configuring messaging protocols in network connections AMQ Broker has a pluggable protocol architecture, so that you can easily enable one or more protocols for a network connection. The broker supports the following protocols: AMQP MQTT OpenWire STOMP Note In addition to the protocols above, the broker also supports its own native protocol known as "Core". Past versions of this protocol were known as "HornetQ" and used by Red Hat JBoss Enterprise Application Platform. 3.1. Configuring a network connection to use a messaging protocol You must associate a protocol with a network connection before you can use it. (See Chapter 2, Configuring acceptors and connectors in network connections for more information about how to create and configure network connections.) The default configuration, located in the file <broker_instance_dir> /etc/broker.xml , includes several connections already defined. For convenience, AMQ Broker includes an acceptor for each supported protocol, plus a default acceptor that supports all protocols. Overview of default acceptors Shown below are the acceptors included by default in the broker.xml configuration file. <configuration> <core> ... <acceptors> <!-- All-protocols acceptor --> <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic --> <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- STOMP Acceptor --> <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. --> <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor> <!-- MQTT Acceptor --> <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor> </acceptors> ... </core> </configuration> The only requirement to enable a protocol on a given network connnection is to add the protocols parameter to the URI for the acceptor. The value of the parameter must be a comma separated list of protocol names. If the protocol parameter is omitted from the URI, all protocols are enabled. For example, to create an acceptor for receiving messages on port 3232 using the AMQP protocol, follow these steps: Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the following line to the <acceptors> stanza: <acceptor name="ampq">tcp://0.0.0.0:3232?protocols=AMQP</acceptor> Additional parameters in default acceptors In a minimal acceptor configuration, you specify a protocol as part of the connection URI. However, the default acceptors in the broker.xml configuration file have some additional parameters configured. The following table details the additional parameters configured for the default acceptors. Acceptor(s) Parameter Description All-protocols acceptor AMQP STOMP tcpSendBufferSize Size of the TCP send buffer in bytes. The default value is 32768 . tcpReceiveBufferSize Size of the TCP receive buffer in bytes. The default value is 32768 . 
TCP buffer sizes should be tuned according to the bandwidth and latency of your network. In summary TCP send/receive buffer sizes should be calculated as: buffer_size = bandwidth * RTT. Where bandwidth is in bytes per second and network round trip time (RTT) is in seconds. RTT can be easily measured using the ping utility. For fast networks you may want to increase the buffer sizes from the defaults. All-protocols acceptor AMQP STOMP HornetQ MQTT useEpoll Use Netty epoll if using a system (Linux) that supports it. The Netty native transport offers better performance than the NIO transport. The default value of this option is true . If you set the option to false , NIO is used. All-protocols acceptor AMQP amqpCredits Maximum number of messages that an AMQP producer can send, regardless of the total message size. The default value is 1000 . To learn more about how credits are used to block AMQP messages, see Section 7.3.2, "Blocking AMQP producers" . All-protocols acceptor AMQP amqpLowCredits Lower threshold at which the broker replenishes producer credits. The default value is 300 . When the producer reaches this threshold, the broker sends the producer sufficient credits to restore the amqpCredits value. To learn more about how credits are used to block AMQP messages, see Section 7.3.2, "Blocking AMQP producers" . HornetQ compatibility acceptor anycastPrefix Prefix that clients use to specify the anycast routing type when connecting to an address that uses both anycast and multicast . The default value is jms.queue . For more information about configuring a prefix to enable clients to specify a routing type when connecting to an address, see Section 4.6, "Adding a routing type to an acceptor configuration" . multicastPrefix Prefix that clients use to specify the multicast routing type when connecting to an address that uses both anycast and multicast . The default value is jms.topic . For more information about configuring a prefix to enable clients to specify a routing type when connecting to an address, see Section 4.6, "Adding a routing type to an acceptor configuration" . Additional resources For information about other parameters that you can configure for Netty network connections, see Appendix A, Acceptor and Connector Configuration Parameters . 3.2. Using AMQP with a network connection The broker supports the AMQP 1.0 specification. An AMQP link is a uni-directional protocol for messages between a source and a target, that is, a client and the broker. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add or configure an acceptor to receive AMQP clients by including the protocols parameter with a value of AMQP as part of the URI, as shown in the following example: <acceptors> <acceptor name="amqp-acceptor">tcp://localhost:5672?protocols=AMQP</acceptor> ... </acceptors> In the preceding example, the broker accepts AMQP 1.0 clients on port 5672, which is the default AMQP port. An AMQP link has two endpoints, a sender and a receiver. When senders transmit a message, the broker converts it into an internal format, so it can be forwarded to its destination on the broker. Receivers connect to the destination at the broker and convert the messages back into AMQP before they are delivered. If an AMQP link is dynamic, a temporary queue is created and either the remote source or the remote target address is set to the name of the temporary queue. If the link is not dynamic, the address of the remote target or source is used for the queue. 
If the remote target or source does not exist, an exception is sent. A link target can also be a Coordinator, which is used to handle the underlying session as a transaction, either rolling it back or committing it. Note AMQP allows the use of multiple transactions per session, amqp:multi-txns-per-ssn , however the current version of AMQ Broker will support only single transactions per session. Note The details of distributed transactions (XA) within AMQP are not provided in the 1.0 version of the specification. If your environment requires support for distributed transactions, it is recommended that you use the AMQ Core Protocol JMS. See the AMQP 1.0 specification for more information about the protocol and its features. 3.2.1. Using an AMQP Link as a Topic Unlike JMS, the AMQP protocol does not include topics. However, it is still possible to treat AMQP consumers or receivers as subscriptions rather than just consumers on a queue. By default, any receiving link that attaches to an address with the prefix jms.topic. is treated as a subscription, and a subscription queue is created. The subscription queue is made durable or volatile, depending on how the Terminus Durability is configured, as captured in the following table: To create this kind of subscription for a multicast-only queue... Set Terminus Durability to this... Durable UNSETTLED_STATE or CONFIGURATION Non-durable NONE Note The name of a durable queue is composed of the container ID and the link name, for example my-container-id:my-link-name . AMQ Broker also supports the qpid-jms client and will respect its use of topics regardless of the prefix used for the address. 3.2.2. Configuring AMQP security The broker supports AMQP SASL Authentication. See Security for more information about how to configure SASL-based authentication on the broker. 3.3. Using MQTT with a network connection The broker supports MQTT v3.1.1 and v5.0 (and also the older v3.1 code message format). MQTT is a lightweight, client to server, publish/subscribe messaging protocol. MQTT reduces messaging overhead and network traffic, as well as a client's code footprint. For these reasons, MQTT is ideally suited to constrained devices such as sensors and actuators and is quickly becoming the de facto standard communication protocol for Internet of Things(IoT). Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add an acceptor with the MQTT protocol enabled. For example: <acceptors> <acceptor name="mqtt">tcp://localhost:1883?protocols=MQTT</acceptor> ... </acceptors> MQTT comes with a number of useful features including: Quality of Service Each message can define a quality of service that is associated with it. The broker will attempt to deliver messages to subscribers at the highest quality of service level defined. Retained Messages Messages can be retained for a particular address. New subscribers to that address receive the last-sent retained message before any other messages, even if the retained message was sent before the client connected. Wild card subscriptions MQTT addresses are hierarchical, similar to the hierarchy of a file system. Clients are able to subscribe to specific topics or to whole branches of a hierarchy. Will Messages Clients are able to set a "will message" as part of their connect packet. If the client abnormally disconnects, the broker will publish the will message to the specified address. Other subscribers receive the will message and can react accordingly. 
For more information about the MQTT protocol, see the specification. MQTT 3.11 specification MQTT 5.0 specification 3.3.1. Configuring MQTT properties You can append key-value pairs to the MQTT acceptor to configure connection properties. For example: <acceptors> <acceptor name="mqtt">tcp://localhost:1883?protocols=MQTT;receiveMaximum=50000;topicAliasMaximum=50000;maximumPacketSize;134217728; serverKeepAlive=30;closeMqttConnectionOnPublishAuthorizationFailure=false</acceptor> ... </acceptors> receiveMaximum Enables flow-control by specifying the maximum number of QoS 1 and 2 messages that the broker can receive from a client before an acknowledgment is required. The default value is 65535 . A value of -1 disables flow-control from clients to the broker. This has the same effect as setting the value to 0 but reduces the size of the CONNACK packet. topicAliasMaximum Specifies for clients the maximum number of aliases that the broker supports. The default value is 65535 . A value of -1 prevents the broker from informing the client of a topic alias limit. This has the same effect as setting the value to 0, but reduces the size of the CONNACK packet. maximumPacketSize Specifies the maximum packet size that the broker can accept from clients. The default value is 268435455 . A value of -1 prevents the broker from informing the client of a maximum packet size, which means that no limit is enforced on the size of incoming packets. serverKeepAlive Specifies the duration the broker keeps an inactive client connection open. The configured value is applied to the connection only if it is less than the keep-alive value configured for the client or if the value configured for the client is 0. The default value is 60 seconds. A value of -1 means that the broker always accepts the client's keep alive value (even if that value is 0). closeMqttConnectionOnPublishAuthorizationFailure By default, if a PUBLISH packet fails due to a lack of authorization, the broker closes the network connection. If you want the broker to sent a positive acknowledgment instead of closing the network connection, set closeMqttConnectionOnPublishAuthorizationFailure to false . 3.4. Using OpenWire with a network connection The broker supports the OpenWire protocol , which allows a JMS client to talk directly to a broker. Use this protocol to communicate with older versions of AMQ Broker. Currently AMQ Broker supports OpenWire clients that use standard JMS APIs only. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add or modify an acceptor so that it includes OPENWIRE as part of the protocol parameter, as shown in the following example: <acceptors> <acceptor name="openwire-acceptor">tcp://localhost:61616?protocols=OPENWIRE</acceptor> ... </acceptors> In the preceding example, the broker will listen on port 61616 for incoming OpenWire commands. For more details, see the examples located under <install_dir> /examples/protocols/openwire . 3.5. Using STOMP with a network connection STOMP is a text-orientated wire protocol that allows STOMP clients to communicate with STOMP Brokers. The broker supports STOMP 1.0, 1.1 and 1.2. STOMP clients are available for several languages and platforms making it a good choice for interoperability. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Configure an existing acceptor or create a new one and include a protocols parameter with a value of STOMP , as below. 
<acceptors> <acceptor name="stomp-acceptor">tcp://localhost:61613?protocols=STOMP</acceptor> ... </acceptors> In the preceding example, the broker accepts STOMP connections on the port 61613 , which is the default. See the stomp example located under <install_dir> /examples/protocols for an example of how to configure a broker with STOMP. 3.5.1. STOMP limitations When using STOMP, the following limitations apply: The broker currently does not support virtual hosting, which means the host header in CONNECT frames are ignored. Message acknowledgments are not transactional. The ACK frame cannot be part of a transaction, and it is ignored if its transaction header is set). 3.5.2. Providing IDs for STOMP Messages When receiving STOMP messages through a JMS consumer or a QueueBrowser, the messages do not contain any JMS properties, for example JMSMessageID , by default. However, you can set a message ID on each incoming STOMP message by using a broker paramater. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Set the stompEnableMessageId parameter to true for the acceptor used for STOMP connections, as shown in the following example: <acceptors> <acceptor name="stomp-acceptor">tcp://localhost:61613?protocols=STOMP;stompEnableMessageId=true</acceptor> ... </acceptors> By using the stompEnableMessageId parameter, each stomp message sent using this acceptor has an extra property added. The property key is amq-message-id and the value is a String representation of an internal message id prefixed with "STOMP", as shown in the following example: If stompEnableMessageId is not specified in the configuration, the default value is false . 3.5.3. Setting a connection time to live STOMP clients must send a DISCONNECT frame before closing their connections. This allows the broker to close any server-side resources, such as sessions and consumers. However, if STOMP clients exit without sending a DISCONNECT frame, or if they fail, the broker will have no way of knowing immediately whether the client is still alive. STOMP connections therefore are configured to have a "Time to Live" (TTL) of 1 minute. The means that the broker stops the connection to the STOMP client if it has been idle for more than one minute. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the connectionTTL parameter to URI of the acceptor used for STOMP connections, as shown in the following example: <acceptors> <acceptor name="stomp-acceptor">tcp://localhost:61613?protocols=STOMP;connectionTTL=20000</acceptor> ... </acceptors> In the preceding example, any stomp connection that using the stomp-acceptor will have its TTL set to 20 seconds. Note Version 1.0 of the STOMP protocol does not contain any heartbeat frame. It is therefore the user's responsibility to make sure data is sent within connection-ttl or the broker will assume the client is dead and clean up server-side resources. With version 1.1, you can use heart-beats to maintain the life cycle of stomp connections. Overriding the broker default time to live As noted, the default TTL for a STOMP connection is one minute. You can override this value by adding the connection-ttl-override attribute to the broker configuration. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the connection-ttl-override attribute and provide a value in milliseconds for the new default. It belongs inside the <core> stanza, as below. <configuration ...> ... <core ...> ... 
<connection-ttl-override>30000</connection-ttl-override> ... </core> <configuration> In the preceding example, the default Time to Live (TTL) for a STOMP connection is set to 30 seconds, 30000 milliseconds. 3.5.4. Sending and consuming STOMP messages from JMS STOMP is mainly a text-orientated protocol. To make it simpler to interoperate with JMS, the STOMP implementation checks for presence of the content-length header to decide how to map a STOMP message to JMS. If you want a STOMP message to map to a ... The message should... . JMS TextMessage Not include a content-length header. JMS BytesMessage Include a content-length header. The same logic applies when mapping a JMS message to STOMP. A STOMP client can confirm the presence of the content-length header to determine the type of the message body (string or bytes). See the STOMP specification for more information about message headers. 3.5.5. Mapping STOMP destinations to AMQ Broker addresses and queues When sending messages and subscribing, STOMP clients typically include a destination header. Destination names are string values, which are mapped to a destination on the broker. In AMQ Broker, these destinations are mapped to addresses and queues . See the STOMP specification for more information about the destination frame. Take for example a STOMP client that sends the following message (headers and body included): In this case, the broker will forward the message to any queues associated with the address /my/stomp/queue . For example, when a STOMP client sends a message (by using a SEND frame), the specified destination is mapped to an address. It works the same way when the client sends a SUBSCRIBE or UNSUBSCRIBE frame, but in this case AMQ Broker maps the destination to a queue. In the preceding example, the broker will map the destination to the queue /other/stomp/queue . Mapping STOMP destinations to JMS destinations JMS destinations are also mapped to broker addresses and queues. If you want to use STOMP to send messages to JMS destinations, the STOMP destinations must follow the same convention: Send or subscribe to a JMS Queue by prepending the queue name by jms.queue. . For example, to send a message to the orders JMS Queue, the STOMP client must send the frame: Send or subscribe to a JMS Topic by prepending the topic name by jms.topic. . For example, to subscribe to the stocks JMS Topic, the STOMP client must send a frame similar to the following: | [
"<configuration> <core> <acceptors> <!-- All-protocols acceptor --> <acceptor name=\"artemis\">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic --> <acceptor name=\"amqp\">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor> <!-- STOMP Acceptor --> <acceptor name=\"stomp\">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor> <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. --> <acceptor name=\"hornetq\">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor> <!-- MQTT Acceptor --> <acceptor name=\"mqtt\">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor> </acceptors> </core> </configuration>",
"<acceptor name=\"ampq\">tcp://0.0.0.0:3232?protocols=AMQP</acceptor>",
"<acceptors> <acceptor name=\"amqp-acceptor\">tcp://localhost:5672?protocols=AMQP</acceptor> </acceptors>",
"<acceptors> <acceptor name=\"mqtt\">tcp://localhost:1883?protocols=MQTT</acceptor> </acceptors>",
"<acceptors> <acceptor name=\"mqtt\">tcp://localhost:1883?protocols=MQTT;receiveMaximum=50000;topicAliasMaximum=50000;maximumPacketSize;134217728; serverKeepAlive=30;closeMqttConnectionOnPublishAuthorizationFailure=false</acceptor> </acceptors>",
"<acceptors> <acceptor name=\"openwire-acceptor\">tcp://localhost:61616?protocols=OPENWIRE</acceptor> </acceptors>",
"<acceptors> <acceptor name=\"stomp-acceptor\">tcp://localhost:61613?protocols=STOMP</acceptor> </acceptors>",
"<acceptors> <acceptor name=\"stomp-acceptor\">tcp://localhost:61613?protocols=STOMP;stompEnableMessageId=true</acceptor> </acceptors>",
"amq-message-id : STOMP12345",
"<acceptors> <acceptor name=\"stomp-acceptor\">tcp://localhost:61613?protocols=STOMP;connectionTTL=20000</acceptor> </acceptors>",
"<configuration ...> <core ...> <connection-ttl-override>30000</connection-ttl-override> </core> <configuration>",
"SEND destination:/my/stomp/queue hello queue a ^@",
"SUBSCRIBE destination: /other/stomp/queue ack: client ^@",
"SEND destination:jms.queue.orders hello queue orders ^@",
"SUBSCRIBE destination:jms.topic.stocks ^@"
]
| https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/configuring_amq_broker/protocols |
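After editing broker.xml and restarting the broker, a quick way to confirm that the acceptors discussed above are actually listening is to check the ports from a shell. The port list below assumes the default acceptor configuration shown earlier; adjust it if you changed the bind addresses or ports:
$ ss -tlnp | grep -E '61616|5672|61613|1883|5445'
$ nc -zv localhost 61613
$ nc -zv localhost 5672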
24.2.3. Environment Variables | 24.2.3. Environment Variables Use the Environment tab to configure options for specific variables to set, pass, or unset for CGI scripts. Sometimes it is necessary to modify environment variables for CGI scripts or server-side include (SSI) pages. The Apache HTTP Server can use the mod_env module to configure the environment variables which are passed to CGI scripts and SSI pages. Use the Environment Variables page to configure the directives for this module. Use the Set for CGI Scripts section to set an environment variable that is passed to CGI scripts and SSI pages. For example, to set the environment variable MAXNUM to 50 , click the Add button inside the Set for CGI Script section, as shown in Figure 24.5, "Environment Variables" , and type MAXNUM in the Environment Variable text field and 50 in the Value to set text field. Click OK to add it to the list. The Set for CGI Scripts section configures the SetEnv directive. Use the Pass to CGI Scripts section to pass the value of an environment variable when the server is first started to CGI scripts. To see this environment variable, type the command env at a shell prompt. Click the Add button inside the Pass to CGI Scripts section and enter the name of the environment variable in the resulting dialog box. Click OK to add it to the list. The Pass to CGI Scripts section configures the PassEnv directive. Figure 24.5. Environment Variables To remove an environment variable so that the value is not passed to CGI scripts and SSI pages, use the Unset for CGI Scripts section. Click Add in the Unset for CGI Scripts section, and enter the name of the environment variable to unset. Click OK to add it to the list. This corresponds to the UnsetEnv directive. To edit any of these environment values, select it from the list and click the corresponding Edit button. To delete any entry from the list, select it and click the corresponding Delete button. To learn more about environment variables in the Apache HTTP Server, refer to the following: | [
"http://httpd.apache.org/docs-2.0/env.html"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/default_settings-environment_variables |
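Because the three sections of the Environment Variables page configure the SetEnv , PassEnv , and UnsetEnv directives respectively, the equivalent hand-written httpd.conf lines are a useful reference. MAXNUM 50 is the value used in the example above; the other variable names are purely illustrative:
SetEnv MAXNUM 50
PassEnv LD_LIBRARY_PATH
UnsetEnv TEMPDIR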
function::isinstr | function::isinstr Name function::isinstr - Returns whether a string is a substring of another string Synopsis Arguments s1 string to search in s2 substring to find Description This function returns 1 if string s1 contains s2 , otherwise zero. | [
"isinstr:long(s1:string,s2:string)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-isinstr |
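A minimal way to exercise this function is a one-line script run with stap -e ; the strings are arbitrary examples, and the probe prints 1 for a match and 0 otherwise before exiting:
$ stap -e 'probe begin { printf("%d %d\n", isinstr("hello world", "lo w"), isinstr("hello world", "xyz")); exit() }'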
11.11. Rebalancing Volumes | 11.11. Rebalancing Volumes If a volume has been expanded or shrunk using the add-brick or remove-brick commands, the data on the volume needs to be rebalanced among the servers. Note In a non-replicated volume, all bricks should be online to perform the rebalance operation using the start option. In a replicated volume, at least one of the bricks in the replica should be online. To rebalance a volume, use the following command on any of the servers: For example: When run without the force option, the rebalance command attempts to balance the space utilized across nodes. Files whose migration would cause the target node to have less available space than the source node are skipped. This results in linkto files being retained, which may cause slower access when a large number of linkto files are present. Red Hat strongly recommends you to disconnect all the older clients before executing the rebalance command to avoid a potential data loss scenario. Warning The Rebalance command can be executed with the force option even when the older clients are connected to the cluster. However, this could lead to a data loss situation. A rebalance operation with force , balances the data based on the layout, and hence optimizes or does away with the link files, but may lead to an imbalanced storage space used across bricks. This option is to be used only when there are a large number of link files in the system. To rebalance a volume forcefully, use the following command on any of the servers: For example: 11.11.1. Rebalance Throttling The rebalance process uses multiple threads to ensure good performance during migration of multiple files. During multiple file migration, there can be a severe impact on storage system performance and a throttling mechanism is provided to manage it. By default, the rebalance throttling is started in the normal mode. Configure the throttling modes to adjust the rate at which the files must be migrated For example: 11.11.2. Displaying Rebalance Progress To display the status of a volume rebalance operation, use the following command: For example: A rebalance operation starts a rebalance process on each node of the volume. Each process is responsible for rebalancing the files on its own individual node. Each row of the rebalance status output describes the progress of the operation on a single node. Important If there is a reboot while rebalancing the rebalance output might display an incorrect status and some files might not get rebalanced. Workaround: After the reboot, once the rebalance is completed, trigger another rebalance so that the files that were not balanced during the reboot are now rebalanced correctly, and the rebalance output gives the correct status. The following table describes the output of the rebalance status command: Table 11.2. Rebalance Status Output Description Property Name Description Node The name of the node. Rebalanced-files The number of files that were successfully migrated. size The total size of the files that were migrated. scanned The number of files scanned on the node. This includes the files that were migrated. failures The number of files that could not be migrated because of errors. skipped The number of files which were skipped because of various errors or reasons. status The status of the rebalance operation on the node is in progress , completed , or failed . run time in h:m:s The amount of time for which the process has been running on the node. 
The estimated time left for the rebalance to complete on all nodes is also displayed. The estimated time to complete is displayed only after the rebalance operation has been running for 10 minutes. In cases where the remaining time is extremely large, the estimated time to completion is displayed as >2 months and the user is advised to check again later. The time taken to complete a rebalance operation depends on the number of files estimated to be on the bricks and the rate at which files are being processed by the rebalance process. This value is recalculated every time the rebalance status command is executed and becomes more accurate the longer rebalance has been running, and for large data sets. The calculation assumes that a file system partition contains a single brick. A rebalance operation is considered complete when the status of every node is completed . For example: With this release, details about the files that are skipped during a rebalance operation can be obtained. Entries of all such files are available in the rebalance log with the message ID 109126. You can search for the message ID in the log file to get the list of all the skipped files: For example: To know more about the failed files, search for 'migrate-data failed' in the rebalance.log file. However, the count of failed files reported by rebalance will not match the number of "migrate-data failed" entries in the rebalance.log, because the failed count includes all possible failures and not just file migration. 11.11.3. Stopping a Rebalance Operation To stop a rebalance operation, use the following command: For example: | [
"gluster volume rebalance VOLNAME start",
"gluster volume rebalance test-volume start Starting rebalancing on volume test-volume has been successful",
"gluster volume rebalance VOLNAME start force",
"gluster volume rebalance test-volume start force Starting rebalancing on volume test-volume has been successful",
"gluster volume set VOLNAME rebal-throttle lazy|normal|aggressive",
"gluster volume set test-volume rebal-throttle lazy",
"gluster volume rebalance VOLNAME status",
"gluster volume rebalance test-volume status Node Rebalanced size scanned failures skipped status run time -files in h:m:s ------------- ---------- ------ ------- -------- ------- ----------- -------- localhost 71962 70.3GB 380852 0 0 in progress 2:02:20 server1 70489 68.8GB 502185 0 0 in progress 2:02:20 server2 70704 69.0GB 507728 0 0 in progress 2:02:20 server3 71819 70.1GB 435611 0 0 in progress 2:02:20 Estimated time left for rebalance to complete : 2:50:24",
"gluster volume rebalance test-volume status Node Rebalanced size scanned failures skipped status run time -files in h:m:s ---------- ---------- ----- ------- -------- ------- ----------- -------- node2 0 0Bytes 0 0 0 completed 0:02:23 node3 234 737.8KB 3350 0 257 completed 0:02:25 node4 3 14.6K 71 0 6 completed 0:00:02 localhost 317 1.1MB 3484 0 155 completed 0:02:38 volume rebalance: test-volume: success",
"grep -i 109126 /var/log/glusterfs/test-volume-rebalance.log [2018-03-15 09:14:30.203393] I [MSGID: 109126] [dht-rebalance.c:2715:gf_defrag_migrate_single_file] 0-test-volume-dht: File migration skipped for /linux-4.9.27/Documentation/ABI/stable/sysfs-fs-orangefs. [2018-03-15 09:14:31.262969] I [MSGID: 109126] [dht-rebalance.c:2715:gf_defrag_migrate_single_file] 0-test-volume-dht: File migration skipped for /linux-4.9.27/Documentation/ABI/stable/sysfs-devices. [2018-03-15 09:14:31.842631] I [MSGID: 109126] [dht-rebalance.c:2715:gf_defrag_migrate_single_file] 0-test-volume-dht: File migration skipped for /linux-4.9.27/Documentation/ABI/stable/sysfs-devices-system-cpu. [2018-03-15 09:14:33.733728] I [MSGID: 109126] [dht-rebalance.c:2715:gf_defrag_migrate_single_file] 0-test-volume-dht: File migration skipped for /linux-4.9.27/Documentation/ABI/testing/sysfs-bus-fcoe. [2018-03-15 09:14:35.576404] I [MSGID: 109126] [dht-rebalance.c:2715:gf_defrag_migrate_single_file] 0-test-volume-dht: File migration skipped for /linux-4.9.27/Documentation/ABI/testing/sysfs-bus-iio-frequency-ad9523. [2018-03-15 09:14:43.378480] I [MSGID: 109126] [dht-rebalance.c:2715:gf_defrag_migrate_single_file] 0-test-volume-dht: File migration skipped for /linux-4.9.27/Documentation/DocBook/kgdb.tmpl.",
"gluster volume rebalance VOLNAME stop",
"gluster volume rebalance test-volume stop Node Rebalanced size scanned failures skipped status run time -files in h:m:s ------------- ---------- ------- ------- -------- ------- ----------- -------- localhost 106504 104.0GB 558111 0 0 stopped 3:02:24 server1 102299 99.9GB 725239 0 0 stopped 3:02:24 server2 102264 99.9GB 737364 0 0 stopped 3:02:24 server3 106813 104.3GB 646581 0 0 stopped 3:02:24 Estimated time left for rebalance to complete : 2:06:38"
]
| https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-rebalancing_volumes |
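Putting the steps above together, a typical sequence after expanding a distributed volume looks like the following. The brick path and host name are examples only, and the final command is the skipped-file search already shown in this section:
# gluster volume add-brick test-volume server5:/rhgs/brick5/test-volume
# gluster volume rebalance test-volume start
# gluster volume rebalance test-volume status
# grep -i 109126 /var/log/glusterfs/test-volume-rebalance.log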
Chapter 6. Getting Started with nftables | Chapter 6. Getting Started with nftables The nftables framework provides packet classification facilities and it is the designated successor to the iptables , ip6tables , arptables , ebtables , and ipset tools. It offers numerous improvements in convenience, features, and performance over packet-filtering tools, most notably: built-in lookup tables instead of linear processing a single framework for both the IPv4 and IPv6 protocols rules all applied atomically instead of fetching, updating, and storing a complete rule set support for debugging and tracing in the rule set ( nftrace ) and monitoring trace events (in the nft tool) more consistent and compact syntax, no protocol-specific extensions a Netlink API for third-party applications Similarly to iptables , nftables use tables for storing chains. The chains contain individual rules for performing actions. The nft tool replaces all tools from the packet-filtering frameworks. The libnftnl library can be used for low-level interaction with nftables Netlink API over the libmnl library. To display the effect of rule set changes, use the nft list ruleset command. Since these tools add tables, chains, rules, sets, and other objects to the nftables rule set, be aware that nftables rule-set operations, such as the nft flush ruleset command, might affect rule sets installed using the formerly separate legacy commands. When to use firewalld or nftables firewalld : Use the firewalld utility for simple firewall use cases. The utility is easy to use and covers the typical use cases for these scenarios. nftables : Use the nftables utility to set up complex and performance critical firewalls, such as for a whole network. Important To avoid that the different firewall services influence each other, run only one of them on a RHEL host, and disable the other services. 6.1. Writing and executing nftables scripts The nftables framework provides a native scripting environment that brings a major benefit over using shell scripts to maintain firewall rules: the execution of scripts is atomic. This means that the system either applies the whole script or prevents the execution if an error occurs. This guarantees that the firewall is always in a consistent state. Additionally, the nftables script environment enables administrators to: add comments define variables include other rule set files This section explains how to use these features, as well as creating and executing nftables scripts. When you install the nftables package, Red Hat Enterprise Linux automatically creates *.nft scripts in the /etc/nftables/ directory. These scripts contain commands that create tables and empty chains for different purposes. 6.1.1. Supported nftables script formats The nftables scripting environment supports scripts in the following formats: You can write a script in the same format as the nft list ruleset command displays the rule set: You can use the same syntax for commands as in nft commands: 6.1.2. Running nftables scripts You can run nftables script either by passing it to the nft utility or execute the script directly. Prerequisites The procedure of this section assumes that you stored an nftables script in the /etc/nftables/example_firewall.nft file. Procedure 6.1. Running nftables scripts using the nft utility To run an nftables script by passing it to the nft utility, enter: Procedure 6.2. 
Running the nftables script directly: Steps that are required only once: Ensure that the script starts with the following shebang sequence: Important If you omit the -f parameter, the nft utility does not read the script and displays: Error: syntax error, unexpected newline, expecting string. Optional: Set the owner of the script to root : Make the script executable for the owner: Run the script: If no output is displayed, the system executed the script successfully. Important Even if nft executes the script successfully, incorrectly placed rules, missing parameters, or other problems in the script can cause that the firewall behaves not as expected. Additional resources For details about setting the owner of a file, see the chown(1) man page. For details about setting permissions of a file, see the chmod(1) man page. For more information about loading nftables rules with system boot, see Section 6.1.6, "Automatically loading nftables rules when the system boots" 6.1.3. Using comments in nftables scripts The nftables scripting environment interprets everything to the right of a # character as a comment. Example 6.1. Comments in an nftables script Comments can start at the beginning of a line, as well as to a command: 6.1.4. Using variables in an nftables script To define a variable in an nftables script, use the define keyword. You can store single values and anonymous sets in a variable. For more complex scenarios, use named sets or verdict maps. Variables with a single value The following example defines a variable named INET_DEV with the value enp1s0 : You can use the variable in the script by writing the USD sign followed by the variable name: Variables that contain an anonymous set The following example defines a variable that contains an anonymous set: You can use the variable in the script by writing the USD sign followed by the variable name: Note Note that curly braces have special semantics when you use them in a rule because they indicate that the variable represents a set. Additional resources For more information about sets, see Section 6.4, "Using sets in nftables commands" . For more information about verdict maps, see Section 6.5, "Using verdict maps in nftables commands" . 6.1.5. Including files in an nftables script The nftables scripting environment enables administrators to include other scripts by using the include statement. If you specify only a file name without an absolute or relative path, nftables includes files from the default search path, which is set to /etc on Red Hat Enterprise Linux. Example 6.2. Including files from the default search directory To include a file from the default search directory: Example 6.3. Including all *.nft files from a directory To include all files ending in *.nft that are stored in the /etc/nftables/rulesets/ directory: Note that the include statement does not match files beginning with a dot. Additional resources For further details, see the Include files section in the nft(8) man page. 6.1.6. Automatically loading nftables rules when the system boots The nftables systemd service loads firewall scripts that are included in the /etc/sysconfig/nftables.conf file. This section explains how to load firewall rules when the system boots. Prerequisites The nftables scripts are stored in the /etc/nftables/ directory. Procedure 6.3. Automatically loading nftables rules when the system boots Edit the /etc/sysconfig/nftables.conf file. 
If you enhance *.nft scripts created in /etc/nftables/ when you installed the nftables package, uncomment the include statement for these scripts. If you write scripts from scratch, add include statements to include these scripts. For example, to load the /etc/nftables/example.nft script when the nftables service starts, add: Optionally, start the nftables service to load the firewall rules without rebooting the system: Enable the nftables service. Additional resources For more information, see Section 6.1.1, "Supported nftables script formats" | [
"#!/usr/sbin/nft -f Flush the rule set flush ruleset table inet example_table { chain example_chain { # Chain for incoming packets that drops all packets that # are not explicitly allowed by any rule in this chain type filter hook input priority 0; policy drop; # Accept connections to port 22 (ssh) tcp dport ssh accept } }",
"#!/usr/sbin/nft -f Flush the rule set flush ruleset Create a table add table inet example_table Create a chain for incoming packets that drops all packets that are not explicitly allowed by any rule in this chain add chain inet example_table example_chain { type filter hook input priority 0 ; policy drop ; } Add a rule that accepts connections to port 22 (ssh) add rule inet example_table example_chain tcp dport ssh accept",
"nft -f /etc/nftables/example_firewall.nft",
"#!/usr/sbin/nft -f",
"chown root /etc/nftables/ example_firewall.nft",
"chmod u+x /etc/nftables/ example_firewall.nft",
"/etc/nftables/ example_firewall.nft",
"Flush the rule set flush ruleset add table inet example_table # Create a table",
"define INET_DEV = enp1s0",
"add rule inet example_table example_chain iifname USDINET_DEV tcp dport ssh accept",
"define DNS_SERVERS = { 192.0.2.1, 192.0.2.2 }",
"add rule inet example_table example_chain ip daddr USDDNS_SERVERS accept",
"include \"example.nft\"",
"include \"/etc/nftables/rulesets/*.nft\"",
"include \"/etc/nftables/example.nft\"",
"systemctl start nftables",
"systemctl enable nftables"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/chap-getting_started_with_nftables |
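As a sketch of how the pieces described above fit together, the following script combines a shebang line, variables, an anonymous set, and a chain definition in one file; the interface name enp1s0, the addresses, and the file name /etc/nftables/example_combined.nft are placeholders rather than values taken from the original examples.

#!/usr/sbin/nft -f
# Example only: clear the rule set and rebuild it from this file
flush ruleset

define INET_DEV = enp1s0
define ADMIN_HOSTS = { 192.0.2.10, 192.0.2.11 }

table inet example_table {
    chain example_chain {
        # Drop everything that is not explicitly accepted below
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iifname lo accept
        # Allow SSH only from the administrative hosts on the external interface
        iifname $INET_DEV ip saddr $ADMIN_HOSTS tcp dport ssh accept
    }
}

To apply the script once, run nft -f /etc/nftables/example_combined.nft ; to load it at boot, add include "/etc/nftables/example_combined.nft" to /etc/sysconfig/nftables.conf and enable the nftables service as described above.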
5.5. Creating Replicated Volumes | 5.5. Creating Replicated Volumes Replicated volume creates copies of files across multiple bricks in the volume. Use replicated volumes in environments where high-availability and high-reliability are critical. Use gluster volume create to create different types of volumes, and gluster volume info to verify successful volume creation. Prerequisites A trusted storage pool has been created, as described in Section 4.1, "Adding Servers to the Trusted Storage Pool" . Understand how to start and stop volumes, as described in Section 5.10, "Starting Volumes" . Warning Red Hat no longer recommends the use of two-way replication without arbiter bricks as Two-way replication without arbiter bricks is deprecated with Red Hat Gluster Storage 3.4 and no longer supported. This change affects both replicated and distributed-replicated volumes that do not use arbiter bricks. Two-way replication without arbiter bricks is being deprecated because it does not provide adequate protection from split-brain conditions. Even in distributed-replicated configurations, two-way replication cannot ensure that the correct copy of a conflicting file is selected without the use of a tie-breaking node. While a dummy node can be used as an interim solution for this problem, Red Hat strongly recommends that all volumes that currently use two-way replication without arbiter bricks are migrated to use either arbitrated replication or three-way replication. Instructions for migrating a two-way replicated volume without arbiter bricks to an arbitrated replicated volume are available in the 5.7.5. Converting to an arbitrated volume . Information about three-way replication is available in Section 5.5.1, "Creating Three-way Replicated Volumes" and Section 5.6.1, "Creating Three-way Distributed Replicated Volumes" . 5.5.1. Creating Three-way Replicated Volumes Three-way replicated volume creates three copies of files across multiple bricks in the volume. The number of bricks must be equal to the replica count for a replicated volume. To protect against server and disk failures, it is recommended that the bricks of the volume are from different servers. Synchronous three-way replication is now fully supported in Red Hat Gluster Storage. It is recommended that three-way replicated volumes use JBOD, but use of hardware RAID with three-way replicated volumes is also supported. Figure 5.2. Illustration of a Three-way Replicated Volume Creating three-way replicated volumes Run the gluster volume create command to create the replicated volume. The syntax is # gluster volume create NEW-VOLNAME [replica COUNT ] [transport tcp | rdma (Deprecated) | tcp,rdma] NEW-BRICK... The default value for transport is tcp . Other options can be passed such as auth.allow or auth.reject . See Section 11.1, "Configuring Volume Options" for a full list of parameters. Example 5.3. Replicated Volume with Three Storage Servers The order in which bricks are specified determines how bricks are replicated with each other. For example, every n bricks, where 3 is the replica count forms a replica set. This is illustrated in Figure 5.2, "Illustration of a Three-way Replicated Volume" . Run # gluster volume start VOLNAME to start the volume. Run gluster volume info command to optionally display the volume information. Important By default, the client-side quorum is enabled on three-way replicated volumes to minimize split-brain scenarios. 
For more information on client-side quorum, see Section 11.15.1.2, "Configuring Client-Side Quorum" 5.5.2. Creating Sharded Replicated Volumes Sharding breaks files into smaller pieces so that they can be distributed across the bricks that comprise a volume. This is enabled on a per-volume basis. When sharding is enabled, files written to a volume are divided into pieces. The size of the pieces depends on the value of the volume's features.shard-block-size parameter. The first piece is written to a brick and given a GFID like a normal file. Subsequent pieces are distributed evenly between bricks in the volume (sharded bricks are distributed by default), but they are written to that brick's .shard directory, and are named with the GFID and a number indicating the order of the pieces. For example, if a file is split into four pieces, the first piece is named GFID and stored normally. The other three pieces are named GFID.1, GFID.2, and GFID.3 respectively. They are placed in the .shard directory and distributed evenly between the various bricks in the volume. Because sharding distributes files across the bricks in a volume, it lets you store files with a larger aggregate size than any individual brick in the volume. Because the file pieces are smaller, heal operations are faster, and geo-replicated deployments can sync the small pieces of a file that have changed, rather than syncing the entire aggregate file. Sharding also lets you increase volume capacity by adding bricks to a volume in an ad-hoc fashion. 5.5.2.1. Supported use cases Sharding has one supported use case: in the context of providing Red Hat Gluster Storage as a storage domain for Red Hat Enterprise Virtualization, to provide storage for live virtual machine images. Note that sharding is also a requirement for this use case, as it provides significant performance improvements over implementations. Important Quotas are not compatible with sharding. Important Sharding is supported in new deployments only, as there is currently no upgrade path for this feature. Example 5.4. Example: Three-way replicated sharded volume Set up a three-way replicated volume, as described in the Red Hat Gluster Storage Administration Guide : https://access.redhat.com/documentation/en-US/red_hat_gluster_storage/3.5/html/Administration_Guide/sect-Creating_Replicated_Volumes.html#Creating_Three-way_Replicated_Volumes . Before you start your volume, enable sharding on the volume. Start the volume and ensure it is working as expected. 5.5.2.2. Configuration Options Sharding is enabled and configured at the volume level. The configuration options are as follows. features.shard Enables or disables sharding on a specified volume. Valid values are enable and disable . The default value is disable . Note that this only affects files created after this command is run; files created before this command is run retain their old behaviour. features.shard-block-size Specifies the maximum size of the file pieces when sharding is enabled. The supported value for this parameter is 512MB. Note that this only affects files created after this command is run; files created before this command is run retain their old behaviour. 5.5.2.3. Finding the pieces of a sharded file When you enable sharding, you might want to check that it is working correctly, or see how a particular file has been sharded across your volume. To find the pieces of a file, you need to know that file's GFID. 
To obtain a file's GFID, run: Once you have the GFID, you can run the following command on your bricks to see how this file has been distributed: | [
"gluster v create glutervol data replica 3 transport tcp server1:/rhgs/brick1 server2:/rhgs/brick2 server3:/rhgs/brick3 volume create: glutervol: success: please start the volume to access",
"gluster v start glustervol volume start: glustervol: success",
"gluster volume set test-volume features.shard enable",
"gluster volume test-volume start gluster volume info test-volume",
"gluster volume set volname features.shard enable",
"gluster volume set volname features.shard-block-size 32MB",
"getfattr -d -m. -e hex path_to_file",
"ls /rhgs/*/.shard -lh | grep GFID"
]
| https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-creating_replicated_volumes |
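For reference, a condensed end-to-end sketch of creating a three-way replicated volume with sharding enabled might look like the following; the volume name vmstore and the brick paths are placeholders, and sharding is enabled before the volume is started so that all files written to it are sharded.

# gluster volume create vmstore replica 3 transport tcp server1:/rhgs/brick1 server2:/rhgs/brick2 server3:/rhgs/brick3
# gluster volume set vmstore features.shard enable
# gluster volume set vmstore features.shard-block-size 512MB
# gluster volume start vmstore
# gluster volume info vmstore

Because the shard settings only affect files created after they are applied, enabling sharding before any data is written avoids a mix of sharded and unsharded files on the same volume.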
Part VII. Kernel Customization with Bootloader | Part VII. Kernel Customization with Bootloader This part describes how to use the GRUB 2 bootloader to assist administrators with kernel customization. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/part-kernel_customization_with_bootloader |
Chapter 3. Role-Based Parameters | Chapter 3. Role-Based Parameters You can modify the bevavior of specific overcloud composable roles with overcloud role-based parameters. Substitute _ROLE_ with the name of the role. For example, for _ROLE_Count use ControllerCount . Parameter Description _ROLE_AnyErrorsFatal Sets the any_errors_fatal value when running config-download Ansible playbooks. The default value is yes . _ROLE_ControlPlaneSubnet Name of the subnet on ctlplane network for this role. The default value is ctlplane-subnet . _ROLE_Count The number of nodes to deploy in a role. The default value is 1 . _ROLE_ExtraConfig Role specific additional hiera configuration to inject into the cluster. _ROLE_ExtraGroupVars Optional extra Ansible group vars. _ROLE_HostnameFormat Format for node hostnames. Note that %index% is translated into the index of the node (e.g 0/1/2) and %stackname% is replaced with the stack name (e.g overcloud ). The default value is %stackname%-_role_-%index% . _ROLE_LocalMtu MTU to use for the Undercloud local_interface. The default value is 1500 . _ROLE_MaxFailPercentage Sets the max_fail_percentage value when running config-download Ansible playbooks. The default value is 0 . _ROLE_NetConfigOverride Custom JSON data to be used to override the os-net-config config. This is meant to be used by net_config_override parameter in tripleoclient to provide an easy means to pass in custom net configs for the Undercloud. _ROLE_NetworkConfigTemplate ROLE NetworkConfig Template. _ROLE_NetworkConfigUpdate When set to "True", existing networks will be updated on the overcloud. This parameter replaces the functionality previously provided by NetworkDeploymentActions. Defaults to "False" so that only new nodes will have their networks configured. This is a role based parameter. The default value is False . _ROLE_Parameters Optional Role Specific parameters to be provided to service. _ROLE_RemovalPolicies List of resources to be removed from the role's ResourceGroup when doing an update that requires removal of specific resources. _ROLE_RemovalPoliciesMode How to handle change to RemovalPolicies for ROLE ResourceGroup when doing an update. Default mode append will append to the existing blocklist and update would replace the blocklist. The default value is append . _ROLE_SchedulerHints Optional scheduler hints to pass to OpenStack Compute (nova). _ROLE_ServiceNetMap Role specific ServiceNetMap overrides, the map provided will be merged with the global ServiceNetMap when passing the ServiceNetMap to the ROLE_ServiceChain resource and the _ROLE resource group. For example: _ROLE_ServiceNetMap: NovaLibvirtNetwork: internal_api_leaf2. _ROLE_Services A list of service resources (configured in the OpenStack Orchestration (heat) resource_registry) which represent nested stacks for each service that should get installed on the ROLE role. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/overcloud_parameters/ref_role-based-parameters_overcloud_parameters |
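As an illustration of how these role-based parameters are typically supplied, the following sketch writes a small custom environment file and passes it to the deployment command; the file path, the counts, and the hostname format shown are examples only, not required values.

$ cat > /home/stack/templates/role-params.yaml << 'EOF'
parameter_defaults:
  ControllerCount: 3
  ComputeCount: 2
  ComputeHostnameFormat: '%stackname%-compute-%index%'
  ComputeSchedulerHints:
    'capabilities:node': 'compute-%index%'
EOF
$ openstack overcloud deploy --templates -e /home/stack/templates/role-params.yaml

Here ControllerCount and ComputeCount are the _ROLE_Count parameter applied to the Controller and Compute roles, and the same substitution pattern applies to the other parameters in the table above.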
Chapter 2. OpenStack networking concepts | Chapter 2. OpenStack networking concepts OpenStack Networking has system services to manage core services such as routing, DHCP, and metadata. Together, these services are included in the concept of the Controller node, which is a conceptual role assigned to a physical server. A physical server is typically assigned the role of Network node and dedicated to the task of managing Layer 3 routing for network traffic to and from instances. In OpenStack Networking, you can have multiple physical hosts performing this role, allowing for redundant service in the event of hardware failure. For more information, see the chapter on Layer 3 High Availability . Note Red Hat OpenStack Platform 11 added support for composable roles, allowing you to separate network services into a custom role. However, for simplicity, this guide assumes that a deployment uses the default controller role. 2.1. Installing OpenStack Networking (neutron) The OpenStack Networking component is installed as part of a Red Hat OpenStack Platform director deployment. For more information about director deployment, see Director Installation and Usage . 2.2. OpenStack Networking diagram This diagram depicts a sample OpenStack Networking deployment, with a dedicated OpenStack Networking node performing layer 3 routing and DHCP, and running the advanced services firewall as a service (FWaaS) and load balancing as a Service (LBaaS). Two Compute nodes run the Open vSwitch (openvswitch-agent) and have two physical network cards each, one for project traffic, and another for management connectivity. The OpenStack Networking node has a third network card specifically for provider traffic: 2.3. Security groups Security groups and rules filter the type and direction of network traffic that neutron ports send and receive. This provides an additional layer of security to complement any firewall rules present on the compute instance. The security group is a container object with one or more security rules. A single security group can manage traffic to multiple compute instances. Ports created for floating IP addresses, OpenStack Networking LBaaS VIPs, and instances are associated with a security group. If you do not specify a security group, then the port is associated with the default security group. By default, this group drops all inbound traffic and allows all outbound traffic. However, traffic flows between instances that are members of the default security group, because the group has a remote group ID that points to itself. To change the filtering behavior of the default security group, you can add security rules to the group, or create entirely new security groups. 2.4. Open vSwitch Open vSwitch (OVS) is a software-defined networking (SDN) virtual switch similar to the Linux software bridge. OVS provides switching services to virtualized networks with support for industry standard , OpenFlow, and sFlow. OVS can also integrate with physical switches using layer 2 features, such as STP, LACP, and 802.1Q VLAN tagging. Open vSwitch version 1.11.0-1.el6 or later also supports tunneling with VXLAN and GRE. For more information about network interface bonds, see the Network Interface Bonding chapter of the Advanced Overcloud Customization guide. Note To mitigate the risk of network loops in OVS, only a single interface or a single bond can be a member of a given bridge. If you require multiple bonds or interfaces, you can configure multiple bridges. 2.5. 
Modular layer 2 (ML2) networking ML2 is the OpenStack Networking core plug-in introduced in the OpenStack Havana release. Superseding the model of monolithic plug-ins, the ML2 modular design enables the concurrent operation of mixed network technologies. The monolithic Open vSwitch and Linux Bridge plug-ins have been deprecated and removed; their functionality is now implemented by ML2 mechanism drivers. Note ML2 is the default OpenStack Networking plug-in, with OVN configured as the default mechanism driver. 2.5.1. The reasoning behind ML2 Previously, OpenStack Networking deployments could use only the plug-in selected at implementation time. For example, a deployment running the Open vSwitch (OVS) plug-in was required to use the OVS plug-in exclusively. The monolithic plug-in did not support the simultaneously use of another plug-in such as linuxbridge. This limitation made it difficult to meet the needs of environments with heterogeneous requirements. 2.5.2. ML2 network types Multiple network segment types can be operated concurrently. In addition, these network segments can interconnect using ML2 support for multi-segmented networks. Ports are automatically bound to the segment with connectivity; it is not necessary to bind ports to a specific segment. Depending on the mechanism driver, ML2 supports the following network segment types: flat GRE local VLAN VXLAN Geneve Enable Type drivers in the ML2 section of the ml2_conf.ini file. For example: 2.5.3. ML2 mechanism drivers Plug-ins are now implemented as mechanisms with a common code base. This approach enables code reuse and eliminates much of the complexity around code maintenance and testing. Note For the list of supported mechanism drivers, see Release Notes . The default mechanism driver is OVN. Mechanism drivers are enabled in the ML2 section of the ml2_conf.ini file. For example: Note Red Hat OpenStack Platform director manages these settings. Do not change them manually. 2.6. ML2 type and mechanism driver compatibility Mechanism Driver Type Driver flat gre vlan vxlan geneve ovn yes no yes no yes openvswitch yes yes yes yes no 2.7. Limits of the ML2/OVN mechanism driver 2.7.1. No supported ML2/OVS to ML2/OVN migration method in this release This release of the Red Hat OpenStack Platform (RHOSP) does not provide a supported migration from the ML2/OVS mechanism driver to the ML2/OVN mechanism driver. This RHOSP release does not support the OpenStack community migration strategy. Migration support is planned for a future RHOSP release. To track the progress of migration support, see https://bugzilla.redhat.com/show_bug.cgi?id=1862888 . 2.7.2. ML2/OVS features not yet supported by ML2/OVN Feature Notes Track this Feature Fragmentation / Jumbo Frames OVN does not yet support sending ICMP "fragmentation needed" packets. Larger ICMP/UDP packets that require fragmentation do not work with ML2/OVN as they would with the ML2/OVS driver implementation. TCP traffic is handled by maximum segment sized (MSS) clamping. https://bugzilla.redhat.com/show_bug.cgi?id=1547074 (ovn-network) https://bugzilla.redhat.com/show_bug.cgi?id=1702331 (Core ovn) Port Forwarding OVN does not support port forwarding. https://bugzilla.redhat.com/show_bug.cgi?id=1654608 https://blueprints.launchpad.net/neutron/+spec/port-forwarding Security Groups Logging API ML2/OVN does not provide a log file that logs security group events such as an instance trying to execute restricted operations or access restricted ports in remote servers. 
https://bugzilla.redhat.com/show_bug.cgi?id=1619266 Multicast When using ML2/OVN as the integration bridge, multicast traffic is treated as broadcast traffic. The integration bridge operates in FLOW mode, so IGMP snooping is not available. To support this, core OVN must support IGMP snooping. https://bugzilla.redhat.com/show_bug.cgi?id=1672278 SR-IOV Presently, SR-IOV only works with the neutron DHCP agent deployed. https://bugzilla.redhat.com/show_bug.cgi?id=1666684 Provisioning Baremetal Machines with OVN DHCP The built-in DHCP server on OVN presently can not provision baremetal nodes. It cannot serve DHCP for the provisioning networks. Chainbooting iPXE requires tagging ( --dhcp-match in dnsmasq), which is not supported in the OVN DHCP server. https://bugzilla.redhat.com/show_bug.cgi?id=1622154 OVS_DPDK OVS_DPDK is presently not supported with OVN. 2.8. Using the ML2/OVS mechanism driver instead of the default ML2/OVN driver If your application requires the ML2/OVS mechanism driver, you can deploy the overcloud with the environment file neutron-ovs.yaml , which disables the default ML2/OVN mechanism driver and enables ML2/OVS. 2.8.1. Using ML2/OVS in a new RHOSP 16.0 deployment In the overcloud deployment command, include the environment file neutron-ovs.yaml as shown in the following example. For more information about using environment files, see Including Environment Files in Overcloud Creation in the Advanced Overcloud Customization guide. 2.8.2. Upgrading from ML2/OVS in a RHOSP to ML2/OVS in RHOSP 16.0 To keep using ML2/OVS after an upgrade from a version of RHOSP that uses ML2/OVS, follow Red Hat's upgrade procedure as documented, and do not perform the ML2/OVS-to-ML2/OVN migration. The upgrade procedure includes adding -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml to the overcloud deployment command. 2.9. Configuring the L2 population driver The L2 Population driver enables broadcast, multicast, and unicast traffic to scale out on large overlay networks. By default, Open vSwitch GRE and VXLAN replicate broadcasts to every agent, including those that do not host the destination network. This design requires the acceptance of significant network and processing overhead. The alternative design introduced by the L2 Population driver implements a partial mesh for ARP resolution and MAC learning traffic; it also creates tunnels for a particular network only between the nodes that host the network. This traffic is sent only to the necessary agent by encapsulating it as a targeted unicast. To enable the L2 Population driver, complete the following steps: 1. Enable the L2 population driver by adding it to the list of mechanism drivers. You also must enable at least one tunneling driver enabled; either GRE, VXLAN, or both. Add the appropriate configuration options to the ml2_conf.ini file: Note Neutron's Linux Bridge ML2 driver and agent were deprecated in Red Hat OpenStack Platform 11. The Open vSwitch (OVS) plugin OpenStack Platform director default, and is recommended by Red Hat for general usage. 2. Enable L2 population in the openvswitch_agent.ini file. Enable it on each node that contains the L2 agent: Note To install ARP reply flows, configure the arp_responder flag: 2.10. OpenStack Networking services By default, Red Hat OpenStack Platform includes components that integrate with the ML2 and Open vSwitch plugin to provide networking functionality in your deployment: 2.10.1. L3 agent The L3 agent is part of the openstack-neutron package. 
Use network namespaces to provide each project with its own isolated layer 3 routers, which direct traffic and provide gateway services for the layer 2 networks. The L3 agent assists with managing these routers. The nodes that host the L3 agent must not have a manually-configured IP address on a network interface that is connected to an external network. Instead there must be a range of IP addresses from the external network that are available for use by OpenStack Networking. Neutron assigns these IP addresses to the routers that provide the link between the internal and external networks. The IP range that you select must be large enough to provide a unique IP address for each router in the deployment as well as each floating IP. 2.10.2. DHCP agent The OpenStack Networking DHCP agent manages the network namespaces that are spawned for each project subnet to act as DHCP server. Each namespace runs a dnsmasq process that can allocate IP addresses to virtual machines on the network. If the agent is enabled and running when a subnet is created then by default that subnet has DHCP enabled. 2.10.3. Open vSwitch agent The Open vSwitch (OVS) neutron plug-in uses its own agent, which runs on each node and manages the OVS bridges. The ML2 plugin integrates with a dedicated agent to manage L2 networks. By default, Red Hat OpenStack Platform uses ovs-agent , which builds overlay networks using OVS bridges. 2.11. Project and provider networks The following diagram presents an overview of the project and provider network types, and illustrates how they interact within the overall OpenStack Networking topology: 2.11.1. Project networks Users create project networks for connectivity within projects. Project networks are fully isolated by default and are not shared with other projects. OpenStack Networking supports a range of project network types: Flat - All instances reside on the same network, which can also be shared with the hosts. No VLAN tagging or other network segregation occurs. VLAN - OpenStack Networking allows users to create multiple provider or project networks using VLAN IDs (802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to communicate with each other across the environment. They can also communicate with dedicated servers, firewalls, load balancers and other network infrastructure on the same layer 2 VLAN. VXLAN and GRE tunnels - VXLAN and GRE use network overlays to support private communication between instances. An OpenStack Networking router is required to enable traffic to traverse outside of the GRE or VXLAN project network. A router is also required to connect directly-connected project networks with external networks, including the Internet; the router provides the ability to connect to instances directly from an external network using floating IP addresses. VXLAN and GRE type drivers are compatible with the ML2/OVS mechanism driver. GENEVE tunnels - GENEVE recognizes and accommodates changing capabilities and needs of different devices in network virtualization. It provides a framework for tunneling rather than being prescriptive about the entire system. Geneve defines the content of the metadata flexibly that is added during encapsulation and tries to adapt to various virtualization scenarios. It uses UDP as its transport protocol and is dynamic in size using extensible option headers. Geneve supports unicast, multicast, and broadcast. The GENEVE type driver is compatible with the ML2/OVN mechanism driver. 
Note You can configure QoS policies for project networks. For more information, see Chapter 10, Configuring Quality of Service (QoS) policies . 2.11.2. Provider networks The OpenStack administrator creates provider networks. Provider networks map directly to an existing physical network in the data center. Useful network types in this category include flat (untagged) and VLAN (802.1Q tagged). You can also share provider networks among projects as part of the network creation process. 2.11.2.1. Flat provider networks You can use flat provider networks to connect instances directly to the external network. This is useful if you have multiple physical networks (for example, physnet1 and physnet2 ) and separate physical interfaces ( eth0 physnet1 and eth1 physnet2 ), and intend to connect each Compute and Network node to those external networks. To use multiple vlan-tagged interfaces on a single interface to connect to multiple provider networks, see Section 7.3, "Using VLAN provider networks" . 2.11.2.2. Configuring networking for Controller nodes 1. Edit /etc/neutron/plugin.ini (symbolic link to /etc/neutron/plugins/ml2/ml2_conf.ini ) to add flat to the existing list of values and set flat_networks to * . For example: 2. Create an external network as a flat network and associate it with the configured physical_network . Configure it as a shared network (using --share ) to let other users create instances that connect to the external network directly. 3. Create a subnet using the openstack subnet create command, or the dashboard. For example: 4. Restart the neutron-server service to apply the change: 2.11.2.3. Configuring networking for the Network and Compute nodes Complete the following steps on the Network node and Compute nodes to connect the nodes to the external network, and allow instances to communicate directly with the external network. 1. Create an external network bridge (br-ex) and add an associated port (eth1) to it: Create the external bridge in /etc/sysconfig/network-scripts/ifcfg-br-ex : In /etc/sysconfig/network-scripts/ifcfg-eth1 , configure eth1 to connect to br-ex : Reboot the node or restart the network service for the changes to take effect. 2. Configure physical networks in /etc/neutron/plugins/ml2/openvswitch_agent.ini and map bridges to the physical network: Note For more information on bridge mappings, see Chapter 11, Configuring bridge mappings . 3. Restart the neutron-openvswitch-agent service on both the network and compute nodes to apply the changes: 2.11.2.4. Configuring the Network node 1. Set external_network_bridge = to an empty value in /etc/neutron/l3_agent.ini : Setting external_network_bridge = to an empty value allows multiple external network bridges. OpenStack Networking creates a patch from each bridge to br-int . 2. Restart neutron-l3-agent for the changes to take effect. Note If there are multiple flat provider networks, then each of them must have a separate physical interface and bridge to connect them to the external network. Configure the ifcfg-* scripts appropriately and use a comma-separated list for each network when specifying the mappings in the bridge_mappings option. For more information on bridge mappings, see Chapter 11, Configuring bridge mappings . 2.12. Layer 2 and layer 3 networking When designing your virtual network, anticipate where the majority of traffic is going to be sent. Network traffic moves faster within the same logical network, rather than between multiple logical networks. 
This is because traffic between logical networks (using different subnets) must pass through a router, resulting in additional latency. Consider the diagram below which has network traffic flowing between instances on separate VLANs: Note Even a high performance hardware router adds latency to this configuration. 2.12.1. Use switching where possible Because switching occurs at a lower level of the network (layer 2) it can function faster than the routing that occurs at layer 3. Design as few hops as possible between systems that communicate frequently. For example, the following diagram depicts a switched network that spans two physical nodes, allowing the two instances to communicate directly without using a router for navigation first. Note that the instances now share the same subnet, to indicate that they are on the same logical network: To allow instances on separate nodes to communicate as if they are on the same logical network, use an encapsulation tunnel such as VXLAN or GRE. Red Hat recommends adjusting the MTU size from end-to-end to accommodate the additional bits required for the tunnel header, otherwise network performance can be negatively impacted as a result of fragmentation. For more information, see Configure MTU Settings . You can further improve the performance of VXLAN tunneling by using supported hardware that features VXLAN offload capabilities. The full list is available here: https://access.redhat.com/articles/1390483 | [
"[ml2] type_drivers = local,flat,vlan,gre,vxlan,geneve",
"[ml2] mechanism_drivers = ovn",
"-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml",
"[ml2] type_drivers = local,flat,vlan,gre,vxlan,geneve mechanism_drivers = l2population",
"[agent] l2_population = True",
"[agent] l2_population = True arp_responder = True",
"type_drivers = vxlan,flat flat_networks =*",
"openstack network create --share --provider-network-type flat --provider-physical-network physnet1 --external public01",
"openstack subnet create --no-dhcp --allocation-pool start=192.168.100.20,end=192.168.100.100 --gateway 192.168.100.1 --network public01 public_subnet",
"systemctl restart tripleo_neutron_api",
"DEVICE=br-ex TYPE=OVSBridge DEVICETYPE=ovs ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=none",
"DEVICE=eth1 TYPE=OVSPort DEVICETYPE=ovs OVS_BRIDGE=br-ex ONBOOT=yes NM_CONTROLLED=no BOOTPROTO=none",
"bridge_mappings = physnet1:br-ex",
"systemctl restart neutron-openvswitch-agent",
"external_network_bridge =",
"systemctl restart neutron-l3-agent"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/networking_guide/sec-networking-concepts |
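To complement the provider network example above, the following sketch shows the matching project-side commands for creating an isolated project network, attaching it to a router, and using the public01 external network created earlier as the gateway; the names internal_net, internal_subnet, and internal_router and the 192.0.2.0/24 range are placeholders.

$ openstack network create internal_net
$ openstack subnet create --network internal_net --subnet-range 192.0.2.0/24 internal_subnet
$ openstack router create internal_router
$ openstack router set --external-gateway public01 internal_router
$ openstack router add subnet internal_router internal_subnet

Instances attached to internal_net can then reach the external network through the router, and floating IP addresses allocated from public01 can be associated with them for inbound access.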
2.12. OProfile | 2.12. OProfile OProfile is a system-wide performance monitoring tool. It uses the processor's dedicated performance monitoring hardware to retrieve information about the kernel and system executables to determine the frequency of certain events, such as when memory is referenced, the number of second-level cache requests, and the number of hardware interrupts received. OProfile can also be used to determine processor usage, and to determine which applications and services are used most often. However, OProfile does have several limitations: Performance monitoring samples may not be precise. Because the processor may execute instructions out of order, samples can be recorded from a nearby instruction instead of the instruction that triggered the interrupt. OProfile expects processes to start and stop multiple times. As such, samples from multiple runs are allowed to accumulate. You may need to clear the sample data from previous runs. OProfile focuses on identifying problems with processes limited by CPU access. It is therefore not useful for identifying processes that are sleeping while they wait for locks or other events. For more detailed information about OProfile, see Section A.14, "OProfile" , or the Red Hat Enterprise Linux 7 System Administrator's Guide . Alternatively, refer to the documentation on your system, located in /usr/share/doc/oprofile-version . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-oprofile
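As a minimal sketch of a typical session with the tools described above, the commands below profile a single application and then summarize the recorded samples; ./my_application is a placeholder for your own binary, and the oprofile_data directory is created in the current working directory by operf.

# operf ./my_application
# opreport
# opreport --symbols ./my_application
# rm -rf oprofile_data

The opreport --symbols output is most useful when the profiled binary and its libraries have debugging information installed; removing the oprofile_data directory clears the accumulated samples before a new run.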
11.4. Configuring Clustered Services | 11.4. Configuring Clustered Services The IdM server is not cluster aware . However, it is possible to configure a clustered service to be part of IdM by synchronizing Kerberos keys across all of the participating hosts and configuring services running on the hosts to respond to whatever names the clients use. Enroll all of the hosts in the cluster into the IdM domain. Create any service principals and generate the required keytabs. Collect any keytabs that have been set up for services on the host, including the host keytab at /etc/krb5.keytab . Use the ktutil command to produce a single keytab file that contains the contents of all of the keytab files. For each file, use the rkt command to read the keys from that file. Use the wkt command to write all of the keys which have been read to a new keytab file. Replace the keytab files on each host with the newly-created combined keytab file. At this point, each host in this cluster can now impersonate any other host. Some services require additional configuration to accommodate cluster members which do not reset hostnames when taking over a failed service. For sshd , set GSSAPIStrictAcceptorCheck no in /etc/ssh/sshd_config . For mod_auth_kerb , set KrbServiceName Any in /etc/httpd/conf.d/auth_kerb.conf . Note For SSL servers, the subject name or a subject alternative name for the server's certificate must appear correct when a client connects to the clustered host. If possible, share the private key among all of the hosts. If each cluster member contains a subject alternative name which includes the names of all the other cluster members, that satisfies any client connection requirements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/ipa-cluster |
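A minimal sketch of the keytab merging step described above, using ktutil interactively, might look like the following; the file names under /tmp are placeholders for the keytabs collected from the cluster members and for the combined output file.

# ktutil
ktutil:  rkt /etc/krb5.keytab
ktutil:  rkt /tmp/node2.keytab
ktutil:  rkt /tmp/http-service.keytab
ktutil:  wkt /tmp/cluster.keytab
ktutil:  quit

After the combined file is written, copy it to each host in the cluster and put it in place of the existing keytab files, for example as /etc/krb5.keytab, keeping the file owned by root and readable only by root.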
17.12. Attaching a Virtual NIC Directly to a Physical Interface | 17.12. Attaching a Virtual NIC Directly to a Physical Interface As an alternative to the default NAT connection, you can use the macvtap driver to attach the guest's NIC directly to a specified physical interface of the host machine. This is not to be confused with device assignment (also known as passthrough). Macvtap connection has the following modes, each with different benefits and usecases: Physical interface delivery modes VEPA In virtual ethernet port aggregator (VEPA) mode, all packets from the guests are sent to the external switch. This enables the user to force guest traffic through the switch. For VEPA mode to work correctly, the external switch must also support hairpin mode , which ensures that packets whose destination is a guest on the same host machine as their source guest are sent back to the host by the external switch. Figure 17.23. VEPA mode bridge Packets whose destination is on the same host machine as their source guest are directly delivered to the target macvtap device. Both the source device and the destination device need to be in bridge mode for direct delivery to succeed. If either one of the devices is in VEPA mode, a hairpin-capable external switch is required. Figure 17.24. Bridge mode private All packets are sent to the external switch and will only be delivered to a target guest on the same host machine if they are sent through an external router or gateway and these send them back to the host. Private mode can be used to prevent the individual guests on the single host from communicating with each other. This procedure is followed if either the source or destination device is in private mode. Figure 17.25. Private mode passthrough This feature attaches a physical interface device or a SR-IOV Virtual Function (VF) directly to a guest without losing the migration capability. All packets are sent directly to the designated network device. Note that a single network device can only be passed through to a single guest, as a network device cannot be shared between guests in passthrough mode. Figure 17.26. Passthrough mode Macvtap can be configured by changing the domain XML file or by using the virt-manager interface. 17.12.1. Configuring macvtap using domain XML Open the domain XML file of the guest and modify the <devices> element as follows: The network access of direct attached guest virtual machines can be managed by the hardware switch to which the physical interface of the host physical machine is connected. The interface can have additional parameters as shown below, if the switch is conforming to the IEEE 802.1Qbg standard. The parameters of the virtualport element are documented in more detail in the IEEE 802.1Qbg standard. The values are network specific and should be provided by the network administrator. In 802.1Qbg terms, the Virtual Station Interface (VSI) represents the virtual interface of a virtual machine. Also note that IEEE 802.1Qbg requires a non-zero value for the VLAN ID. Virtual Station Interface types managerid The VSI Manager ID identifies the database containing the VSI type and instance definitions. This is an integer value and the value 0 is reserved. typeid The VSI Type ID identifies a VSI type characterizing the network access. VSI types are typically managed by network administrator. This is an integer value. typeidversion The VSI Type Version allows multiple versions of a VSI Type. This is an integer value. 
instanceid The VSI Instance ID is generated when a VSI instance (a virtual interface of a virtual machine) is created. This is a globally unique identifier. profileid The profile ID contains the name of the port profile that is to be applied onto this interface. This name is resolved by the port profile database into the network parameters from the port profile, and those network parameters will be applied to this interface. Each of the four types is configured by changing the domain XML file. Once this file is opened, change the mode setting as shown: The profile ID is shown here: 17.12.2. Configuring macvtap using virt-manager Open the virtual hardware details window ⇒ select NIC in the menu ⇒ for Network source , select host device name : macvtap ⇒ select the intended Source mode . The virtual station interface types can then be set up in the Virtual port submenu. Figure 17.27. Configuring macvtap in virt-manager | [
"<devices> <interface type='direct'> <source dev='eth0' mode='vepa'/> </interface> </devices>",
"<devices> <interface type='direct'> <source dev='eth0.2' mode='vepa'/> <virtualport type=\"802.1Qbg\"> <parameters managerid=\"11\" typeid=\"1193047\" typeidversion=\"2\" instanceid=\"09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f\"/> </virtualport> </interface> </devices>",
"<devices> <interface type='direct'> <source dev='eth0' mode='private'/> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-directly_attaching_to_physical_interface |
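If you prefer not to edit the full domain XML by hand, one possible approach is to place the interface definition in its own file and hot-plug it with virsh; the guest name guest1, the file name macvtap-if.xml, and the host interface eth0 below are placeholders, and bridge mode is used as in the earlier example.

# cat > macvtap-if.xml << EOF
<interface type='direct'>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
</interface>
EOF
# virsh attach-device guest1 macvtap-if.xml --live --config
# virsh domiflist guest1

The --live flag attaches the interface to the running guest and --config makes the change persistent across restarts; virsh domiflist confirms that the new macvtap interface is present.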
Chapter 1. Preparing to install with the Agent-based Installer | Chapter 1. Preparing to install with the Agent-based Installer 1.1. About the Agent-based Installer The Agent-based installation method provides the flexibility to boot your on-premises servers in any way that you choose. It combines the ease of use of the Assisted Installation service with the ability to run offline, including in air-gapped environments. Agent-based installation is a subcommand of the OpenShift Container Platform installer. It generates a bootable ISO image containing all of the information required to deploy an OpenShift Container Platform cluster, with an available release image. The configuration is in the same format as for the installer-provisioned infrastructure and user-provisioned infrastructure installation methods. The Agent-based Installer can also optionally generate or accept Zero Touch Provisioning (ZTP) custom resources. ZTP allows you to provision new edge sites with declarative configurations of bare-metal equipment. Table 1.1. Agent-based Installer supported architectures CPU architecture Connected installation Disconnected installation 64-bit x86 [✓] [✓] 64-bit ARM [✓] [✓] ppc64le [✓] [✓] s390x [✓] [✓] 1.2. Understanding Agent-based Installer As an OpenShift Container Platform user, you can leverage the advantages of the Assisted Installer hosted service in disconnected environments. The Agent-based installation comprises a bootable ISO that contains the Assisted discovery agent and the Assisted Service. Both are required to perform the cluster installation, but the latter runs on only one of the hosts. Note Currently, ISO boot support on IBM Z(R) ( s390x ) is available only for Red Hat Enterprise Linux (RHEL) KVM, which provides the flexibility to choose either PXE or ISO-based installation. For installations with z/VM and Logical Partition (LPAR), only PXE boot is supported. The openshift-install agent create image subcommand generates an ephemeral ISO based on the inputs that you provide. You can choose to provide inputs through the following manifests: Preferred: install-config.yaml agent-config.yaml Optional: ZTP manifests cluster-manifests/cluster-deployment.yaml cluster-manifests/agent-cluster-install.yaml cluster-manifests/pull-secret.yaml cluster-manifests/infraenv.yaml cluster-manifests/cluster-image-set.yaml cluster-manifests/nmstateconfig.yaml mirror/registries.conf mirror/ca-bundle.crt 1.2.1. Agent-based Installer workflow One of the control plane hosts runs the Assisted Service at the start of the boot process and eventually becomes the bootstrap host. This node is called the rendezvous host (node 0). The Assisted Service ensures that all the hosts meet the requirements and triggers an OpenShift Container Platform cluster deployment. All the nodes have the Red Hat Enterprise Linux CoreOS (RHCOS) image written to the disk. The non-bootstrap nodes reboot and initiate a cluster deployment. Once the nodes are rebooted, the rendezvous host reboots and joins the cluster. The bootstrapping is complete and the cluster is deployed. Figure 1.1. Node installation workflow You can install a disconnected OpenShift Container Platform cluster through the openshift-install agent create image subcommand for the following topologies: A single-node OpenShift Container Platform cluster (SNO) : A node that is both a master and worker. A three-node OpenShift Container Platform cluster : A compact cluster that has three master nodes that are also worker nodes. 
Highly available OpenShift Container Platform cluster (HA) : Three master nodes with any number of worker nodes. 1.2.2. Recommended resources for topologies Recommended cluster resources for the following topologies: Table 1.2. Recommended cluster resources Topology Number of control plane nodes Number of compute nodes vCPU Memory Storage Single-node cluster 1 0 8 vCPUs 16 GB of RAM 120 GB Compact cluster 3 0 or 1 8 vCPUs 16 GB of RAM 120 GB HA cluster 3 2 and above 8 vCPUs 16 GB of RAM 120 GB In the install-config.yaml , specify the platform on which to perform the installation. The following platforms are supported: baremetal vsphere external none Important For platform none : The none option requires the provision of DNS name resolution and load balancing infrastructure in your cluster. See Requirements for a cluster using the platform "none" option in the "Additional resources" section for more information. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments. Additional resources Requirements for a cluster using the platform "none" option Increase the network MTU Adding worker nodes to single-node OpenShift clusters 1.3. About FIPS compliance For many OpenShift Container Platform customers, regulatory readiness, or compliance, on some level is required before any systems can be put into production. That regulatory readiness can be imposed by national standards, industry standards or the organization's corporate governance framework. Federal Information Processing Standards (FIPS) compliance is one of the most critical components required in highly secure environments to ensure that only supported cryptographic technologies are allowed on nodes. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 1.4. Configuring FIPS through the Agent-based Installer During a cluster deployment, the Federal Information Processing Standards (FIPS) change is applied when the Red Hat Enterprise Linux CoreOS (RHCOS) machines are deployed in your cluster. For Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines. 
You can enable FIPS mode through the preferred method of install-config.yaml and agent-config.yaml : You must set value of the fips field to True in the install-config.yaml file: Sample install-config.yaml.file apiVersion: v1 baseDomain: test.example.com metadata: name: sno-cluster fips: True Optional: If you are using the GitOps ZTP manifests, you must set the value of fips as True in the Agent-install.openshift.io/install-config-overrides field in the agent-cluster-install.yaml file: Sample agent-cluster-install.yaml file apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: annotations: agent-install.openshift.io/install-config-overrides: '{"fips": True}' name: sno-cluster namespace: sno-cluster-test Additional resources OpenShift Security Guide Book Support for FIPS cryptography 1.5. Host configuration You can make additional configurations for each host on the cluster in the agent-config.yaml file, such as network configurations and root device hints. Important For each host you configure, you must provide the MAC address of an interface on the host to specify which host you are configuring. 1.5.1. Host roles Each host in the cluster is assigned a role of either master or worker . You can define the role for each host in the agent-config.yaml file by using the role parameter. If you do not assign a role to the hosts, the roles will be assigned at random during installation. It is recommended to explicitly define roles for your hosts. The rendezvousIP must be assigned to a host with the master role. This can be done manually or by allowing the Agent-based Installer to assign the role. Important You do not need to explicitly define the master role for the rendezvous host, however you cannot create configurations that conflict with this assignment. For example, if you have 4 hosts with 3 of the hosts explicitly defined to have the master role, the last host that is automatically assigned the worker role during installation cannot be configured as the rendezvous host. Sample agent-config.yaml file apiVersion: v1beta1 kind: AgentConfig metadata: name: example-cluster rendezvousIP: 192.168.111.80 hosts: - hostname: master-1 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 - hostname: master-2 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a6 - hostname: master-3 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a7 - hostname: worker-1 role: worker interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a8 1.5.2. About root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 1.3. Subfields Subfield Description deviceName A string containing a Linux device name such as /dev/vda or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. 
vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. If you use the udevadm command to retrieve the wwn value, and the command outputs a value for ID_WWN_WITH_EXTENSION , then you must use this value to specify the wwn subfield. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master rootDeviceHints: deviceName: "/dev/sda" 1.6. About networking The rendezvous IP must be known at the time of generating the agent ISO, so that during the initial boot all the hosts can check in to the assisted service. If the IP addresses are assigned using a Dynamic Host Configuration Protocol (DHCP) server, then the rendezvousIP field must be set to an IP address of one of the hosts that will become part of the deployed control plane. In an environment without a DHCP server, you can define IP addresses statically. In addition to static IP addresses, you can apply any network configuration that is in NMState format. This includes VLANs and NIC bonds. 1.6.1. DHCP Preferred method: install-config.yaml and agent-config.yaml You must specify the value for the rendezvousIP field. The networkConfig fields can be left blank: Sample agent-config.yaml.file apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 1 The IP address for the rendezvous host. 1.6.2. Static networking Preferred method: install-config.yaml and agent-config.yaml Sample agent-config.yaml.file cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: - hostname: master-0 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 2 networkConfig: interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 3 prefix-length: 23 4 dhcp: false dns-resolver: config: server: - 192.168.111.1 5 routes: config: - destination: 0.0.0.0/0 -hop-address: 192.168.111.1 6 -hop-interface: eno1 table-id: 254 EOF 1 If a value is not specified for the rendezvousIP field, one address will be chosen from the static IP addresses specified in the networkConfig fields. 2 The MAC address of an interface on the host, used to determine which host to apply the configuration to. 3 The static IP address of the target bare metal host. 4 The static IP address's subnet prefix for the target bare metal host. 5 The DNS server for the target bare metal host. 6 hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. Optional method: GitOps ZTP manifests The optional method of the GitOps ZTP custom resources comprises 6 custom resources; you can configure static IPs in the nmstateconfig.yaml file. 
apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 1 prefix-length: 23 2 dhcp: false dns-resolver: config: server: - 192.168.122.1 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: eth0 macAddress: 52:54:01:aa:aa:a1 5 1 The static IP address of the target bare metal host. 2 The static IP address's subnet prefix for the target bare metal host. 3 The DNS server for the target bare metal host. 4 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. 5 The MAC address of an interface on the host, used to determine which host to apply the configuration to. The rendezvous IP is chosen from the static IP addresses specified in the config fields. 1.7. Requirements for a cluster using the platform "none" option This section describes the requirements for an Agent-based OpenShift Container Platform installation that is configured to use the platform none option. Important Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments. 1.7.1. Platform "none" DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The control plane and compute machines Reverse DNS resolution is also required for the Kubernetes API, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. The following DNS records are required for an OpenShift Container Platform cluster using the platform none option and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 1.4. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. 
If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Control plane machines <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <worker><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. 1.7.1.1. Example DNS configuration for platform "none" clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform using the platform none option. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a platform "none" cluster The following example is a BIND zone file that shows sample A records for name resolution in a cluster using the platform none option. Example 1.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; master0.ocp4.example.com. IN A 192.168.1.97 4 master1.ocp4.example.com. IN A 192.168.1.98 5 master2.ocp4.example.com. IN A 192.168.1.99 6 ; worker0.ocp4.example.com. IN A 192.168.1.11 7 worker1.ocp4.example.com. IN A 192.168.1.7 8 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. 
In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 5 6 Provides name resolution for the control plane machines. 7 8 Provides name resolution for the compute machines. Example DNS PTR record configuration for a platform "none" cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a cluster using the platform none option. Example 1.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 3 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 4 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 5 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 6 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 7 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 4 5 Provides reverse DNS resolution for the control plane machines. 6 7 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 1.7.2. Platform "none" Load balancing requirements Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note These requirements do not apply to single-node OpenShift clusters using the platform none option. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configure the following ports on both the front and back of the load balancers: Table 1.5. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. 
Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 1.6. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 1.7.2.1. Example load balancer configuration for platform "none" clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for clusters using the platform none option. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 1.3. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 2 bind *:22623 mode tcp server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 3 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 4 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 Port 22623 handles the machine config server traffic and points to the control plane machines. 3 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 4 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 1.8. Example: Bonds and VLAN interface node network configuration The following agent-config.yaml file is an example of a manifest for bond and VLAN interfaces. apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: master0 role: master interfaces: - name: enp0s4 macAddress: 00:21:50:90:c0:10 - name: enp0s5 macAddress: 00:21:50:90:c0:20 networkConfig: interfaces: - name: bond0.300 1 type: vlan 2 state: up vlan: base-iface: bond0 id: 300 ipv4: enabled: true address: - ip: 10.10.10.14 prefix-length: 24 dhcp: false - name: bond0 3 type: bond 4 state: up mac-address: 00:21:50:90:c0:10 5 ipv4: enabled: false ipv6: enabled: false link-aggregation: mode: active-backup 6 options: miimon: "150" 7 port: - enp0s4 - enp0s5 dns-resolver: 8 config: server: - 10.10.10.11 - 10.10.10.12 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.10.10.10 9 next-hop-interface: bond0.300 10 table-id: 254 1 3 Name of the interface. 2 The type of interface. This example creates a VLAN. 4 The type of interface. This example creates a bond. 5 The mac address of the interface. 6 The mode attribute specifies the bonding mode. 7 Specifies the MII link monitoring frequency in milliseconds. This example inspects the bond link every 150 milliseconds. 
8 Optional: Specifies the search and server settings for the DNS server. 9 Next hop address for the node traffic. This must be in the same subnet as the IP address set for the specified interface. 10 Next hop interface for the node traffic. 1.9. Example: Bonds and SR-IOV dual-nic node network configuration Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following agent-config.yaml file is an example of a manifest for dual port NIC with a bond and SR-IOV interfaces: apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: worker-1 interfaces: - name: eno1 macAddress: 0c:42:a1:55:f3:06 - name: eno2 macAddress: 0c:42:a1:55:f3:07 networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false dhcp: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254 1 The networkConfig field contains information about the network configuration of the host, with subfields including interfaces , dns-resolver , and routes . 2 The interfaces field is an array of network interfaces defined for the host. 3 The name of the interface. 4 The type of interface. This example creates an ethernet interface. 5 Set this to false to disable DHCP for the physical function (PF) if it is not strictly required. 6 Set this to the number of SR-IOV virtual functions (VFs) to instantiate. 7 Set this to up . 8 Set this to false to disable IPv4 addressing for the VF attached to the bond. 9 Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps. This value must be less than or equal to the maximum transmission rate. Intel NICs do not support the min-tx-rate parameter. For more information, see BZ#1772847 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. 11 Sets the desired bond mode. 12 Sets the preferred port of the bonding interface. The primary device is the first of the bonding interfaces to be used and is not abandoned unless it fails. 
This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting is only valid when the bonding interface is in active-backup mode (mode 1) and balance-tlb (mode 5). 13 Sets a static IP address for the bond interface. This is the node IP address. 14 Sets bond0 as the gateway for the default route. Additional resources Configuring network bonding 1.10. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - name: worker replicas: 0 3 architecture: amd64 controlPlane: 4 name: master replicas: 1 5 architecture: amd64 metadata: name: sno-cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 8 networkType: OVNKubernetes 9 serviceNetwork: 10 - 172.30.0.0/16 platform: none: {} 11 fips: false 12 pullSecret: '{"auths": ...}' 13 sshKey: 'ssh-ed25519 AAAA...' 14 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 This parameter controls the number of compute machines that the Agent-based installation waits to discover before triggering the installation process. It is the number of compute machines that must be booted with the generated ISO. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 5 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 8 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 10 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 11 You must set the platform to none for a single-node cluster. 
You can set the platform to vsphere , baremetal , or none for multi-node clusters. Note If you set the platform to vsphere or baremetal , you can configure IP address endpoints for cluster nodes in three ways: IPv4 IPv6 IPv4 and IPv6 in parallel (dual-stack) Example of dual-stack networking networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5 12 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 13 This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 14 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 1.11. Validation checks before agent ISO creation The Agent-based Installer performs validation checks on user defined YAML files before the ISO is created. Once the validations are successful, the agent ISO is created. install-config.yaml baremetal , vsphere and none platforms are supported. The networkType parameter must be OVNKubernetes in the case of none platform. apiVIPs and ingressVIPs parameters must be set for bare metal and vSphere platforms. Some host-specific fields in the bare metal platform configuration that have equivalents in agent-config.yaml file are ignored. A warning message is logged if these fields are set. agent-config.yaml Each interface must have a defined MAC address. Additionally, all interfaces must have a different MAC address. At least one interface must be defined for each host. World Wide Name (WWN) vendor extensions are not supported in root device hints. The role parameter in the host object must have a value of either master or worker . 1.11.1. ZTP manifests agent-cluster-install.yaml For IPv6, the only supported value for the networkType parameter is OVNKubernetes . The OpenshiftSDN value can be used only for IPv4. cluster-image-set.yaml The ReleaseImage parameter must match the release defined in the installer. 1.12. steps Installing a cluster Installing a cluster with customizations | [
"apiVersion: v1 baseDomain: test.example.com metadata: name: sno-cluster fips: True",
"apiVersion: extensions.hive.openshift.io/v1beta1 kind: AgentClusterInstall metadata: annotations: agent-install.openshift.io/install-config-overrides: '{\"fips\": True}' name: sno-cluster namespace: sno-cluster-test",
"apiVersion: v1beta1 kind: AgentConfig metadata: name: example-cluster rendezvousIP: 192.168.111.80 hosts: - hostname: master-1 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 - hostname: master-2 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a6 - hostname: master-3 role: master interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a7 - hostname: worker-1 role: worker interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a8",
"- name: master-0 role: master rootDeviceHints: deviceName: \"/dev/sda\"",
"apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1",
"cat > agent-config.yaml << EOF apiVersion: v1alpha1 kind: AgentConfig metadata: name: sno-cluster rendezvousIP: 192.168.111.80 1 hosts: - hostname: master-0 interfaces: - name: eno1 macAddress: 00:ef:44:21:e6:a5 2 networkConfig: interfaces: - name: eno1 type: ethernet state: up mac-address: 00:ef:44:21:e6:a5 ipv4: enabled: true address: - ip: 192.168.111.80 3 prefix-length: 23 4 dhcp: false dns-resolver: config: server: - 192.168.111.1 5 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.111.1 6 next-hop-interface: eno1 table-id: 254 EOF",
"apiVersion: agent-install.openshift.io/v1beta1 kind: NMStateConfig metadata: name: master-0 namespace: openshift-machine-api labels: cluster0-nmstate-label-name: cluster0-nmstate-label-value spec: config: interfaces: - name: eth0 type: ethernet state: up mac-address: 52:54:01:aa:aa:a1 ipv4: enabled: true address: - ip: 192.168.122.2 1 prefix-length: 23 2 dhcp: false dns-resolver: config: server: - 192.168.122.1 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.122.1 4 next-hop-interface: eth0 table-id: 254 interfaces: - name: eth0 macAddress: 52:54:01:aa:aa:a1 5",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; master0.ocp4.example.com. IN A 192.168.1.97 4 master1.ocp4.example.com. IN A 192.168.1.98 5 master2.ocp4.example.com. IN A 192.168.1.99 6 ; worker0.ocp4.example.com. IN A 192.168.1.11 7 worker1.ocp4.example.com. IN A 192.168.1.7 8 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 97.1.168.192.in-addr.arpa. IN PTR master0.ocp4.example.com. 3 98.1.168.192.in-addr.arpa. IN PTR master1.ocp4.example.com. 4 99.1.168.192.in-addr.arpa. IN PTR master2.ocp4.example.com. 5 ; 11.1.168.192.in-addr.arpa. IN PTR worker0.ocp4.example.com. 6 7.1.168.192.in-addr.arpa. IN PTR worker1.ocp4.example.com. 7 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp server master0 master0.ocp4.example.com:6443 check inter 1s server master1 master1.ocp4.example.com:6443 check inter 1s server master2 master2.ocp4.example.com:6443 check inter 1s listen machine-config-server-22623 2 bind *:22623 mode tcp server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 3 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 4 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: master0 role: master interfaces: - name: enp0s4 macAddress: 00:21:50:90:c0:10 - name: enp0s5 macAddress: 00:21:50:90:c0:20 networkConfig: interfaces: - name: bond0.300 1 type: vlan 2 state: up vlan: base-iface: bond0 id: 300 ipv4: enabled: true address: - ip: 10.10.10.14 prefix-length: 24 dhcp: false - name: bond0 3 type: bond 4 state: up mac-address: 00:21:50:90:c0:10 5 ipv4: enabled: false ipv6: enabled: false link-aggregation: mode: active-backup 6 options: miimon: \"150\" 7 port: - enp0s4 - enp0s5 dns-resolver: 8 config: server: - 10.10.10.11 - 10.10.10.12 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.10.10.10 9 next-hop-interface: bond0.300 10 table-id: 254",
"apiVersion: v1alpha1 kind: AgentConfig rendezvousIP: 10.10.10.14 hosts: - hostname: worker-1 interfaces: - name: eno1 macAddress: 0c:42:a1:55:f3:06 - name: eno2 macAddress: 0c:42:a1:55:f3:07 networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false dhcp: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - name: worker replicas: 0 3 architecture: amd64 controlPlane: 4 name: master replicas: 1 5 architecture: amd64 metadata: name: sno-cluster 6 networking: clusterNetwork: - cidr: 10.128.0.0/14 7 hostPrefix: 23 8 networkType: OVNKubernetes 9 serviceNetwork: 10 - 172.30.0.0/16 platform: none: {} 11 fips: false 12 pullSecret: '{\"auths\": ...}' 13 sshKey: 'ssh-ed25519 AAAA...' 14",
"networking: clusterNetwork: - cidr: 172.21.0.0/16 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 machineNetwork: - cidr: 192.168.11.0/16 - cidr: 2001:DB8::/32 serviceNetwork: - 172.22.0.0/16 - fd03::/112 networkType: OVNKubernetes platform: baremetal: apiVIPs: - 192.168.11.3 - 2001:DB8::4 ingressVIPs: - 192.168.11.4 - 2001:DB8::5"
]
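The validation checks described in section 1.11 run automatically when the agent ISO is generated. A minimal sketch of that step, assuming install-config.yaml and agent-config.yaml have already been created and are copied into a working directory (the directory name here is only an example):

$ mkdir ocp-agent
$ cp install-config.yaml agent-config.yaml ocp-agent/
$ openshift-install agent create image --dir ocp-agent

If a validation fails, the installer reports the error and does not create the ISO; on success, the agent ISO (for example, agent.x86_64.iso on x86_64 hosts) is written to the working directory.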
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_an_on-premise_cluster_with_the_agent-based_installer/preparing-to-install-with-agent-based-installer |
Chapter 3. Upgrading a standalone Manager remote database environment | Chapter 3. Upgrading a standalone Manager remote database environment 3.1. Upgrading a Remote Database Environment from Red Hat Virtualization 4.3 to 4.4 Upgrading your environment from 4.3 to 4.4 involves the following steps: Upgrade Considerations When planning to upgrade, see Red Hat Virtualization 4.4 upgrade considerations and known issues . When upgrading from Open Virtual Network (OVN) and Open vSwitch (OvS) 2.11 to OVN 2021 and OvS 2.15, the process is transparent to the user as long as the following conditions are met: The Manager is upgraded first. The ovirt-provider-ovn security groups must be disabled, before the host upgrade, for all OVN networks that are expected to work between hosts with OVN/OvS version 2.11. The hosts are upgraded to match OVN version 2021 or higher and OvS version 2.15. You must complete this step in the Administration Portal, so you can properly reconfigure OVN and refresh the certificates. The host is rebooted after an upgrade. Note To verify whether the provider and OVN were configured successfully on the host, check the OVN configured flag on the General tab for the host. If the OVN Configured is set to No , click Management Refresh Capabilities . This setting is also available in the REST API. If refreshing the capabilities fails, you can configure OVN by reinstalling the host from Manager 4.4 or higher. Make sure you meet the prerequisites, including enabling the correct repositories. Use the Log Collection Analysis tool and Image Discrepancies tool to check for issues that might prevent a successful upgrade. Update the 4.3 Manager to the latest version of 4.3. Upgrade the Manager from 4.3 to 4.4. Upgrade the remote Data Warehouse service and database. Migrate hosts and virtual machines while reducing virtual machine downtime. Optional: Upgrade RHVH while preserving local storage. Update the compatibility version of the clusters. Reboot any running or suspended virtual machines to update their configuration. Update the compatibility version of the data centers. 3.1.1. Prerequisites Plan for any necessary virtual machine downtime. After you update the clusters' compatibility versions during the upgrade, a new hardware configuration is automatically applied to each virtual machine once it reboots. You must reboot any running or suspended virtual machines as soon as possible to apply the configuration changes. Ensure your environment meets the requirements for Red Hat Virtualization 4.4. For a complete list of prerequisites, see the Planning and Prerequisites Guide . When upgrading Red Hat Virtualization Manager, it is recommended that you use one of the existing hosts. If you decide to use a new host, you must assign a unique name to the new host and then add it to the existing cluster before you begin the upgrade procedure. 3.1.2. Analyzing the Environment It is recommended to run the Log Collection Analysis tool and the Image Discrepancies tool prior to performing updates and for troubleshooting. These tools analyze your environment for known issues that might prevent you from performing an update, and provide recommendations to resolve them. 3.1.3. Log Collection Analysis tool Run the Log Collection Analysis tool prior to performing updates and for troubleshooting. The tool analyzes your environment for known issues that might prevent you from performing an update, and provides recommendations to resolve them. 
The tool gathers detailed information about your system and presents it as an HTML file. Prerequisites Ensure the Manager has the correct repositories enabled. For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.3. Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network. Procedure Install the Log Collection Analysis tool on the Manager machine: Run the tool: A detailed report is displayed. By default, the report is saved to a file called analyzer_report.html . To save the file to a specific location, use the --html flag and specify the location: # rhv-log-collector-analyzer --live --html=/ directory / filename .html You can use the ELinks text mode web browser to read the analyzer reports within the terminal. To install the ELinks browser: Launch ELinks and open analyzer_report.html . To navigate the report, use the following commands in ELinks: Insert to scroll up Delete to scroll down PageUp to page up PageDown to page down Left Bracket to scroll left Right Bracket to scroll right 3.1.3.1. Monitoring snapshot health with the image discrepancies tool The RHV Image Discrepancies tool analyzes image data in the Storage Domain and RHV Database. It alerts you if it finds discrepancies in volumes and volume attributes, but does not fix those discrepancies. Use this tool in a variety of scenarios, such as: Before upgrading versions, to avoid carrying over broken volumes or chains to the new version. Following a failed storage operation, to detect volumes or attributes in a bad state. After restoring the RHV database or storage from backup. Periodically, to detect potential problems before they worsen. To analyze a snapshot- or live storage migration-related issues, and to verify system health after fixing these types of problems. Prerequisites Required Versions: this tool was introduced in RHV version 4.3.8 with rhv-log-collector-analyzer-0.2.15-0.el7ev . Because data collection runs simultaneously at different places and is not atomic, stop all activity in the environment that can modify the storage domains. That is, do not create or remove snapshots, edit, move, create, or remove disks. Otherwise, false detection of inconsistencies may occur. Virtual Machines can remain running normally during the process. Procedure To run the tool, enter the following command on the RHV Manager: If the tool finds discrepancies, rerun it to confirm the results, especially if there is a chance some operations were performed while the tool was running. Note This tool includes any Export and ISO storage domains and may report discrepancies for them. If so, these can be ignored, as these storage domains do not have entries for images in the RHV database. Understanding the results The tool reports the following: If there are volumes that appear on the storage but are not in the database, or appear in the database but are not on the storage. If some volume attributes differ between the storage and the database. Sample output: You can now update the Manager to the latest version of 4.3. 3.1.4. Updating the Red Hat Virtualization Manager Prerequisites Ensure the Manager has the correct repositories enabled . For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.3. Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network. 
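For reference, enabling the required repositories on the RHEL 7 based Manager machine typically looks like the following sketch. The repository IDs shown here are assumptions based on a typical RHV 4.3 Manager subscription; confirm them against the list in the linked section before running the command:

# subscription-manager repos \
    --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-supplementary-rpms \
    --enable=rhel-7-server-rhv-4.3-manager-rpms \
    --enable=rhel-7-server-rhv-4-manager-tools-rpms \
    --enable=rhel-7-server-ansible-2-rpms \
    --enable=jb-eap-7.2-for-rhel-7-server-rpms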
Procedure On the Manager machine, check if updated packages are available: Update the setup packages: # yum update ovirt\*setup\* rh\*vm-setup-plugins Update the Red Hat Virtualization Manager with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service. When the script completes successfully, the following message appears: Note The engine-setup script is also used during the Red Hat Virtualization Manager installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup . Important The update process might take some time. Do not stop the process before it completes. Update the base operating system and any optional packages installed on the Manager: Important If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict) . Important If any kernel packages were updated, reboot the machine to complete the update. You can now upgrade the Manager to 4.4. 3.1.5. Upgrading the Red Hat Virtualization Manager from 4.3 to 4.4 Red Hat Virtualization Manager 4.4 is only supported on Red Hat Enterprise Linux versions 8.2 to 8.6. You need to do a clean installation of Red Hat Enterprise Linux 8.6 and Red Hat Virtualization Manager 4.4, even if you are using the same physical machine that you use to run RHV Manager 4.3. The upgrade process requires restoring Red Hat Virtualization Manager 4.3 backup files onto the Red Hat Virtualization Manager 4.4 machine. Prerequisites All data centers and clusters in the environment must have the cluster compatibility level set to version 4.2 or 4.3. All virtual machines in the environment must have the cluster compatibility level set to version 4.3. If you use an external CA to sign HTTPS certificates, follow the steps in Replacing the Red Hat Virtualization Manager CA Certificate in the Administration Guide . The backup and restore include the 3rd-party certificate, so you should be able to log in to the Administration portal after the upgrade. Ensure the CA certificate is added to system-wide trust stores of all clients to ensure the foreign menu of virt-viewer works. See BZ#1313379 for more information. Note Connected hosts and virtual machines can continue to work while the Manager is being upgraded. Procedure Log in to the Manager machine. Back up the Red Hat Virtualization Manager 4.3 environment. # engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log Copy the backup file to a storage device outside of the RHV environment. Install Red Hat Enterprise Linux 8.6. See Performing a standard RHEL installation for more information. Complete the steps to install Red Hat Virtualization Manager 4.4, including running the command yum install rhvm , but do not run engine-setup . See one of the Installing Red Hat Virtualization guides for more information. 
Copy the backup file to the Red Hat Virtualization Manager 4.4 machine and restore it. # engine-backup --mode=restore --file=backup.bck --provision-all-databases Note If the backup contained grants for extra database users, this command creates the extra users with random passwords. You must change these passwords manually if the extra users require access to the restored system. See https://access.redhat.com/articles/2686731 . Ensure the Manager has the correct repositories enabled. For the list of required repositories, see Enabling the Red Hat Virtualization Manager Repositories for Red Hat Virtualization 4.4. Updates to the Red Hat Virtualization Manager are released through the Content Delivery Network. Install optional extension packages if they were installed on the Red Hat Virtualization Manager 4.3 machine. # yum install ovirt-engine-extension-aaa-ldap ovirt-engine-extension-aaa-misc Note The ovirt-engine-extension-aaa-ldap is deprecated. For new installations, use Red Hat Single Sign On. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide . Note The configuration for these package extensions must be manually reapplied because they are not migrated as part of the backup and restore process. Configure the Manager by running the engine-setup command: # engine-setup Decommission the Red Hat Virtualization Manager 4.3 machine if a different machine is used for Red Hat Virtualization Manager 4.4. Two different Managers must not manage the same hosts or storage. The Red Hat Virtualization Manager 4.4 is now installed, with the cluster compatibility version set to 4.2 or 4.3, whichever was the preexisting cluster compatibility version. Now you need to upgrade the remote databases in your environment. Note 'engine-setup' also stops the Data Warehouse service on the remote Data Warehouse machine. If you intend to postpone the parts of this procedure, log in to the Data Warehouse machine and start the Data Warehouse service: # systemctl start ovirt-engine-dwhd.service Additional resources Installing Red Hat Virtualization as a standalone Manager with local databases Installing Red Hat Virtualization as a standalone Manager with remote databases 3.1.6. Upgrading the remote Data Warehouse service and database Run this procedure on the remote machine with the Data Warehouse service and database. Notice that part of this procedure requires you to install Red Hat Enterprise Linux 8.6, or Red Hat Virtualization Host 4.4. Prerequisites You are logged in to the Data Warehouse machine. A storage device outside the RHV environment. Procedure Back up the Data Warehouse machine. Note Grafana is not supported on RHV 4.3, but on RHV 4.4, this command also includes the Grafana service and the Grafana database. # engine-backup --file= <backupfile> Copy the backup file to a storage device. Stop and disable the Data Warehouse service: # systemctl stop ovirt-engine-dwhd # systemctl disable ovirt-engine-dwhd Reinstall the Data Warehouse machine with Red Hat Enterprise Linux 8.6, or Red Hat Virtualization Host 4.4. Prepare a PostgreSQL database. For information, see Preparing a Remote PostgreSQL Database in Installing Red Hat Virtualization as a standalone Manager with remote databases . Enable the correct repositories on the server and install the Data Warehouse service. For detailed instructions, see Installing and Configuring Data Warehouse on a Separate Machine for Red Hat Virtualization 4.4. 
Complete the steps in that procedure up to and including the dnf install ovirt-engine-dwh-setup command. Then continue to the next step in this procedure. Copy the backup file from the storage device to the Data Warehouse machine. Restore the backup file: # engine-backup --mode=restore --file=backup.bck --provision-all-databases On the Data Warehouse machine, run the engine-setup command: # engine-setup On the Manager machine, restart the Manager to connect it to the Data Warehouse database: # systemctl restart ovirt-engine Additional resources Performing a standard RHEL installation Installing Hosts for Red Hat Virtualization in Installing Red Hat Virtualization as a standalone Manager with remote databases You can now update the hosts. 3.1.7. Migrating hosts and virtual machines from RHV 4.3 to 4.4 You can migrate hosts and virtual machines from Red Hat Virtualization 4.3 to 4.4 such that you minimize the downtime of virtual machines in your environment. This process requires migrating all virtual machines from one host so as to make that host available to upgrade to RHV 4.4. After the upgrade, you can reattach the host to the Manager. Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. Note CPU-passthrough virtual machines might not migrate properly from RHV 4.3 to RHV 4.4. RHV 4.3 and RHV 4.4 are based on RHEL 7 and RHEL 8, respectively, which have different kernel versions with different CPU flags and microcodes. This can cause problems in migrating CPU-passthrough virtual machines. Prerequisites Hosts for RHV 4.4 require Red Hat Enterprise Linux versions 8.2 to 8.6. A clean installation of Red Hat Enterprise Linux 8.6, or Red Hat Virtualization Host 4.4 is required, even if you are using the same physical machine that you use to run hosts for RHV 4.3. Red Hat Virtualization Manager 4.4 is installed and running. The compatibility level of the data center and cluster to which the hosts belong is set to 4.2 or 4.3. All data centers and clusters in the environment must have the cluster compatibility level set to version 4.2 or 4.3 before you start the procedure. Procedure Pick a host to upgrade and migrate that host's virtual machines to another host in the same cluster. You can use Live Migration to minimize virtual machine downtime. For more information, see Migrating Virtual Machines Between Hosts in the Virtual Machine Management Guide . Put the host into maintenance mode and remove the host from the Manager. For more information, see Removing a Host in the Administration Guide . Install Red Hat Enterprise Linux 8.6, or RHVH 4.4. For more information, see Installing Hosts for Red Hat Virtualization in one of the Installing Red Hat Virtualization guides. Install the appropriate packages to enable the host for RHV 4.4. For more information, see Installing Hosts for Red Hat Virtualization in one of the Installing Red Hat Virtualization guides. Add this host to the Manager, assigning it to the same cluster. You can now migrate virtual machines onto this host. For more information, see Adding Standard Hosts to the Manager in one of the Installing Red Hat Virtualization guides. Repeat these steps to migrate virtual machines and upgrade hosts for the rest of the hosts in the same cluster, one by one, until all are running Red Hat Virtualization 4.4. 
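After a reinstalled host has been added back to the Manager, one way to confirm that it is up again before migrating virtual machines onto it is to query the Manager REST API. This is only a sketch; the Manager FQDN and the admin@internal password are placeholders:

# curl -s -k -u admin@internal:<password> \
    -H 'Accept: application/json' \
    'https://manager.example.com/ovirt-engine/api/hosts' \
    | python3 -m json.tool | grep -E '"(name|status)"'

Each upgraded host should report a status of up; the same information is shown in the Administration Portal under Compute > Hosts.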
Additional resources Installing Red Hat Virtualization as a self-hosted engine using the command line Installing Red Hat Virtualization as a standalone Manager with local databases Installing Red Hat Virtualization as a standalone Manager with remote databases 3.1.8. Upgrading RHVH while preserving local storage Environments with local storage cannot migrate virtual machines to a host in another cluster because the local storage is not shared with other storage domains. To upgrade RHVH 4.3 hosts that have a local storage domain, reinstall the host while preserving the local storage, create a new local storage domain in the 4.4 environment, and import the local storage into the new domain. Prerequisites Red Hat Virtualization Manager 4.4 is installed and running. The compatibility level of the data center and cluster to which the host belongs is set to 4.2 or 4.3. Procedure Ensure that the local storage on the RHVH 4.3 host's local storage is in maintenance mode before starting this process. Complete these steps: Open the Data Centers tab. Click the Storage tab in the Details pane and select the storage domain in the results list. Click Maintenance . Reinstall the Red Hat Virtualization Host, as described in Installing Red Hat Virtualization Host in the Installation Guide . Important When selecting the device on which to install RHVH from the Installation Destination screen, do not select the device(s) storing the virtual machines. Only select the device where the operating system should be installed. If you are using Kickstart to install the host, ensure that you preserve the devices containing the virtual machines by adding the following to the Kickstart file, replacing `device` with the relevant device. # clearpart --all --drives= device For more information on using Kickstart, see Kickstart references in Red Hat Enterprise Linux 8 Performing an advanced RHEL installation . On the reinstalled host, create a directory, for example /data in which to recover the environment. # mkdir /data Mount the local storage in the new directory. In our example, /dev/sdX1 is the local storage: # mount /dev/sdX1 /data Set the following permissions for the new directory. # chown -R 36:36 /data # chmod -R 0755 /data Red Hat recommends that you also automatically mount the local storage via /etc/fstab in case the server requires a reboot: # blkid | grep -i sdX1 /dev/sdX1: UUID="a81a6879-3764-48d0-8b21-2898c318ef7c" TYPE="ext4" # vi /etc/fstab UUID="a81a6879-3764-48d0-8b21-2898c318ef7c" /data ext4 defaults 0 0 In the Administration Portal, create a data center and select Local in the Storage Type drop-down menu. Configure a cluster on the new data center. See Creating a New Cluster in the Administration Guide for more information. Add the host to the Manager. See Adding Standard Hosts to the Red Hat Virtualization Manager in one of the Installing Red Hat Virtualization guides for more information. On the host, create a new directory that will be used to create the initial local storage domain. For example: # mkdir -p /localfs # chown 36:36 /localfs # chmod -R 0755 /localfs In the Administration Portal, open the Storage tab and click New Domain to create a new local storage domain. Set the name to localfs and set the path to /localfs . Once the local storage is active, click Import Domain and set the domain's details. For example, define Data as the name, Local on Host as the storage type and /data as the path. 
Click OK to confirm the message that appears informing you that storage domains are already attached to the data center. Activate the new storage domain: Open the Data Centers tab. Click the Storage tab in the details pane and select the new data storage domain in the results list. Click Activate . Once the new storage domain is active, import the virtual machines and their disks: In the Storage tab, select data . Select the VM Import tab in the details pane, select the virtual machines and click Import . See Importing Virtual Machines from a Data Domain in the Virtual Machine Management Guide for more details. Once you have ensured that all virtual machines have been successfully imported and are functioning properly, you can move localfs to maintenance mode. Click the Storage tab and select localfs from the results list. Click the Data Center tab in the details pane. Click Maintenance, then click OK to move the storage domain to maintenance mode. Click Detach . The Detach Storage confirmation window opens. Click OK . You have now upgraded the host to version 4.4, created a new local storage domain, and imported the 4.3 storage domain and its virtual machines. You can now update the cluster compatibility version. 3.1.9. Changing the Cluster Compatibility Version Red Hat Virtualization clusters have a compatibility version. The cluster compatibility version indicates the features of Red Hat Virtualization supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster. Prerequisites To change the cluster compatibility level, you must first update all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon to the host indicating an update is available. Limitations Virtio NICs are enumerated as a different device after upgrading the cluster compatibility level to 4.6. Therefore, the NICs might need to be reconfigured. Red Hat recommends that you test the virtual machines before you upgrade the cluster by setting the cluster compatibility level to 4.6 on the virtual machine and verifying the network connection. If the network connection for the virtual machine fails, configure the virtual machine with a custom emulated machine that matches the current emulated machine, for example pc-q35-rhel8.3.0 for 4.5 compatibility version, before upgrading the cluster. Procedure In the Administration Portal, click Compute Clusters . Select the cluster to change and click Edit . On the General tab, change the Compatibility Version to the desired value. Click OK . The Change Cluster Compatibility Version confirmation dialog opens. Click OK to confirm. Important An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine's configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version. You can now update the cluster compatibility version for virtual machines in the cluster. 3.1.10. 
Changing Virtual Machine Cluster Compatibility After updating a cluster's compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, or from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon. Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Any virtual machine that has not been rebooted runs with the previous configuration, and subsequent configuration changes made to the virtual machine might overwrite its pending cluster compatibility changes. Procedure In the Administration Portal, click Compute Virtual Machines . Check which virtual machines require a reboot. In the Vms: search bar, enter the following query: next_run_config_exists=True The search results show all virtual machines with pending changes. Select each virtual machine and click Restart . Alternatively, if necessary, you can reboot a virtual machine from within the virtual machine itself. When the virtual machine starts, the new compatibility version is automatically applied. Note You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview. You can now update the data center compatibility version. 3.1.11. Changing the Data Center Compatibility Version Red Hat Virtualization data centers have a compatibility version. The compatibility version indicates the version of Red Hat Virtualization with which the data center is intended to be compatible. All clusters in the data center must support the desired compatibility level. Prerequisites To change the data center compatibility level, you must first update the compatibility version of all clusters and virtual machines in the data center. Procedure In the Administration Portal, click Compute Data Centers . Select the data center to change and click Edit . Change the Compatibility Version to the desired value. Click OK . The Change Data Center Compatibility Version confirmation dialog opens. Click OK to confirm. | [
"yum install rhv-log-collector-analyzer",
"rhv-log-collector-analyzer --live",
"rhv-log-collector-analyzer --live --html=/ directory / filename .html",
"yum install -y elinks",
"elinks /home/user1/analyzer_report.html",
"rhv-image-discrepancies",
"Checking storage domain c277ad93-0973-43d9-a0ca-22199bc8e801 Looking for missing images No missing images found Checking discrepancies between SD/DB attributes image ef325650-4b39-43cf-9e00-62b9f7659020 has a different attribute capacity on storage(2696984576) and on DB(2696986624) image 852613ce-79ee-4adc-a56a-ea650dcb4cfa has a different attribute capacity on storage(5424252928) and on DB(5424254976) Checking storage domain c64637b4-f0e8-408c-b8af-6a52946113e2 Looking for missing images No missing images found Checking discrepancies between SD/DB attributes No discrepancies found",
"engine-upgrade-check",
"yum update ovirt\\*setup\\* rh\\*vm-setup-plugins",
"engine-setup",
"Execution of setup completed successfully",
"yum update --nobest",
"engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log",
"engine-backup --mode=restore --file=backup.bck --provision-all-databases",
"yum install ovirt-engine-extension-aaa-ldap ovirt-engine-extension-aaa-misc",
"engine-setup",
"systemctl start ovirt-engine-dwhd.service",
"engine-backup --file= <backupfile>",
"systemctl stop ovirt-engine-dwhd systemctl disable ovirt-engine-dwhd",
"engine-backup --mode=restore --file=backup.bck --provision-all-databases",
"engine-setup",
"systemctl restart ovirt-engine",
"clearpart --all --drives= device",
"mkdir /data",
"mount /dev/sdX1 /data",
"chown -R 36:36 /data chmod -R 0755 /data",
"blkid | grep -i sdX1 /dev/sdX1: UUID=\"a81a6879-3764-48d0-8b21-2898c318ef7c\" TYPE=\"ext4\" vi /etc/fstab UUID=\"a81a6879-3764-48d0-8b21-2898c318ef7c\" /data ext4 defaults 0 0",
"mkdir -p /localfs chown 36:36 /localfs chmod -R 0755 /localfs",
"next_run_config_exists=True"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/upgrade_guide/upgrading-standalone-engine-remote-database-environment |
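For the /etc/fstab step in the local storage recovery above, the UUID can also be captured and appended non-interactively instead of editing the file by hand. This is a minimal sketch only; it assumes the /dev/sdX1 device and /data mount point used in the example, so substitute your own device name before running it.
UUID=$(blkid -s UUID -o value /dev/sdX1)                                 # read the filesystem UUID of the local storage device
grep -q "$UUID" /etc/fstab || echo "UUID=\"$UUID\" /data ext4 defaults 0 0" >> /etc/fstab   # add the entry only if it is not already present
mount -a                                                                 # confirm the new entry mounts without errors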
Chapter 9. Application credentials | Chapter 9. Application credentials Use Application Credentials to avoid the practice of embedding user account credentials in configuration files. Instead, the user creates an Application Credential that receives delegated access to a single project and has its own distinct secret. The user can also limit the delegated privileges to a single role in that project. This allows you to adopt the principle of least privilege, where the authenticated service gains access only to the one project and role that it needs to function, rather than all projects and roles. You can use this methodology to consume an API without revealing your user credentials, and applications can authenticate to Keystone without requiring embedded user credentials. You can use Application Credentials to generate tokens and configure keystone_authtoken settings for applications. These use cases are described in the following sections. Note The Application Credential is dependent on the user account that created it, so it will terminate if that account is ever deleted, or loses access to the relevant role. 9.1. Using Application Credentials to generate tokens Application Credentials are available to users as a self-service function in the dashboard. This example demonstrates how a user can create an Application Credential and then use it to generate a token. Create a test project, and test user accounts: Create a project called AppCreds : Create a user called AppCredsUser : Grant AppCredsUser access to the member role for the AppCreds project: Log in to the dashboard as AppCredsUser and create an Application Credential: Overview Identity Application Credentials +Create Application Credential . Note Ensure that you download the clouds.yaml file contents, because you cannot access it again after you close the pop-up window titled Your Application Credential . Create a file named /home/stack/.config/openstack/clouds.yaml using the CLI and paste the contents of the clouds.yaml file. Note These values will be different for your deployment. Use the Application Credential to generate a token. You must not be sourced as any specific user when using the following command, and you must be in the same directory as your clouds.yaml file. Note If you receive an error similar to init () got an unexpected keyword argument 'application_credential_secret' , then you might still be sourced to the credentials. For a fresh environment, run sudo su - stack . 9.2. Integrating Application Credentials with applications Application Credentials can be used to authenticate applications to keystone. When you use Application Credentials, the keystone_authtoken settings use v3applicationcredential as the authentication type and contain the credentials that you receive during the credential creation process. Enter the following values: application_credential_secret : The Application Credential secret. application_credential_id : The Application Credential id. (Optional) application_credential_name : You might use this parameter if you use a named application credential, rather than an ID. For example: 9.3. Managing Application Credentials You can use the command line to create and delete Application Credentials. The create subcommand creates an application credential based on the currently sourced account. 
For example, creating the credential when sourced as an admin user will grant the same roles to the Application Credential: Warning Using the --unrestricted parameter enables the application credential to create and delete other application credentials and trusts. This is potentially dangerous behavior and is disabled by default. You cannot use the --unrestricted parameter in combination with other access rules. By default, the resulting role membership includes all the roles assigned to the account that created the credentials. You can limit the role membership by delegating access only to a specific role: To delete an Application Credential: 9.4. Replacing Application Credentials Application credentials are bound to the user account that created them and become invalid if the user account is ever deleted, or if the user loses access to the delegated role. As a result, you should be prepared to generate a new application credential as needed. Replacing existing application credentials for configuration files Update the application credentials assigned to an application (using a configuration file): Create a new set of application credentials. Add the new credentials to the application configuration file, replacing the existing credentials. For more information, see Integrating Application Credentials with applications . Restart the application service to apply the change. Delete the old application credential, if appropriate. For more information about the command line options, see Managing Application Credentials . Replacing the existing application credentials in clouds.yaml When you replace an application credential used by clouds.yaml , you must create the replacement credentials using OpenStack user credentials. By default, you cannot use application credentials to create another set of application credentials. The openstack application credential create command creates an application credential based on the currently sourced account. Authenticate as the OpenStack user that originally created the authentication credentials that are about to expire. For example, if you used the procedure Using Application Credentials to generate tokens , you must log in again as AppCredsUser . Create an Application Credential called AppCred2 . This can be done using the OpenStack Dashboard, or the openstack CLI interface: Copy the id and secret parameters from the output of the command. The secret parameter value cannot be accessed again. Replace the application_credential_id and application_credential_secret parameter values in the USD{HOME}/.config/openstack/clouds.yaml file with the secret and id values that you copied. Verification Generate a token with clouds.yaml to confirm that the credentials are working as expected. You must not be sourced as any specific user when using the following command, and you must be in the same directory as your clouds.yaml file: Example output: | [
"openstack project create AppCreds",
"openstack user create --project AppCreds --password-prompt AppCredsUser",
"openstack role add --user AppCredsUser --project AppCreds member",
"This is a clouds.yaml file, which can be used by OpenStack tools as a source of configuration on how to connect to a cloud. If this is your only cloud, just put this file in ~/.config/openstack/clouds.yaml and tools like python-openstackclient will just work with no further config. (You will need to add your password to the auth section) If you have more than one cloud account, add the cloud entry to the clouds section of your existing file and you can refer to them by name with OS_CLOUD=openstack or --os-cloud=openstack clouds: openstack: auth: auth_url: http://10.0.0.10:5000/v3 application_credential_id: \"6d141f23732b498e99db8186136c611b\" application_credential_secret: \"<example secret value>\" region_name: \"regionOne\" interface: \"public\" identity_api_version: 3 auth_type: \"v3applicationcredential\"",
"[stack@undercloud-0 openstack]USD openstack --os-cloud=openstack token issue +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | expires | 2018-08-29T05:37:29+0000 | | id | gAAAAABbhiMJ4TxxFlTMdsYJpfStsGotPrns0lnpvJq9ILtdi-NKqisWBeNiJlUXwmnoGQDh2CMyK9OeTsuEXnJNmFfKjxiHWmcQVYzAhMKo6_QMUtu_Qm6mtpzYYHBrUGboa_Ay0LBuFDtsjtgtvJ-r8G3TsJMowbKF-yo--O_XLhERU_QQVl3hl8zmMRdmLh_P9Cbhuolt | | project_id | 1a74eabbf05c41baadd716179bb9e1da | | user_id | ef679eeddfd14f8b86becfd7e1dc84f2 | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+",
"[keystone_authtoken] auth_url = http://10.0.0.10:5000/v3 auth_type = v3applicationcredential application_credential_id = \"6cb5fa6a13184e6fab65ba2108adf50c\" application_credential_secret = \"<example password>\"",
"openstack application credential create --description \"App Creds - All roles\" AppCredsUser +--------------+----------------------------------------------------------------------------------------+ | Field | Value | +--------------+----------------------------------------------------------------------------------------+ | description | App Creds - All roles | | expires_at | None | | id | fc17651c2c114fd6813f86fdbb430053 | | name | AppCredsUser | | project_id | 507663d0cfe244f8bc0694e6ed54d886 | | roles | member reader admin | | secret | fVnqa6I_XeRDDkmQnB5lx361W1jHtOtw3ci_mf_tOID-09MrPAzkU7mv-by8ykEhEa1QLPFJLNV4cS2Roo9lOg | | unrestricted | False | +--------------+----------------------------------------------------------------------------------------+",
"openstack application credential create --description \"App Creds - Member\" --role member AppCredsUser +--------------+----------------------------------------------------------------------------------------+ | Field | Value | +--------------+----------------------------------------------------------------------------------------+ | description | App Creds - Member | | expires_at | None | | id | e21e7f4b578240f79814085a169c9a44 | | name | AppCredsUser | | project_id | 507663d0cfe244f8bc0694e6ed54d886 | | roles | member | | secret | XCLVUTYIreFhpMqLVB5XXovs_z9JdoZWpdwrkaG1qi5GQcmBMUFG7cN2htzMlFe5T5mdPsnf5JMNbu0Ih-4aCg | | unrestricted | False | +--------------+----------------------------------------------------------------------------------------+",
"openstack application credential delete AppCredsUser",
"openstack application credential create --description \"App Creds 2 - Member\" --role member AppCred2",
"[stack@undercloud-0 openstack]USD openstack --os-cloud=openstack token issue",
"+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | expires | 2018-08-29T05:37:29+0000 | | id | gAAAAABbhiMJ4TxxFlTMdsYJpfStsGotPrns0lnpvJq9ILtdi-NKqisWBeNiJlUXwmnoGQDh2CMyK9OeTsuEXnJNmFfKjxiHWmcQVYzAhMKo6_QMUtu_Qm6mtpzYYHBrUGboa_Ay0LBuFDtsjtgtvJ-r8G3TsJMowbKF-yo--O_XLhERU_QQVl3hl8zmMRdmLh_P9Cbhuolt | | project_id | 1a74eabbf05c41baadd716179bb9e1da | | user_id | ef679eeddfd14f8b86becfd7e1dc84f2 | +------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/users_and_identity_management_guide/assembly_application-credentials |
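As an alternative to clouds.yaml, the same application credential can be exported as standard keystoneauth environment variables before calling the openstack client. The sketch below assumes your shell is not already sourced as another user; the ID, secret, and auth URL values are the placeholders from the clouds.yaml example above, so replace them with your own.
export OS_AUTH_TYPE=v3applicationcredential
export OS_AUTH_URL=http://10.0.0.10:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_APPLICATION_CREDENTIAL_ID="6d141f23732b498e99db8186136c611b"
export OS_APPLICATION_CREDENTIAL_SECRET="<example secret value>"
openstack token issue    # should return a token, as in the clouds.yaml example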
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/providing-feedback-on-red-hat-documentation_common |
Chapter 7. Camel K command reference | Chapter 7. Camel K command reference This chapter provides reference details on the Camel K command line interface (CLI), and provides examples of using the kamel command. This chapter also provides reference details on Camel K modeline options that you can specify in a Camel K integration source file, which are executed at runtime. This chapter includes the following sections: Section 7.1, "Camel K command line" Section 7.2, "Camel K modeline options" 7.1. Camel K command line The Camel K CLI provides the kamel command as the main entry point for running Camel K integrations on OpenShift. 7.1.1. Supported commands Note the following key: Symbol Description ✔ Supported ❌ Unsupported or not yet supported Table 7.1. kamel commands Name Supported Description Example bind ✔ Bind Kubernetes resources such as Kamelets, in an integration flow, to Knative channels, Kafka topics, or any other endpoint. kamel bind telegram-source -p "source.authorizationToken=The Token" channel:mychannel completion ❌ Generate completion scripts. kamel completion bash debug ❌ Debug a remote integration using a local debugger. kamel debug my-integration delete ✔ Delete an integration deployed on OpenShift. kamel delete my-integration describe ✔ Get detailed information on a Camel K resource. This includes an integration , kit , or platform . kamel describe integration my-integration get ✔ Get the status of integrations deployed on OpenShift. kamel get help ✔ Get the full list of available commands. You can enter --help as a parameter to each command for more details. kamel help kamel run --help install ❌ Install Camel K on an OpenShift cluster. Note: It is recommended that you use the OpenShift Camel K Operator to install and uninstall Camel K. kamel install kit ❌ Configure an Integration Kit. kamel kit create my-integration --secret log ✔ Print the logs of a running integration. kamel log my-integration promote ✔ You can move an integration from one namespace to another. kamel promote rebuild ✔ Clear the state of one or more integrations causing a rebuild. kamel rebuild my-integration reset ✔ Reset the current Camel K installation. kamel reset run ✔ Run an integration on OpenShift. kamel run MyIntegration.java uninstall ❌ Uninstall Camel K from an OpenShift cluster. Note: It is recommended that you use the OpenShift Camel K Operator to install and uninstall Camel K. kamel uninstall version ✔ Display Camel-K client version. kamel version Additional resources See Installing Camel K 7.2. Camel K modeline options You can use the Camel K modeline to enter configuration options in a Camel K integration source file, which are executed at runtime, for example, using kamel run MyIntegration.java . For more details, see Running Camel K integrations using modeline . All options that are available for the kamel run command, you can specify as modeline options. The following table describes some of the most commonly-used modeline options. Table 7.2. Camel K modeline options Option Description build-property Add a build-time property or build-time properties file. Syntax: [my-key=my-value|file:/path/to/my-conf.properties] config Add a runtime configuration from a Configmap, Secret, or file Syntax: [configmap|secret|file]:name[/key] - name represents the local file path or the ConfigMap/Secret name. - key optionally represents the ConfigMap/Secret key to be filtered. 
dependency Include an external library (for example, a Maven dependency) Example: dependency=mvn:org.my:app:1.0 env Set an environment variable in the integration container. For example, env=MY_ENV_VAR=my-value . label Add a label for the integration. For example, label=my.company=hello . name Add an integration name. For example, name=my-integration . open-api Add an OpenAPI v2 specification. For example, open-api=path/to/my-hello-api.json . profile Set the Camel K trait profile used for deployment. For example, openshift . property Add a runtime property or a runtime properties file. Syntax: [my-key=my-value|file:/path/to/my-conf.properties] resource Add a run-time resource from a ConfigMap, Secret or file Syntax: [configmap|secret|file]:name[/key][@path] - name represents the local file path or the ConfigMap/Secret name - key (optional) represents the ConfigMap or Secret key to be filtered - path (optional) represents the destination path trait Configure a Camel K feature or core capability in a trait. For example, trait=service.enabled=false . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/developing_and_managing_integrations_using_camel_k/camel-k-command-reference |
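Because every kamel run option can also be written as a modeline option, the table above maps directly onto command-line flags. The sketch below combines several of the listed options in one invocation; the flag spellings mirror the option names in the table, so verify them against kamel run --help for your client version before relying on them.
kamel run MyIntegration.java \
  --name my-integration \
  --dependency mvn:org.my:app:1.0 \
  --property my-key=my-value \
  --env MY_ENV_VAR=my-value \
  --trait service.enabled=false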
Chapter 6. Installing the Migration Toolkit for Containers | Chapter 6. Installing the Migration Toolkit for Containers You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 3 and 4. After you install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.7 by using the Operator Lifecycle Manager, you manually install the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3. By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a source cluster or on a remote cluster . After you have installed MTC, you must configure an object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 6.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. Table 6.1. MTC compatibility: Migrating from a legacy platform OpenShift Container Platform 4.5 or earlier OpenShift Container Platform 4.6 or later Stable MTC version MTC 1.7. z Legacy 1.7 operator: Install manually with the operator.yml file. Important This cluster cannot be the control cluster. MTC 1.7. z Install with OLM, release channel release-v1.7 Note Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC 1.7, if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 6.2. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform 3. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. You must create an image stream secret and copy it to each node in the cluster. 
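One way to satisfy the last prerequisite is to create a docker-registry secret from your registry.redhat.io login so that the cluster can pull the legacy operator images. The secret name and target namespace below are illustrative assumptions, not values mandated by MTC; follow the documented image stream secret procedure for your cluster when distributing it to each node.
oc create secret docker-registry redhat-registry-pull-secret \
  --docker-server=registry.redhat.io \
  --docker-username=<customer_portal_user> \
  --docker-password=<customer_portal_password> \
  -n openshift    # namespace is an assumption; adjust for your cluster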
Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD sudo podman login registry.redhat.io Download the operator.yml file by entering the following command: USD sudo podman cp USD(sudo podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: USD sudo podman cp USD(sudo podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Log in to your source cluster. Verify that the cluster can authenticate with registry.redhat.io : USD oc run test --image registry.redhat.io/ubi8 --command sleep infinity Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 6.3. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.7 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.7 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 6.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.7, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 6.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. 
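Because MTC on OpenShift Container Platform 4.2 to 4.7 inherits the cluster-wide proxy settings, it can be useful to inspect that object before deciding whether to override anything in the MigrationController CR. A minimal check, assuming a cluster-wide proxy object exists on your cluster:
oc get proxy cluster -o yaml    # review httpProxy, httpsProxy, and noProxy before overriding them for MTC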
DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 6.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 6.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 6.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 6.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 6.4.2.1. 
NetworkPolicy configuration 6.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 6.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 6.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 6.4.2.3. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 6.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 6.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. 
This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 6.5. Configuring a replication repository You must configure an object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. The following storage providers are supported: Multicloud Object Gateway Amazon Web Services S3 Google Cloud Platform Microsoft Azure Blob Generic S3 object storage, for example, Minio or Ceph S3 6.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 6.5.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials and S3 endpoint in order to configure MCG as a replication repository for the Migration Toolkit for Containers (MTC). You must retrieve the Multicloud Object Gateway (MCG) credentials in order to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). MCG is a component of OpenShift Container Storage. Prerequisites You must deploy OpenShift Container Storage by using the appropriate OpenShift Container Storage deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. You use these credentials to add MCG as a replication repository. 6.5.3. Configuring Amazon Web Services You configure Amazon Web Services (AWS) S3 object storage as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the AWS CLI installed. The AWS S3 storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: You must have access to EC2 Elastic Block Storage (EBS). The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Set the BUCKET variable: USD BUCKET=<your_bucket> Set the REGION variable: USD REGION=<your_region> Create an AWS S3 bucket: USD aws s3api create-bucket \ --bucket USDBUCKET \ --region USDREGION \ --create-bucket-configuration LocationConstraint=USDREGION 1 1 us-east-1 does not support a LocationConstraint . If your region is us-east-1 , omit --create-bucket-configuration LocationConstraint=USDREGION . Create an IAM user: USD aws iam create-user --user-name velero 1 1 If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster. 
Create a velero-policy.json file: USD cat > velero-policy.json <<EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeVolumes", "ec2:DescribeSnapshots", "ec2:CreateTags", "ec2:CreateVolume", "ec2:CreateSnapshot", "ec2:DeleteSnapshot" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:DeleteObject", "s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation", "s3:ListBucketMultipartUploads" ], "Resource": [ "arn:aws:s3:::USD{BUCKET}" ] } ] } EOF Attach the policies to give the velero user the necessary permissions: USD aws iam put-user-policy \ --user-name velero \ --policy-name velero \ --policy-document file://velero-policy.json Create an access key for the velero user: USD aws iam create-access-key --user-name velero Example output { "AccessKey": { "UserName": "velero", "Status": "Active", "CreateDate": "2017-07-31T22:24:41.576Z", "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>, "AccessKeyId": <AWS_ACCESS_KEY_ID> } } Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID . You use the credentials to add AWS as a replication repository. 6.5.4. Configuring Google Cloud Platform You configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the gcloud and gsutil CLI tools installed. See the Google cloud documentation for details. The GCP storage bucket must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to GCP: USD gcloud auth login Set the BUCKET variable: USD BUCKET=<bucket> 1 1 Specify your bucket name. 
Create the storage bucket: USD gsutil mb gs://USDBUCKET/ Set the PROJECT_ID variable to your active project: USD PROJECT_ID=USD(gcloud config get-value project) Create a service account: USD gcloud iam service-accounts create velero \ --display-name "Velero service account" List your service accounts: USD gcloud iam service-accounts list Set the SERVICE_ACCOUNT_EMAIL variable to match its email value: USD SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list \ --filter="displayName:Velero service account" \ --format 'value(email)') Attach the policies to give the velero user the necessary permissions: USD ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get ) Create the velero.server custom role: USD gcloud iam roles create velero.server \ --project USDPROJECT_ID \ --title "Velero Server" \ --permissions "USD(IFS=","; echo "USD{ROLE_PERMISSIONS[*]}")" Add IAM policy binding to the project: USD gcloud projects add-iam-policy-binding USDPROJECT_ID \ --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL \ --role projects/USDPROJECT_ID/roles/velero.server Update the IAM service account: USD gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET} Save the IAM service account keys to the credentials-velero file in the current directory: USD gcloud iam service-accounts keys create credentials-velero \ --iam-account USDSERVICE_ACCOUNT_EMAIL You use the credentials-velero file to add GCP as a replication repository. 6.5.5. Configuring Microsoft Azure You configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC). Prerequisites You must have the Azure CLI installed. The Azure Blob storage container must be accessible to the source and target clusters. If you are using the snapshot copy method: The source and target clusters must be in the same region. The source and target clusters must have the same storage class. The storage class must be compatible with snapshots. Procedure Log in to Azure: USD az login Set the AZURE_RESOURCE_GROUP variable: USD AZURE_RESOURCE_GROUP=Velero_Backups Create an Azure resource group: USD az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1 1 Specify your location. 
Set the AZURE_STORAGE_ACCOUNT_ID variable: USD AZURE_STORAGE_ACCOUNT_ID="veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')" Create an Azure storage account: USD az storage account create \ --name USDAZURE_STORAGE_ACCOUNT_ID \ --resource-group USDAZURE_RESOURCE_GROUP \ --sku Standard_GRS \ --encryption-services blob \ --https-only true \ --kind BlobStorage \ --access-tier Hot Set the BLOB_CONTAINER variable: USD BLOB_CONTAINER=velero Create an Azure Blob storage container: USD az storage container create \ -n USDBLOB_CONTAINER \ --public-access off \ --account-name USDAZURE_STORAGE_ACCOUNT_ID Create a service principal and credentials for velero : USD AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` \ AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` \ AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" \ --role "Contributor" --query 'password' -o tsv` \ AZURE_CLIENT_ID=`az ad sp list --display-name "velero" \ --query '[0].appId' -o tsv` Save the service principal credentials in the credentials-velero file: USD cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF You use the credentials-velero file to add Azure as a replication repository. 6.5.6. Additional resources MTC workflow About data copy methods Adding a replication repository to the MTC web console 6.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero') | [
"sudo podman login registry.redhat.io",
"sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"oc run test --image registry.redhat.io/ubi8 --command sleep infinity",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"az login",
"AZURE_RESOURCE_GROUP=Velero_Backups",
"az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1",
"AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"",
"az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot",
"BLOB_CONTAINER=velero",
"az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID",
"AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv` AZURE_CLIENT_ID=`az ad sp list --display-name \"velero\" --query '[0].appId' -o tsv`",
"cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/migrating_from_version_3_to_4/installing-3-4 |
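For the Multicloud Object Gateway step in section 6.5.2, the describe command mentioned there can be run as sketched below. The openshift-storage namespace and the noobaa-admin secret name are assumptions based on a default OpenShift Container Storage deployment; adjust them to match your environment.
oc describe noobaa -n openshift-storage            # shows the S3 endpoint exposed by MCG
oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d; echo
oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d; echo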
Power Management Guide | Power Management Guide Red Hat Enterprise Linux 6 Managing power consumption on Red Hat Enterprise Linux 6 Red Hat Inc. Edited by Marie Dolezelova Red Hat Customer Content Services Jaroslav Skarvada Red Hat Developer Experience Jana Heves Red Hat Customer Content Services Yoana Ruseva Red Hat Customer Content Services Jack Reed Red Hat Customer Content Services Rudiger Landmann Red Hat Customer Content Services Don Domingo Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/power_management_guide/index |
Chapter 4. Disabling monitoring for user-defined projects | Chapter 4. Disabling monitoring for user-defined projects As a dedicated-admin , you can disable monitoring for user-defined projects. You can also exclude individual projects from user workload monitoring. 4.1. Disabling monitoring for user-defined projects By default, monitoring for user-defined projects is enabled. If you do not want to use the built-in monitoring stack to monitor user-defined projects, you can disable it. Prerequisites You logged in to OpenShift Cluster Manager . Procedure From the OpenShift Cluster Manager Hybrid Cloud Console, select a cluster. Click the Settings tab. Click the Enable user workload monitoring check box to unselect the option, and then click Save . User workload monitoring is disabled. The Prometheus, Prometheus Operator, and Thanos Ruler components are stopped in the openshift-user-workload-monitoring project. 4.2. Excluding a user-defined project from monitoring Individual user-defined projects can be excluded from user workload monitoring. To do so, add the openshift.io/user-monitoring label to the project's namespace with a value of false . Procedure Add the label to the project namespace: USD oc label namespace my-project 'openshift.io/user-monitoring=false' To re-enable monitoring, remove the label from the namespace: USD oc label namespace my-project 'openshift.io/user-monitoring-' Note If there were any active monitoring targets for the project, it may take a few minutes for Prometheus to stop scraping them after adding the label. | [
"oc label namespace my-project 'openshift.io/user-monitoring=false'",
"oc label namespace my-project 'openshift.io/user-monitoring-'"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/monitoring/sd-disabling-monitoring-for-user-defined-projects |
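A quick way to confirm the outcome of either procedure is to check the namespace label and the monitoring components. This is a short sketch using the example project name from above:
oc get namespace my-project --show-labels          # openshift.io/user-monitoring=false means the project is excluded
oc get pods -n openshift-user-workload-monitoring  # pods stop (or are absent) once user workload monitoring is disabled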
Chapter 1. Introduction | Chapter 1. Introduction This document has been created to help Red Hat OpenStack Platform partners in their efforts to integrate solutions with Red Hat OpenStack Platform director as the tool used to install and manage the deployment lifecycle of an OpenStack Platform environment. Integration with the director enables seamless adoption of your technology. You can find broad benefits in an optimization of resources, reduction in deployment times and reduction in lifecycle management costs. Looking forward, OpenStack Platform director integration is a strong move toward providing rich integration with existing enterprise management systems and processes. Within the Red Hat product portfolio, tools such as CloudForms are expected to have visibility into director's integrations and provide broader exposure for management of service deployment. 1.1. Partner Integration Requirements You must meet several prerequisites before meaningful integration work can be completed with the director. These requirements are not limited to technical integration and also include various levels of partner solution documentation. The goal is to create a shared understanding of the entire integration as a basis for Red Hat engineering, partner managers, and support resources to facilitate work. The first requirement is related to Red Hat OpenStack Platform solution certification. To be included with OpenStack Platform director, the partner solution must first be certified with Red Hat OpenStack Platform. OpenStack Plug-in Certification Guides Red Hat OpenStack Certification Policy Guide Red Hat OpenStack Certification Workflow Guide OpenStack Application Certification Guides Red Hat OpenStack Application Policy Guide Red Hat OpenStack Application Workflow Guide OpenStack Bare Metal Certification Guides Red Hat OpenStack Platform Hardware Bare Metal Certification Policy Guide Red Hat OpenStack Platform Hardware Bare Metal Certification Workflow Guide | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/partner_integration/introduction |
Chapter 54. CruiseControlTemplate schema reference | Chapter 54. CruiseControlTemplate schema reference Used in: CruiseControlSpec Property Description deployment Template for Cruise Control Deployment . DeploymentTemplate pod Template for Cruise Control Pods . PodTemplate apiService Template for Cruise Control API Service . InternalServiceTemplate podDisruptionBudget Template for Cruise Control PodDisruptionBudget . PodDisruptionBudgetTemplate cruiseControlContainer Template for the Cruise Control container. ContainerTemplate tlsSidecarContainer The tlsSidecarContainer property has been deprecated. Template for the Cruise Control TLS sidecar container. ContainerTemplate serviceAccount Template for the Cruise Control service account. ResourceTemplate | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-cruisecontroltemplate-reference |
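As an illustration of how these template properties fit into a Kafka custom resource, the following fragment sets custom metadata on the Cruise Control Deployment and Pods. This is a sketch only: the cluster name, label, and annotation values are assumptions, and the rest of the Kafka specification is omitted:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster                                         # assumed cluster name
spec:
  # ... kafka, zookeeper, and core cruiseControl configuration omitted ...
  cruiseControl:
    template:
      deployment:
        metadata:
          labels:
            app.kubernetes.io/part-of: capacity-planning   # illustrative label
      pod:
        metadata:
          annotations:
            example.com/owner: platform-team               # illustrative annotation
Template customizations of this kind adjust the generated Kubernetes resources without changing how the Cluster Operator manages Cruise Control itself.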
Chapter 3. Planned changes to naming convention for Windows build artifacts | Chapter 3. Planned changes to naming convention for Windows build artifacts From October 2024 onward, Red Hat plans to introduce naming changes for some files that are distributed as part of Red Hat build of OpenJDK releases for Windows Server platforms. These file naming changes will affect both the .zip archives and the .msi installers that Red Hat provides for the JDK, JRE, and debuginfo packages for Red Hat build of OpenJDK versions 8, 11, and 17. The aim of this change is to adopt a common naming convention that is consistent across all versions of OpenJDK that Red Hat supports. Red Hat build of OpenJDK versions 8, 11, and 17 will be aligned with the naming convention that Red Hat has already adopted for Red Hat build of OpenJDK 21. This means that Red Hat build of OpenJDK 21 will not require any naming changes. These planned changes do not affect the files for the Linux portable builds of any Red Hat build of OpenJDK version. Red Hat build of OpenJDK 8.0.422 is the last release where Red Hat plans to use the old naming convention for Windows artifacts. The following list provides an example of how the planned naming changes will affect each file for future releases of Red Hat build of OpenJDK 8: MSI installer Old file name: java-1.8.0-openjdk-1.8.0. <version> .redhat.windows.x86_64.msi New file name: java-1.8.0-openjdk-1.8.0. <version> .win.x86_64.msi .zip archive for JDK package Old file name: java-1.8.0-openjdk-1.8.0. <version> .redhat.windows.x86_64.zip New file name: java-1.8.0-openjdk-1.8.0. <version> .win.jdk.x86_64.zip .zip archive for JRE package Old file name: java-1.8.0-openjdk-jre-1.8.0. <version> .redhat.windows.x86_64.zip New file name: java-1.8.0-openjdk-1.8.0. <version> .win.jre.x86_64.zip .zip archive for debuginfo package Old file name: java-1.8.0-openjdk-1.8.0. <version> .redhat.windows.x86_64.debuginfo.zip New file name: java-1.8.0-openjdk-1.8.0. <version> .win.debuginfo.x86_64.zip | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.422/rn-openjdk11024-name-changes-for-windows-attributes_openjdk |
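Scripts that download or verify these Windows artifacts by file name need to handle both conventions during the transition. The following shell sketch is illustrative only; the version string and the choice of JDK package are hypothetical placeholders, not actual release values:
$ VERSION="432.b06"   # hypothetical placeholder for the <version> string
$ OLD_NAME="java-1.8.0-openjdk-1.8.0.${VERSION}.redhat.windows.x86_64.zip"
$ NEW_NAME="java-1.8.0-openjdk-1.8.0.${VERSION}.win.jdk.x86_64.zip"
$ for f in "$NEW_NAME" "$OLD_NAME"; do [ -f "$f" ] && echo "Using artifact: $f" && break; done
Checking for the new .win.jdk name first and falling back to the old .redhat.windows name keeps such automation working on both sides of the renaming boundary.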
Chapter 5. Securing a service network | Chapter 5. Securing a service network Skupper provides default, built-in security that scales across clusters and clouds. This section describes additional security you can configure. See Securing a service network using policies for information about creating granular policies for each cluster. 5.1. Restricting access to services using a Kubernetes network policy By default, if you expose a service on the service network, that service is also accessible from other namespaces in the cluster. You can avoid this situation when creating a site using the --create-network-policy option. Procedure Create the service network router with a Kubernetes network policy: USD skupper init --create-network-policy Check the site status: USD skupper status The output should be similar to the following: You can now expose services on the service network and those services are not accessible from other namespaces in the cluster. 5.2. Applying TLS to TCP or HTTP2 traffic on the service network By default, the traffic between sites is encrypted, however the traffic between the service pod and the router pod is not encrypted. For services exposed as TCP or HTTP2, the traffic between the pod and the router pod can be encrypted using TLS. Prerequisites Two or more linked sites A TCP or HTTP2 frontend and backend service Procedure Deploy your backend service. Expose your backend deployment on the service network, enabling TLS. For example, if you want to expose a TCP service: USD skupper expose deployment <deployment-name> --port 443 --enable-tls Enabling TLS creates the necessary certificates required for TLS backends and stores them in a secret named skupper-tls-<deployment-name> . Modify the backend deployment to include the generated certificates, for example: ... spec: containers: ... command: ... - "/certs/tls.key" - "/certs/tls.crt" ... volumeMounts: ... - mountPath: /certs name: certs readOnly: true volumes: - name: index-html configMap: name: index-html - name: certs secret: secretName: skupper-tls-<deployment-name> Each site creates the necessary certificates required for TLS clients and stores them in a secret named skupper-service-client . Modify the frontend deployment to include the generated certificates, for example: spec: template: spec: containers: ... volumeMounts: - name: certs mountPath: /tmp/certs/skupper-service-client ... volumes: - name: certs secret: secretName: skupper-service-client Test calling the service from a TLS enabled frontend. | [
"skupper init --create-network-policy",
"skupper status",
"Skupper enabled for namespace 'west'. It is not connected to any other sites.",
"skupper expose deployment <deployment-name> --port 443 --enable-tls",
"spec: containers: command: - \"/certs/tls.key\" - \"/certs/tls.crt\" volumeMounts: - mountPath: /certs name: certs readOnly: true volumes: - name: index-html configMap: name: index-html - name: certs secret: secretName: skupper-tls-<deployment-name>",
"spec: template: spec: containers: volumeMounts: - name: certs mountPath: /tmp/certs/skupper-service-client volumes: - name: certs secret: secretName: skupper-service-client"
]
| https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/using_service_interconnect/built-in-security-options |
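After you enable TLS on an exposed service, it can be useful to confirm that the secrets described above were generated before you mount them into the deployments. The following check is a sketch that assumes a backend deployment named backend and a standard kubectl client in the site namespace:
$ kubectl get secret skupper-tls-backend skupper-service-client
$ kubectl describe secret skupper-tls-backend
The first command verifies that both the server-side and client-side secrets exist; the second shows the keys, such as tls.crt and tls.key, that the volume mounts in the examples above rely on.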
7.46. e2fsprogs | 7.46. e2fsprogs 7.46.1. RHBA-2013:0455 - e2fsprogs bug fix update Updated e2fsprogs packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The e2fsprogs packages provide a number of utilities for creating, checking, modifying, and correcting any inconsistencies in the ext2 file systems. Bug Fixes BZ#806137 On a corrupted file system, the "mke2fs -S" command could remove files instead of attempting to recover them. This bug has been fixed; the "mke2fs -S" command writes metadata properly and no longer removes files instead of recovering them. BZ# 813820 The resize2fs(8) man page did not list an ext4 file system as capable of on-line resizing. This omission has been fixed and the resize2fs(8) man page now includes all file systems that can be resized on-line. BZ#858338 A special flag was used to indicate blocks allocated beyond the end of file on an ext4 file system. This flag was sometimes mishandled, resulting in file system corruption. Both the kernel and user space have been reworked to eliminate the use of this flag. Enhancement BZ#824126 Previously, users could use the e2fsck utility on a mounted file system, although it was strongly recommended not to do so. Using the utility on a mounted file system led to file system corruption. With this update, e2fsck opens the file system exclusively and fails when the file system is busy. This behavior avoids possible corruption of the mounted file system. Users of e2fsprogs are advised to upgrade to these updated packages, which fix these bugs and add this enhancement 7.46.2. RHBA-2013:1502 - e2fsprogs bug fix update Updated e2fsprogs packages that fix one bug are now available for Red Hat Enterprise Linux 6. The e2fsprogs packages provide a number of utilities for creating, checking, modifying, and correcting any inconsistencies in the ext2 file systems. Bug Fix BZ# 1023351 The resize2fs utility did not properly handle resizing of an ext4 file system to a smaller size. As a consequence, files containing many extents could become corrupted if they were moved during the resize process. With this update, resize2fs now maintains a consistent extent tree when moving files containing many extents, and such files no longer become corrupted in this scenario. Users of e2fsprogs are advised to upgrade to these updated packages, which fix this bug. 7.46.3. RHBA-2013:0970 - e2fsprogs bug fix update Updated e2fsprogs packages that fix one bug are now available for Red Hat Enterprise Linux 6. The e2fsprogs packages provide a number of utilities for creating, checking, modifying, and correcting any inconsistencies in the ext2 file systems. Bug Fix BZ# 974193 Some ext4 extent tree corruptions were not detected or repaired by e2fsck. Inconsistencies related to overlapping interior or leaf nodes in the extent tree were not detected, and the file system remained in an inconsistent state after an e2fsck. These inconsistencies were then detected by the kernel at run time. e2fsck is now able to detect and repair this class of corruptions in the file system. Users of e2fsprogs are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/e2fsprogs |
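The exclusive-open enhancement and the resize fix above both matter in the common offline check-and-shrink workflow. A typical sequence looks like the following sketch; the device and mount point names are assumptions:
$ umount /data                      # e2fsck now refuses to run while the file system is busy
$ e2fsck -f /dev/vg0/lv_data        # full check before resizing
$ resize2fs /dev/vg0/lv_data 20G    # shrink the file system to 20 GiB
$ mount /dev/vg0/lv_data /data
Running the check on the unmounted device avoids the corruption scenario that the exclusive-open behavior in BZ#824126 is designed to prevent.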
Chapter 5. VLAN-aware instances | Chapter 5. VLAN-aware instances In Red Hat OpenStack Services on OpenShift (RHOSO) environments, there are two ways to associate an instance with a VLAN: VLAN trunks or VLAN transparent networks. A trunk is a collection of ports that enables you to route network traffic to multiple VLANs by using tags. Compared to access ports which can only send and receive network traffic on one VLAN, trunks typically offer lower latency and higher bandwidth. In RHOSO environments, a trunk consists of a parent port with one or more subports associated with the parent. VLAN trunks support VLAN-aware instances by combining VLANs into a single trunked port. For example, a project data network can use VLANs or tunneling segmentation (GENEVE), while the instances see the traffic tagged with VLAN IDs. Network packets are tagged immediately before they are injected to the instance and do not need to be tagged throughout the entire network. With a VLAN transparent network, you set up VLAN tagging in the VM instances. The VLAN tags are transferred over the network and consumed by the instances on the same VLAN, and ignored by other instances and devices. In a VLAN transparent network, the VLANs are managed in the instance. You do not need to set up the VLAN in the OpenStack Networking Service (neutron). The following table compares certain features of VLAN trunks and VLAN transparent networks: Trunk Transparent Mechanism driver support ML2/OVN ML2/OVN VLAN setup managed by OpenStack Networking Service (neutron) VM instance IP assignment Assigned by DHCP. Configured in the instance. VLAN ID Fixed. Instances must use the VLAN ID configured in the trunk. Flexible. You can set the VLAN ID in the instance. Implementing a trunk for VLAN-tagged traffic consists of the following steps: Create a parent port, and use it to create a trunk. Create one or more subports and associate them with the parent port. Create a sub-interface that tags traffic for the VLAN associated with the subport. When you create an instance, specify the parent port ID as the vNIC for the instance. This section contains the following topics: Section 5.1, "Creating a trunk" Section 5.2, "Adding subports to the trunk" Section 5.3, "Understanding trunk states" Section 5.4, "Configuring an instance to use a trunk" Section 5.5, "Enabling VLAN transparency" 5.1. Creating a trunk In Red Hat OpenStack Services on OpenShift (RHOSO) environments, the first step for making an instance VLAN-aware is to create a trunk. When you create a trunk, you start by first creating a parent port. , using the parent port, you create the trunk. During trunk creation, the RHOSO Networking service (neutron) adds a trunk connection to the parent port. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. 
Identify the network connected to the instances that you want to give access to the trunked VLANs: USD openstack network list -c Name -c Subnets --max-width=55 Sample output +-------------+---------------------------------------+ | Name | Subnets | +-------------+---------------------------------------+ | private | 47d34cf0-0dd2-49bd-a985-67311d80c5c4, | | | 82014d36-9e60-43eb-92fc-74674573f4e8, | | | d7535565-113f-4192-baa6-da21f301f141 | | private2 | 7ee56cef-83c0-40d1-b4e7-5287dae1c23c | | public | 49dda67d-814e-457b-b14b-77ef32935c0f, | | | 6745edd4-d15f-4971-89bf-70307b0ad2f1, | | | cc3f81bb-4d55-4ead-aad4-5362a7ca5b04 | | lb-mgmt-net | 5ca08724-568c-4030-93eb-f2e286570a25 | +-------------+---------------------------------------+ Create the parent trunk port, and attach it to the network that the instances connect to. Example In this example, a port named parent-trunk-port is created on the public network. This port is the parent port, as you can use it to create subports: USD openstack port create --network public parent-trunk-port Sample output Create a trunk using the parent port. Example In this example, the trunk is named trunk1 , and its parent port is named parent-trunk-port : USD openstack network trunk create --parent-port parent-trunk-port trunk1 Sample output Verification View the trunk connection: USD openstack network trunk list --max-width=72 Sample output View the details of the trunk connection: USD openstack network trunk show parent-trunk Sample output steps Proceed to Section 5.2, "Adding subports to the trunk" . Additional resources port create in the Command line interface reference network trunk create in the Command line interface reference network trunk list in the Command line interface reference network trunk show in the Command line interface reference Section 5.3, "Understanding trunk states" 5.2. Adding subports to the trunk In Red Hat OpenStack Services on OpenShift (RHOSO) environments, after you have created the trunk, the step for making an instance VLAN-aware is to create one or more subports. Subports are children of the trunk parent port. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. You have a trunk associated on the network that connects to instances that you want to give access to the trunked VLANs. Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Obtain the MAC address of the parent port. 
Retain this name, because you will need it later: Example USD openstack port show parent-trunk-port --max-width=72 Sample output +-------------------------+--------------------------------------------+ | Field | Value | +-------------------------+--------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2024-09-25T20:18:40Z | | data_plane_status | None | | description | | | device_id | | | device_owner | | | device_profile | None | | dns_assignment | fqdn='host-10-0-0-236.openstacklocal.', | | | hostname='host-10-0-0-236', | | | ip_address='10.0.0.236' | | | fqdn='host-2002-c000-200-- | | | 64.openstacklocal.', | | | hostname='host-2002-c000-200--64', | | | ip_address='2002:c000:200::64' | | dns_domain | | | dns_name | | | extra_dhcp_opts | | | fixed_ips | ip_address='10.0.0.236', subnet_id='6745ed | | | d4-d15f-4971-89bf-70307b0ad2f1' | | | ip_address='2002:c000:200::64', subnet_id= | | | '49dda67d-814e-457b-b14b-77ef32935c0f' | | id | 530ff46e-b285-4ad7-a77a-7dca1fb9174d | | ip_allocation | immediate | | mac_address | fa:16:3e:0f:b8:cb | | name | parent-trunk-port | | network_id | bcdb3cc0-8c0b-4d2d-813c-e141bb97aa8f | | numa_affinity_policy | None | | port_security_enabled | True | | project_id | 24089d2fe1a94dd29ca2f665794fbe92 | | propagate_uplink_status | None | | qos_network_policy_id | None | | qos_policy_id | None | | resource_request | None | | revision_number | 1 | | security_group_ids | 9bf70539-31b0-47e5-a0ea-3ee409de0499 | | status | DOWN | | tags | | | trunk_details | {'trunk_id': | | | 'ef2aff85-9e51-43d4-ab28-2ab833f049b3', | | | 'sub_ports': []} | | updated_at | 2024-09-25T20:18:40Z | +-------------------------+--------------------------------------------+ Create a subport of the parent port for the trunk. Example In this example, a port is created, subport1 . 
By specifying the MAC address assigned to the parent port, fa:16:3e:33:c4:75 , the port created becomes a subport of the parent port: USD openstack port create --network private --mac-address fa:16:3e:33:c4:75 subport1 Sample output +-------------------------+--------------------------------------------+ | Field | Value | +-------------------------+--------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2024-09-25T20:19:28Z | | data_plane_status | None | | description | | | device_id | | | device_owner | | | device_profile | None | | dns_assignment | fqdn='host-10-0-24-31.openstacklocal.', | | | hostname='host-10-0-24-31', | | | ip_address='10.0.24.31' | | dns_domain | | | dns_name | | | extra_dhcp_opts | | | fixed_ips | ip_address='10.0.24.31', subnet_id='47d34c | | | f0-0dd2-49bd-a985-67311d80c5c4' | | id | 4ce8382f-5efc-4794-83f8-1f89ef7efe68 | | ip_allocation | immediate | | mac_address | fa:16:3e:0f:b8:cb | | name | subport1 | | network_id | 317be3d3-5265-43f7-b52b-930e3fd19b8b | | numa_affinity_policy | None | | port_security_enabled | True | | project_id | 24089d2fe1a94dd29ca2f665794fbe92 | | propagate_uplink_status | None | | qos_network_policy_id | None | | qos_policy_id | None | | resource_request | None | | revision_number | 1 | | security_group_ids | 9bf70539-31b0-47e5-a0ea-3ee409de0499 | | status | DOWN | | tags | | | trunk_details | None | | updated_at | 2024-09-25T20:19:28Z | +-------------------------+--------------------------------------------+ Note If you receive the error HttpException: Conflict , confirm that you are creating the subport on a different network to the one that has the parent trunk port. This example uses the public network for the parent trunk port, and private for the subport. Associate the port with the trunk. Example In this example, subport1 is associated with trunk1 . The segmentation type is vlan and the segmentation ID, the VLAN ID, is 55 . The type and ID are attributes from the network ( private ) that was used to create subport1 in an earlier command: USD openstack network trunk set --subport port=subport1,\ segmentation-type=vlan,segmentation-id=55 trunk1 steps Proceed to Section 5.4, "Configuring an instance to use a trunk" . Additional resources port create in the Command line interface reference network trunk set in the Command line interface reference Section 5.3, "Understanding trunk states" 5.3. Understanding trunk states The table that follows describes the various state values for trunks in a Red Hat OpenStack Services on OpenShift (RHOSO) environment: Table 5.1. Valid trunk state values State field value Description ACTIVE The trunk is working as expected and there are no current requests. DOWN The virtual and physical resources for the trunk are not in sync. This can be a temporary state during negotiation. BUILD There has been a request and the resources are being provisioned. After successful completion the trunk returns to ACTIVE . DEGRADED The provisioning request did not complete, so the trunk has only been partially provisioned. It is recommended to remove the subports and try again. ERROR The provisioning request was unsuccessful. Remove the resource that caused the error to return the trunk to a healthier state. Do not add more subports while in the ERROR state, as this can cause more issues. 
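To check which of these states a trunk is currently in, query its status field directly. A minimal example, assuming the trunk1 trunk created earlier in this chapter:
$ openstack network trunk show trunk1 -c status -f value
A value of ACTIVE indicates that the trunk and its subports are provisioned and in sync.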
Additional resources Section 5.2, "Adding subports to the trunk" Section 5.1, "Creating a trunk" Section 5.4, "Configuring an instance to use a trunk" 5.4. Configuring an instance to use a trunk In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can configure an instance to use a trunk as its connection to a network. Compared to access ports which can only send and receive network traffic on one VLAN, trunks typically offer lower latency and higher bandwidth. You must configure the VM instance operating system to use the MAC address that the RHOSO Networking service (neutron) assigned to the subport. You can also configure the subport to use a specific MAC address during the subport creation step. Prerequisites The administrator has created a project for you and has provided you with a clouds.yaml file for you to access the cloud. The python-openstackclient package resides on your workstation. You have a trunk that you can configure your instances to connect to. For more information, see Section 5.1, "Creating a trunk" . Procedure Confirm that the system OS_CLOUD variable is set for your cloud: USD echo USDOS_CLOUD my_cloud Reset the variable if necessary: USD export OS_CLOUD=my_other_cloud As an alternative, you can specify the cloud name by adding the --os-cloud <cloud_name> option each time you run an openstack command. Obtain the trunk name and the parent port ID that you want to connect your instance to. Retain this information, because you will need it later: Example USD openstack network trunk list -c Name -c "Parent Port" Sample output +--------+--------------------------------------+ | Name | Parent Port | +--------+--------------------------------------+ | trunk1 | 530ff46e-b285-4ad7-a77a-7dca1fb9174d | +--------+--------------------------------------+ Create an instance that uses the parent port-id as its vNIC. 
Example In this example, an instance, testInstance , is created and connected to the parent port by specifying the port ID, 530ff46e-b285-4ad7-a77a-7dca1fb9174d : USD openstack server create --image cirros --flavor m1.tiny \ --security-group default --key-name sshaccess \ --nic port-id=530ff46e-b285-4ad7-a77a-7dca1fb9174d testInstance Sample output +--------------------------------------+---------------------------------+ | Property | Value | +--------------------------------------+---------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hostname | testinstance | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | | | OS-EXT-SRV-ATTR:kernel_id | | | OS-EXT-SRV-ATTR:launch_index | 0 | | OS-EXT-SRV-ATTR:ramdisk_id | | | OS-EXT-SRV-ATTR:reservation_id | r-juqco0el | | OS-EXT-SRV-ATTR:root_device_name | - | | OS-EXT-SRV-ATTR:user_data | - | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | uMyL8PnZRBwQ | | config_drive | | | created | 2024-09-25T20:22:19Z | | description | - | | flavor | m1.tiny (1) | | hostId | | | host_status | | | id | 88b7aede-1305-4d91-a180-67e7eac | | | 8b70d | | image | cirros (568372f7-15df-4e61-a05f | | | -10954f79a3c4) | | key_name | sshaccess | | locked | False | | metadata | {} | | name | testInstance | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | security_groups | default | | status | BUILD | | tags | [] | | tenant_id | 745d33000ac74d30a77539f8920555e | | | 7 | | updated | 2024-09-25T20:22:19Z | | user_id | 8c4aea738d774967b4ef388eb41fef5 | | | e | +--------------------------------------+---------------------------------+ Additional resources network trunk create in the Command line interface reference network trunk list in the Command line interface reference network trunk show in the Command line interface reference 5.5. Enabling VLAN transparency In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you can enable VLAN transparency if you need to send VLAN tagged traffic between virtual machine (VM) instances. In a VLAN transparent network you can configure the VLANS directly in the VMs without configuring them in the RHOSO Networking service (neutron). Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Provider network of type Local, VLAN, VXLAN, or GENEVE. Do not use VLAN transparency in deployments with flat type provider networks. Ensure that the external switch supports 802.1q VLAN stacking using ethertype 0x8100 on both VLANs. OVN VLAN transparency does not support 802.1ad QinQ with outer provider VLAN ethertype set to 0x88A8 or 0x9100. Procedure Create a YAML file and add the following content: Apply the updated OpenStackControlPlane CR configuration: USD oc apply -f <control_plane_update.yaml> Replace <control_plane_update.yaml> with the name of the YAML file that contains your update. Wait until Red Hat OpenShift Container Platform (RHOCP) creates the resources related to the OpenStackControlPlane CR. 
Check the status of the control plane deployment: USD oc get openstackcontrolplane -n OPENSTACK Sample output NAME STATUS MESSAGE openstack-control-plane Unknown Setup started The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the get command to track deployment progress. Access the remote shell for the OpenStackClient pod from your workstation: Confirm that the Networking service has successfully loaded the vlan_transparent extension: USD openstack extension list --network --max-width=72 \ | grep vlan-transparent Sample output If the Networking service has successfully loaded the loaded the vlan_transparent extension, you should see output similar to the following: | Vlantransparent | vlan-transparent | Provides Vlan | | | | Transparent Networks | Create the network using the --transparent-vlan argument. Example USD openstack network create <network-name> --transparent-vlan Replace <network-name> with the name of the network that you are creating. Exit the openstackclient pod: USD exit Set up a VLAN interface on each participating VM. Set the interface MTU to 4 bytes less than the MTU of the underlay network to accommodate the extra tagging required by VLAN transparency. For example, if the underlay network MTU is 1500 , set the interface MTU to 1496 . The following example command adds a VLAN interface on eth0 with an MTU of 1496 . The VLAN is 50 and the interface name is vlan50 : Example USD ip link add link eth0 name vlan50 type vlan id 50 mtu 1496 USD ip link set vlan50 up USD ip addr add 192.128.111.3/24 dev vlan50 Access the remote shell for the OpenStackClient pod from your workstation: Set --allowed-address on the VM port. Set the allowed address to the IP address you created on the VLAN interface inside the VM. Optionally, you can also set the VLAN interface MAC address. Note An alternative to setting an allowed address pair, is to disable port security on the port by using the port set --disable-port-security command. Example The following example sets the IP address to 192.128.111.3 with the optional MAC address 00:40:96:a8:45:c4 on port fv82gwk3-qq2e-yu93-go31-56w7sf476mm0 : USD openstack port set --allowed-address ip-address=192.128.111.3,\ mac-address=00:40:96:a8:45:c4 fv82gwk3-qq2e-yu93-go31-56w7sf476mm0 Exit the openstackclient pod: USD exit Verification Ping between two VMs on the VLAN using the VLAN interface name IP address that you set in an earlier step, for example, vlan50 . Use tcpdump on eth0 to see if the packets arrive with the VLAN tag intact. Additional resources network create in the Command line interface reference port set in the Command line interface reference | [
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack network list -c Name -c Subnets --max-width=55",
"+-------------+---------------------------------------+ | Name | Subnets | +-------------+---------------------------------------+ | private | 47d34cf0-0dd2-49bd-a985-67311d80c5c4, | | | 82014d36-9e60-43eb-92fc-74674573f4e8, | | | d7535565-113f-4192-baa6-da21f301f141 | | private2 | 7ee56cef-83c0-40d1-b4e7-5287dae1c23c | | public | 49dda67d-814e-457b-b14b-77ef32935c0f, | | | 6745edd4-d15f-4971-89bf-70307b0ad2f1, | | | cc3f81bb-4d55-4ead-aad4-5362a7ca5b04 | | lb-mgmt-net | 5ca08724-568c-4030-93eb-f2e286570a25 | +-------------+---------------------------------------+",
"openstack port create --network public parent-trunk-port",
"+-------------------------+--------------------------------------------+ | Field | Value | +-------------------------+--------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2024-09-25T20:18:40Z | | data_plane_status | None | | description | | | device_id | | | device_owner | | | device_profile | None | | dns_assignment | fqdn='host-10-0-0-236.openstacklocal.', | | | hostname='host-10-0-0-236', | | | ip_address='10.0.0.236' | | | fqdn='host-2002-c000-200-- | | | 64.openstacklocal.', | | | hostname='host-2002-c000-200--64', | | | ip_address='2002:c000:200::64' | | dns_domain | | | dns_name | | | extra_dhcp_opts | | | fixed_ips | ip_address='10.0.0.236', subnet_id='6745ed | | | d4-d15f-4971-89bf-70307b0ad2f1' | | | ip_address='2002:c000:200::64', subnet_id= | | | '49dda67d-814e-457b-b14b-77ef32935c0f' | | id | 530ff46e-b285-4ad7-a77a-7dca1fb9174d | | ip_allocation | immediate | | mac_address | fa:16:3e:0f:b8:cb | | name | parent-trunk-port | | network_id | bcdb3cc0-8c0b-4d2d-813c-e141bb97aa8f | | numa_affinity_policy | None | | port_security_enabled | True | | project_id | 24089d2fe1a94dd29ca2f665794fbe92 | | propagate_uplink_status | None | | qos_network_policy_id | None | | qos_policy_id | None | | resource_request | None | | revision_number | 1 | | security_group_ids | 9bf70539-31b0-47e5-a0ea-3ee409de0499 | | status | DOWN | | tags | | | trunk_details | {'trunk_id': | | | 'ef2aff85-9e51-43d4-ab28-2ab833f049b3', | | | 'sub_ports': []} | | updated_at | 2024-09-25T20:18:40Z | +-------------------------+--------------------------------------------+",
"openstack network trunk create --parent-port parent-trunk-port trunk1",
"+-----------------+--------------------------------------+ | Field | Value | +-----------------+--------------------------------------+ | admin_state_up | UP | | created_at | 2024-09-25T20:19:43Z | | description | | | id | ef2aff85-9e51-43d4-ab28-2ab833f049b3 | | name | trunk1 | | port_id | 530ff46e-b285-4ad7-a77a-7dca1fb9174d | | project_id | 24089d2fe1a94dd29ca2f665794fbe92 | | revision_number | 1 | | status | ACTIVE | | sub_ports | | | tags | [] | | tenant_id | 24089d2fe1a94dd29ca2f665794fbe92 | | updated_at | 2024-09-25T20:19:43Z | +-----------------+--------------------------------------+",
"openstack network trunk list --max-width=72",
"+--------------------+--------------+--------------------+-------------+ | ID | Name | Parent Port | Description | +--------------------+--------------+--------------------+-------------+ | ef2aff85-9e51-43d4 | parent-trunk | 530ff46e-b285-4ad7 | | | -ab28-2ab833f049b3 | | -a77a-7dca1fb9174d | | +--------------------+--------------+--------------------+-------------+",
"openstack network trunk show parent-trunk",
"+-----------------+--------------------------------------+ | Field | Value | +-----------------+--------------------------------------+ | admin_state_up | UP | | created_at | 2024-09-25T20:19:43Z | | description | | | id | ef2aff85-9e51-43d4-ab28-2ab833f049b3 | | name | trunk1 | | port_id | 530ff46e-b285-4ad7-a77a-7dca1fb9174d | | project_id | 24089d2fe1a94dd29ca2f665794fbe92 | | revision_number | 1 | | status | ACTIVE | | sub_ports | | | tags | [] | | tenant_id | 24089d2fe1a94dd29ca2f665794fbe92 | | updated_at | 2024-09-25T20:19:43Z | +-----------------+--------------------------------------+",
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack port show parent-trunk-port --max-width=72",
"+-------------------------+--------------------------------------------+ | Field | Value | +-------------------------+--------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2024-09-25T20:18:40Z | | data_plane_status | None | | description | | | device_id | | | device_owner | | | device_profile | None | | dns_assignment | fqdn='host-10-0-0-236.openstacklocal.', | | | hostname='host-10-0-0-236', | | | ip_address='10.0.0.236' | | | fqdn='host-2002-c000-200-- | | | 64.openstacklocal.', | | | hostname='host-2002-c000-200--64', | | | ip_address='2002:c000:200::64' | | dns_domain | | | dns_name | | | extra_dhcp_opts | | | fixed_ips | ip_address='10.0.0.236', subnet_id='6745ed | | | d4-d15f-4971-89bf-70307b0ad2f1' | | | ip_address='2002:c000:200::64', subnet_id= | | | '49dda67d-814e-457b-b14b-77ef32935c0f' | | id | 530ff46e-b285-4ad7-a77a-7dca1fb9174d | | ip_allocation | immediate | | mac_address | fa:16:3e:0f:b8:cb | | name | parent-trunk-port | | network_id | bcdb3cc0-8c0b-4d2d-813c-e141bb97aa8f | | numa_affinity_policy | None | | port_security_enabled | True | | project_id | 24089d2fe1a94dd29ca2f665794fbe92 | | propagate_uplink_status | None | | qos_network_policy_id | None | | qos_policy_id | None | | resource_request | None | | revision_number | 1 | | security_group_ids | 9bf70539-31b0-47e5-a0ea-3ee409de0499 | | status | DOWN | | tags | | | trunk_details | {'trunk_id': | | | 'ef2aff85-9e51-43d4-ab28-2ab833f049b3', | | | 'sub_ports': []} | | updated_at | 2024-09-25T20:18:40Z | +-------------------------+--------------------------------------------+",
"openstack port create --network private --mac-address fa:16:3e:33:c4:75 subport1",
"+-------------------------+--------------------------------------------+ | Field | Value | +-------------------------+--------------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2024-09-25T20:19:28Z | | data_plane_status | None | | description | | | device_id | | | device_owner | | | device_profile | None | | dns_assignment | fqdn='host-10-0-24-31.openstacklocal.', | | | hostname='host-10-0-24-31', | | | ip_address='10.0.24.31' | | dns_domain | | | dns_name | | | extra_dhcp_opts | | | fixed_ips | ip_address='10.0.24.31', subnet_id='47d34c | | | f0-0dd2-49bd-a985-67311d80c5c4' | | id | 4ce8382f-5efc-4794-83f8-1f89ef7efe68 | | ip_allocation | immediate | | mac_address | fa:16:3e:0f:b8:cb | | name | subport1 | | network_id | 317be3d3-5265-43f7-b52b-930e3fd19b8b | | numa_affinity_policy | None | | port_security_enabled | True | | project_id | 24089d2fe1a94dd29ca2f665794fbe92 | | propagate_uplink_status | None | | qos_network_policy_id | None | | qos_policy_id | None | | resource_request | None | | revision_number | 1 | | security_group_ids | 9bf70539-31b0-47e5-a0ea-3ee409de0499 | | status | DOWN | | tags | | | trunk_details | None | | updated_at | 2024-09-25T20:19:28Z | +-------------------------+--------------------------------------------+",
"openstack network trunk set --subport port=subport1, segmentation-type=vlan,segmentation-id=55 trunk1",
"dnf list installed python-openstackclient",
"echo USDOS_CLOUD my_cloud",
"export OS_CLOUD=my_other_cloud",
"openstack network trunk list -c Name -c \"Parent Port\"",
"+--------+--------------------------------------+ | Name | Parent Port | +--------+--------------------------------------+ | trunk1 | 530ff46e-b285-4ad7-a77a-7dca1fb9174d | +--------+--------------------------------------+",
"openstack server create --image cirros --flavor m1.tiny --security-group default --key-name sshaccess --nic port-id=530ff46e-b285-4ad7-a77a-7dca1fb9174d testInstance",
"+--------------------------------------+---------------------------------+ | Property | Value | +--------------------------------------+---------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hostname | testinstance | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | | | OS-EXT-SRV-ATTR:kernel_id | | | OS-EXT-SRV-ATTR:launch_index | 0 | | OS-EXT-SRV-ATTR:ramdisk_id | | | OS-EXT-SRV-ATTR:reservation_id | r-juqco0el | | OS-EXT-SRV-ATTR:root_device_name | - | | OS-EXT-SRV-ATTR:user_data | - | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | uMyL8PnZRBwQ | | config_drive | | | created | 2024-09-25T20:22:19Z | | description | - | | flavor | m1.tiny (1) | | hostId | | | host_status | | | id | 88b7aede-1305-4d91-a180-67e7eac | | | 8b70d | | image | cirros (568372f7-15df-4e61-a05f | | | -10954f79a3c4) | | key_name | sshaccess | | locked | False | | metadata | {} | | name | testInstance | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | security_groups | default | | status | BUILD | | tags | [] | | tenant_id | 745d33000ac74d30a77539f8920555e | | | 7 | | updated | 2024-09-25T20:22:19Z | | user_id | 8c4aea738d774967b4ef388eb41fef5 | | | e | +--------------------------------------+---------------------------------+",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack-control-plane spec: neutron: template: customServiceConfig: | [DEFAULT] vlan_transparent = true",
"oc apply -f <control_plane_update.yaml>",
"oc get openstackcontrolplane -n OPENSTACK",
"NAME STATUS MESSAGE openstack-control-plane Unknown Setup started",
"oc rsh -n openstack openstackclient",
"openstack extension list --network --max-width=72 | grep vlan-transparent",
"| Vlantransparent | vlan-transparent | Provides Vlan | | | | Transparent Networks |",
"openstack network create <network-name> --transparent-vlan",
"exit",
"ip link add link eth0 name vlan50 type vlan id 50 mtu 1496 ip link set vlan50 up ip addr add 192.128.111.3/24 dev vlan50",
"oc rsh -n openstack openstackclient",
"openstack port set --allowed-address ip-address=192.128.111.3, mac-address=00:40:96:a8:45:c4 fv82gwk3-qq2e-yu93-go31-56w7sf476mm0",
"exit"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/managing_networking_resources/vlan-aware-instances_rhoso-mngnet |
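For the trunk example in this chapter, the guest operating system also needs a VLAN sub-interface that matches the subport segmentation ID and uses the MAC address assigned to the subport. The following in-guest sketch assumes the VLAN ID 55 and the subport MAC address from the earlier example, and an instance interface named eth0:
$ sudo ip link add link eth0 name eth0.55 type vlan id 55
$ sudo ip link set dev eth0.55 address fa:16:3e:33:c4:75   # MAC address used when creating the subport
$ sudo ip link set dev eth0.55 up
$ sudo dhclient -v eth0.55   # or configure the subport's fixed IP address statically
With this interface in place, traffic tagged with VLAN 55 on the trunk is delivered to eth0.55 inside the instance.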
Chapter 4. Configuring Red Hat OpenStack Platform director for Service Telemetry Framework | Chapter 4. Configuring Red Hat OpenStack Platform director for Service Telemetry Framework To collect metrics, events, or both, and to send them to the Service Telemetry Framework (STF) storage domain, you must configure the Red Hat OpenStack Platform (RHOSP) overcloud to enable data collection and transport. STF can support both single and multiple clouds. The default configuration in RHOSP and STF is set up for a single-cloud installation. For a single RHOSP overcloud deployment with default configuration, see Section 4.1, "Deploying Red Hat OpenStack Platform overcloud for Service Telemetry Framework using director" . To plan your RHOSP installation and configure STF for multiple clouds, see Section 4.3, "Configuring multiple clouds" . As part of an RHOSP overcloud deployment, you might need to configure additional features in your environment: To disable the data collector services, see Section 4.2, "Disabling Red Hat OpenStack Platform services used with Service Telemetry Framework" . 4.1. Deploying Red Hat OpenStack Platform overcloud for Service Telemetry Framework using director As part of the Red Hat OpenStack Platform (RHOSP) overcloud deployment using director, you must configure the data collectors and the data transport to Service Telemetry Framework (STF). Procedure Section 4.1.1, "Getting CA certificate from Service Telemetry Framework for overcloud configuration" Retrieving the AMQ Interconnect route address Creating the base configuration for STF Configuring the STF connection for the overcloud Deploying the overcloud Validating client-side installation Additional resources For more information about deploying an OpenStack cloud using director, see Director Installation and Usage . To collect data through AMQ Interconnect, see the amqp1 plug-in . 4.1.1. Getting CA certificate from Service Telemetry Framework for overcloud configuration To connect your Red Hat OpenStack Platform (RHOSP) overcloud to Service Telemetry Framework (STF), retrieve the CA certificate of AMQ Interconnect that runs within STF and use the certificate in RHOSP configuration. Procedure View a list of available certificates in STF: USD oc get secrets Retrieve and note the content of the default-interconnect-selfsigned Secret: USD oc get secret/default-interconnect-selfsigned -o jsonpath='{.data.ca\.crt}' | base64 -d 4.1.2. Retrieving the AMQ Interconnect route address When you configure the Red Hat OpenStack Platform (RHOSP) overcloud for Service Telemetry Framework (STF), you must provide the AMQ Interconnect route address in the STF connection file. Procedure Log in to your Red Hat OpenShift Container Platform environment where STF is hosted. Change to the service-telemetry project: USD oc project service-telemetry Retrieve the AMQ Interconnect route address: USD oc get routes -ogo-template='{{ range .items }}{{printf "%s\n" .spec.host }}{{ end }}' | grep "\-5671" default-interconnect-5671-service-telemetry.apps.infra.watch 4.1.3. Creating the base configuration for STF To configure the base parameters that provide compatible data collection and transport for Service Telemetry Framework (STF), you must create a file that defines the default data collection values. Procedure Log in to the undercloud host as the stack user. Create a configuration file called enable-stf.yaml in the /home/stack directory. 
Important Setting EventPipelinePublishers and PipelinePublishers to empty lists results in no event or metric data passing to RHOSP telemetry components, such as Gnocchi or Panko. If you need to send data to additional pipelines, the Ceilometer polling interval of 30 seconds, that you specify in ExtraConfig , might overwhelm the RHOSP telemetry components. You must increase the interval to a larger value, such as 300 , which results in less telemetry resolution in STF. enable-stf.yaml parameter_defaults: # only send to STF, not other publishers EventPipelinePublishers: [] PipelinePublishers: [] # manage the polling and pipeline configuration files for Ceilometer agents ManagePolling: true ManagePipeline: true # enable Ceilometer metrics and events CeilometerQdrPublishMetrics: true CeilometerQdrPublishEvents: true # enable collection of API status CollectdEnableSensubility: true CollectdSensubilityTransport: amqp1 # enable collection of containerized service metrics CollectdEnableLibpodstats: true # set collectd overrides for higher telemetry resolution and extra plugins # to load CollectdConnectionType: amqp1 CollectdAmqpInterval: 5 CollectdDefaultPollingInterval: 5 CollectdExtraPlugins: - vmem # set standard prefixes for where metrics and events are published to QDR MetricsQdrAddresses: - prefix: 'collectd' distribution: multicast - prefix: 'anycast/ceilometer' distribution: multicast ExtraConfig: ceilometer::agent::polling::polling_interval: 30 ceilometer::agent::polling::polling_meters: - cpu - disk.* - ip.* - image.* - memory - memory.* - network.services.vpn.* - network.services.firewall.* - perf.* - port - port.* - switch - switch.* - storage.* - volume.* # to avoid filling the memory buffers if disconnected from the message bus # note: this may need an adjustment if there are many metrics to be sent. collectd::plugin::amqp1::send_queue_limit: 5000 # receive extra information about virtual memory collectd::plugin::vmem::verbose: true # provide name and uuid in addition to hostname for better correlation # to ceilometer data collectd::plugin::virt::hostname_format: "name uuid hostname" # provide the human-friendly name of the virtual instance collectd::plugin::virt::plugin_instance_format: metadata # set memcached collectd plugin to report its metrics by hostname # rather than host IP, ensuring metrics in the dashboard remain uniform collectd::plugin::memcached::instances: local: host: "%{hiera('fqdn_canonical')}" port: 11211 4.1.4. Configuring the STF connection for the overcloud To configure the Service Telemetry Framework (STF) connection, you must create a file that contains the connection configuration of the AMQ Interconnect for the overcloud to the STF deployment. Enable the collection of events and storage of the events in STF and deploy the overcloud. The default configuration is for a single cloud instance with the default message bus topics. For configuration of multiple cloud deployments, see Section 4.3, "Configuring multiple clouds" . Prerequisites Retrieve the CA certificate from the AMQ Interconnect deployed by STF. For more information, see Section 4.1.1, "Getting CA certificate from Service Telemetry Framework for overcloud configuration" . Retrieve the AMQ Interconnect route address. For more information, see Section 4.1.2, "Retrieving the AMQ Interconnect route address" . Procedure Log in to the undercloud host as the stack user. Create a configuration file called stf-connectors.yaml in the /home/stack directory. 
In the stf-connectors.yaml file, configure the MetricsQdrConnectors address to connect the AMQ Interconnect on the overcloud to the STF deployment. You configure the topic addresses for Sensubility, Ceilometer, and collectd in this file to match the defaults in STF. For more information about customizing topics and cloud configuration, see Section 4.3, "Configuring multiple clouds" . stf-connectors.yaml resource_registry: OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-puppet.yaml parameter_defaults: MetricsQdrConnectors: - host: default-interconnect-5671-service-telemetry.apps.infra.watch port: 443 role: edge verifyHostname: false sslProfile: sslProfile MetricsQdrSSLProfiles: - name: sslProfile caCertFileContent: | -----BEGIN CERTIFICATE----- <snip> -----END CERTIFICATE----- CeilometerQdrEventsConfig: driver: amqp topic: cloud1-event CeilometerQdrMetricsConfig: driver: amqp topic: cloud1-metering CollectdAmqpInstances: cloud1-notify: notify: true format: JSON presettle: false cloud1-telemetry: format: JSON presettle: false CollectdSensubilityResultsChannel: sensubility/cloud1-telemetry The resource_registry configuration directly loads the collectd service because you do not include the collectd-write-qdr.yaml environment file for multiple cloud deployments. Replace the host parameter with the value that you retrieved in Section 4.1.2, "Retrieving the AMQ Interconnect route address" . Replace the caCertFileContent parameter with the contents retrieved in Section 4.1.1, "Getting CA certificate from Service Telemetry Framework for overcloud configuration" . Replace the host sub-parameter of MetricsQdrConnectors with the value that you retrieved in Section 4.1.2, "Retrieving the AMQ Interconnect route address" . Set the topic value of CeilometerQdrEventsConfig to define the topic for Ceilometer events. The value is a unique topic identifier for the cloud such as cloud1-event . Set the topic value of CeilometerQdrMetricsConfig to define the topic for Ceilometer metrics. The value is a unique topic identifier for the cloud such as cloud1-metering . Set the CollectdAmqpInstances sub-parameter to define the topic for collectd events. The section name is a unique topic identifier for the cloud such as cloud1-notify . Set the CollectdAmqpInstances sub-parameter to define the topic for collectd metrics. The section name is a unique topic identifier for the cloud such as cloud1-telemetry . Set CollectdSensubilityResultsChannel to define the topic for collectd-sensubility events. The value is a unique topic identifier for the cloud such as sensubility/cloud1-telemetry . Note When you define the topics for collectd and Ceilometer, the value you provide is transposed into the full topic that the Smart Gateway client uses to listen for messages. Ceilometer topic values are transposed into the topic address anycast/ceilometer/<TOPIC>.sample and collectd topic values are transposed into the topic address collectd/<TOPIC> . The value for sensubility is the full topic path and has no transposition from topic value to topic address. For an example of a cloud configuration in the ServiceTelemetry object referring to the full topic address, see the section called "The clouds parameter" . 4.1.5. Deploying the overcloud Deploy or update the overcloud with the required environment files so that data is collected and transmitted to Service Telemetry Framework (STF). Procedure Log in to the undercloud host as the stack user. 
Source the stackrc undercloud credentials file: USD source ~/stackrc Add your data collection and AMQ Interconnect environment files to the stack with your other environment files and deploy the overcloud: (undercloud)USD openstack overcloud deploy --templates \ -e [your environment files] \ -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/ceilometer-write-qdr.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/qdr-edge-only.yaml \ -e /home/stack/enable-stf.yaml \ -e /home/stack/stf-connectors.yaml Include the ceilometer-write-qdr.yaml file to ensure that Ceilometer telemetry and events are sent to STF. Include the qdr-edge-only.yaml file to ensure that the message bus is enabled and connected to STF message bus routers. Include the enable-stf.yaml environment file to ensure that the defaults are configured correctly. Include the stf-connectors.yaml environment file to define the connection to STF. 4.1.6. Validating client-side installation To validate data collection from the Service Telemetry Framework (STF) storage domain, query the data sources for delivered data. To validate individual nodes in the Red Hat OpenStack Platform (RHOSP) deployment, use SSH to connect to the console. Tip Some telemetry data is available only when RHOSP has active workloads. Procedure Log in to an overcloud node, for example, controller-0. Ensure that the metrics_qdr and collection agent containers are running on the node: USD sudo podman container inspect --format '{{.State.Status}}' metrics_qdr collectd ceilometer_agent_notification ceilometer_agent_central running running running running Note Use this command on compute nodes: USD sudo podman container inspect --format '{{.State.Status}}' metrics_qdr collectd ceilometer_agent_compute Return the internal network address on which AMQ Interconnect is running, for example, 172.17.1.44 listening on port 5666 : USD sudo podman exec -it metrics_qdr cat /etc/qpid-dispatch/qdrouterd.conf listener { host: 172.17.1.44 port: 5666 authenticatePeer: no saslMechanisms: ANONYMOUS } Return a list of connections to the local AMQ Interconnect: USD sudo podman exec -it metrics_qdr qdstat --bus=172.17.1.44:5666 --connections Connections id host container role dir security authentication tenant ============================================================================================================================================================================================================================================================================================ 1 default-interconnect-5671-service-telemetry.apps.infra.watch:443 default-interconnect-7458fd4d69-bgzfb edge out TLSv1.2(DHE-RSA-AES256-GCM-SHA384) anonymous-user 12 172.17.1.44:60290 openstack.org/om/container/controller-0/ceilometer-agent-notification/25/5c02cee550f143ec9ea030db5cccba14 normal in no-security no-auth 16 172.17.1.44:36408 metrics normal in no-security anonymous-user 899 172.17.1.44:39500 10a2e99d-1b8a-4329-b48c-4335e5f75c84 normal in no-security no-auth There are four connections: Outbound connection to STF Inbound connection from ceilometer Inbound connection from collectd Inbound connection from our qdstat client The outbound STF connection is provided to the MetricsQdrConnectors host parameter and is the route for the STF storage domain. The other hosts are internal network addresses of the client connections to this AMQ Interconnect. 
To ensure that messages are delivered, list the links, and view the _edge address in the deliv column for delivery of messages: USD sudo podman exec -it metrics_qdr qdstat --bus=172.17.1.44:5666 --links Router Links type dir conn id id peer class addr phs cap pri undel unsett deliv presett psdrop acc rej rel mod delay rate =========================================================================================================================================================== endpoint out 1 5 local _edge 250 0 0 0 2979926 0 0 0 0 2979926 0 0 0 endpoint in 1 6 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 1 7 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint out 1 8 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 1 9 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint out 1 10 250 0 0 0 911 911 0 0 0 0 0 911 0 endpoint in 1 11 250 0 0 0 0 911 0 0 0 0 0 0 0 endpoint out 12 32 local temp.lSY6Mcicol4J2Kp 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 16 41 250 0 0 0 2979924 0 0 0 0 2979924 0 0 0 endpoint in 912 1834 mobile USDmanagement 0 250 0 0 0 1 0 0 1 0 0 0 0 0 endpoint out 912 1835 local temp.9Ok2resI9tmt+CT 250 0 0 0 0 0 0 0 0 0 0 0 0 To list the addresses from RHOSP nodes to STF, connect to Red Hat OpenShift Container Platform to retrieve the AMQ Interconnect pod name and list the connections. List the available AMQ Interconnect pods: USD oc get pods -l application=default-interconnect NAME READY STATUS RESTARTS AGE default-interconnect-7458fd4d69-bgzfb 1/1 Running 0 6d21h Connect to the pod and list the known connections. In this example, there are three edge connections from the RHOSP nodes with connection id 22, 23, and 24: USD oc exec -it default-interconnect-7458fd4d69-bgzfb -- qdstat --connections 2020-04-21 18:25:47.243852 UTC default-interconnect-7458fd4d69-bgzfb Connections id host container role dir security authentication tenant last dlv uptime =============================================================================================================================================================================================== 5 10.129.0.110:48498 bridge-3f5 edge in no-security anonymous-user 000:00:00:02 000:17:36:29 6 10.129.0.111:43254 rcv[default-cloud1-ceil-meter-smartgateway-58f885c76d-xmxwn] edge in no-security anonymous-user 000:00:00:02 000:17:36:20 7 10.130.0.109:50518 rcv[default-cloud1-coll-event-smartgateway-58fbbd4485-rl9bd] normal in no-security anonymous-user - 000:17:36:11 8 10.130.0.110:33802 rcv[default-cloud1-ceil-event-smartgateway-6cfb65478c-g5q82] normal in no-security anonymous-user 000:01:26:18 000:17:36:05 22 10.128.0.1:51948 Router.ceph-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:03 000:22:08:43 23 10.128.0.1:51950 Router.compute-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:03 000:22:08:43 24 10.128.0.1:52082 Router.controller-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:00 000:22:08:34 27 127.0.0.1:42202 c2f541c1-4c97-4b37-a189-a396c08fb079 normal in no-security no-auth 000:00:00:00 000:00:00:00 To view the number of messages delivered by the network, use each address with the oc exec command: USD oc exec -it default-interconnect-7458fd4d69-bgzfb -- qdstat --address 2020-04-21 18:20:10.293258 UTC default-interconnect-7458fd4d69-bgzfb Router Addresses class addr phs distrib pri local remote in out thru fallback ========================================================================================================================== mobile 
anycast/ceilometer/event.sample 0 balanced - 1 0 970 970 0 0 mobile anycast/ceilometer/metering.sample 0 balanced - 1 0 2,344,833 2,344,833 0 0 mobile collectd/notify 0 multicast - 1 0 70 70 0 0 mobile collectd/telemetry 0 multicast - 1 0 216,128,890 216,128,890 0 0 4.2. Disabling Red Hat OpenStack Platform services used with Service Telemetry Framework Disable the services used when deploying Red Hat OpenStack Platform (RHOSP) and connecting it to Service Telemetry Framework (STF). Disabling the services does not remove logs or generated configuration files. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: USD source ~/stackrc Create the disable-stf.yaml environment file: USD cat > ~/disable-stf.yaml <<EOF --- resource_registry: OS::TripleO::Services::CeilometerAgentCentral: OS::Heat::None OS::TripleO::Services::CeilometerAgentNotification: OS::Heat::None OS::TripleO::Services::CeilometerAgentIpmi: OS::Heat::None OS::TripleO::Services::ComputeCeilometerAgent: OS::Heat::None OS::TripleO::Services::Redis: OS::Heat::None OS::TripleO::Services::Collectd: OS::Heat::None OS::TripleO::Services::MetricsQdr: OS::Heat::None EOF Remove the following files from your RHOSP director deployment: ceilometer-write-qdr.yaml qdr-edge-only.yaml enable-stf.yaml stf-connectors.yaml Update the RHOSP overcloud. Ensure that you use the disable-stf.yaml file early in the list of environment files. By adding disable-stf.yaml early in the list, other environment files can override the configuration that would disable the service: (undercloud)USD openstack overcloud deploy --templates \ -e /home/stack/disable-stf.yaml \ -e [your environment files] 4.3. Configuring multiple clouds You can configure multiple Red Hat OpenStack Platform (RHOSP) clouds to target a single instance of Service Telemetry Framework (STF). When you configure multiple clouds, each cloud must send metrics and events on its own unique message bus topic. In the STF deployment, Smart Gateway instances listen on these topics to save information to the common data store. Data that is stored by the Smart Gateway in the data storage domain is filtered by using the metadata that each of the Smart Gateways creates. Figure 4.1. Two RHOSP clouds connect to STF To configure the RHOSP overcloud for a multiple cloud scenario, complete the following tasks: Plan the AMQP address prefixes that you want to use for each cloud. For more information, see Section 4.3.1, "Planning AMQP address prefixes" . Deploy metrics and events consumer Smart Gateways for each cloud to listen on the corresponding address prefixes. For more information, see Section 4.3.2, "Deploying Smart Gateways" . Configure each cloud with a unique domain name. For more information, see Section 4.3.4, "Setting a unique cloud domain" . Create the base configuration for STF. For more information, see Section 4.1.3, "Creating the base configuration for STF" . Configure each cloud to send its metrics and events to STF on the correct address. For more information, see Section 4.3.5, "Creating the Red Hat OpenStack Platform environment file for multiple clouds" . 4.3.1. Planning AMQP address prefixes By default, Red Hat OpenStack Platform (RHOSP) nodes receive data through two data collectors: collectd and Ceilometer. The collectd-sensubility plugin requires a unique address. These components send telemetry data or notifications to the respective AMQP addresses, for example, collectd/telemetry . 
STF Smart Gateways listen on those AMQP addresses for data. To support multiple clouds and to identify which cloud generated the monitoring data, configure each cloud to send data to a unique address. Add a cloud identifier prefix to the second part of the address. The following list shows some example addresses and identifiers: collectd/cloud1-telemetry collectd/cloud1-notify sensubility/cloud1-telemetry anycast/ceilometer/cloud1-metering.sample anycast/ceilometer/cloud1-event.sample collectd/cloud2-telemetry collectd/cloud2-notify sensubility/cloud2-telemetry anycast/ceilometer/cloud2-metering.sample anycast/ceilometer/cloud2-event.sample collectd/us-east-1-telemetry collectd/us-west-3-telemetry 4.3.2. Deploying Smart Gateways You must deploy a Smart Gateway for each of the data collection types for each cloud; one for collectd metrics, one for collectd events, one for Ceilometer metrics, one for Ceilometer events, and one for collectd-sensubility metrics. Configure each of the Smart Gateways to listen on the AMQP address that you define for the corresponding cloud. To define Smart Gateways, configure the clouds parameter in the ServiceTelemetry manifest. When you deploy STF for the first time, Smart Gateway manifests are created that define the initial Smart Gateways for a single cloud. When you deploy Smart Gateways for multiple cloud support, you deploy multiple Smart Gateways for each of the data collection types that handle the metrics and the events data for each cloud. The initial Smart Gateways are defined in cloud1 with the following subscription addresses: collector type default subscription address collectd metrics collectd/telemetry collectd events collectd/notify collectd-sensubility metrics sensubility/telemetry Ceilometer metrics anycast/ceilometer/metering.sample Ceilometer events anycast/ceilometer/event.sample Prerequisites You have determined your cloud naming scheme. For more information about determining your naming scheme, see Section 4.3.1, "Planning AMQP address prefixes" . You have created your list of clouds objects. For more information about creating the content for the clouds parameter, see the section called "The clouds parameter" . Procedure Log in to Red Hat OpenShift Container Platform. Change to the service-telemetry namespace: USD oc project service-telemetry Edit the default ServiceTelemetry object and add a clouds parameter with your configuration: Warning Long cloud names might exceed the maximum pod name of 63 characters. Ensure that the combination of the ServiceTelemetry name default and the clouds.name does not exceed 19 characters. Cloud names cannot contain any special characters, such as - . Limit cloud names to alphanumeric (a-z, 0-9). Topic addresses have no character limitation and can be different from the clouds.name value. USD oc edit stf default apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: ... spec: ... clouds: - name: cloud1 events: collectors: - collectorType: collectd subscriptionAddress: collectd/cloud1-notify - collectorType: ceilometer subscriptionAddress: anycast/ceilometer/cloud1-event.sample metrics: collectors: - collectorType: collectd subscriptionAddress: collectd/cloud1-telemetry - collectorType: sensubility subscriptionAddress: sensubility/cloud1-telemetry - collectorType: ceilometer subscriptionAddress: anycast/ceilometer/cloud1-metering.sample - name: cloud2 events: ... Save the ServiceTelemetry object. Verify that each Smart Gateway is running. 
This can take several minutes depending on the number of Smart Gateways: USD oc get po -l app=smart-gateway NAME READY STATUS RESTARTS AGE default-cloud1-ceil-event-smartgateway-6cfb65478c-g5q82 2/2 Running 0 13h default-cloud1-ceil-meter-smartgateway-58f885c76d-xmxwn 2/2 Running 0 13h default-cloud1-coll-event-smartgateway-58fbbd4485-rl9bd 2/2 Running 0 13h default-cloud1-coll-meter-smartgateway-7c6fc495c4-jn728 2/2 Running 0 13h default-cloud1-sens-meter-smartgateway-8h4tc445a2-mm683 2/2 Running 0 13h 4.3.3. Deleting the default Smart Gateways After you configure Service Telemetry Framework (STF) for multiple clouds, you can delete the default Smart Gateways if they are no longer in use. The Service Telemetry Operator can remove SmartGateway objects that were created but are no longer listed in the ServiceTelemetry clouds list of objects. To enable the removal of SmartGateway objects that are not defined by the clouds parameter, you must set the cloudsRemoveOnMissing parameter to true in the ServiceTelemetry manifest. Tip If you do not want to deploy any Smart Gateways, define an empty clouds list by using the clouds: [] parameter. Warning The cloudsRemoveOnMissing parameter is disabled by default. If you enable the cloudsRemoveOnMissing parameter, you remove any manually-created SmartGateway objects in the current namespace without any possibility to restore. Procedure Define your clouds parameter with the list of cloud objects that you want the Service Telemetry Operator to manage. For more information, see the section called "The clouds parameter" . Edit the ServiceTelemetry object and add the cloudsRemoveOnMissing parameter: apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: ... spec: ... cloudsRemoveOnMissing: true clouds: ... Save the modifications. Verify that the Operator deleted the Smart Gateways. This can take several minutes while the Operators reconcile the changes: USD oc get smartgateways 4.3.4. Setting a unique cloud domain To ensure that AMQ Interconnect router connections from Red Hat OpenStack Platform (RHOSP) to Service Telemetry Framework (STF) are unique and do not conflict, configure the CloudDomain parameter. Warning Ensure that you do not change host or domain names in an existing deployment. Host and domain name configuration is supported in new cloud deployments only. Procedure Create a new environment file, for example, hostnames.yaml . Set the CloudDomain parameter in the environment file, as shown in the following example: hostnames.yaml parameter_defaults: CloudDomain: newyork-west-04 CephStorageHostnameFormat: 'ceph-%index%' ObjectStorageHostnameFormat: 'swift-%index%' ComputeHostnameFormat: 'compute-%index%' Add the new environment file to your deployment. Additional resources Section 4.3.5, "Creating the Red Hat OpenStack Platform environment file for multiple clouds" Core Overcloud Parameters in the Overcloud Parameters guide 4.3.5. Creating the Red Hat OpenStack Platform environment file for multiple clouds To label traffic according to the cloud of origin, you must create a configuration with cloud-specific instance names. Create an stf-connectors.yaml file and adjust the values of CeilometerQdrEventsConfig , CeilometerQdrMetricsConfig and CollectdAmqpInstances to match the AMQP address prefix scheme. Note If you enabled container health and API status monitoring, you must also modify the CollectdSensubilityResultsChannel parameter. 
For more information, see Section 5.9, "Red Hat OpenStack Platform API status and containerized services health" . Prerequisites You have retrieved the CA certificate from the AMQ Interconnect deployed by STF. For more information, see Section 4.1.1, "Getting CA certificate from Service Telemetry Framework for overcloud configuration" . You have created your list of clouds objects. For more information about creating the content for the clouds parameter, see the clouds configuration parameter . You have retrieved the AMQ Interconnect route address. For more information, see Section 4.1.2, "Retrieving the AMQ Interconnect route address" . You have created the base configuration for STF. For more information, see Section 4.1.3, "Creating the base configuration for STF" . You have created a unique domain name environment file. For more information, see Section 4.3.4, "Setting a unique cloud domain" . Procedure Log in to the undercloud host as the stack user. Create a configuration file called stf-connectors.yaml in the /home/stack directory. In the stf-connectors.yaml file, configure the MetricsQdrConnectors address to connect to the AMQ Interconnect on the overcloud deployment. Configure the CeilometerQdrEventsConfig , CeilometerQdrMetricsConfig , CollectdAmqpInstances , and CollectdSensubilityResultsChannel topic values to match the AMQP address that you want for this cloud deployment. stf-connectors.yaml resource_registry: OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-puppet.yaml parameter_defaults: MetricsQdrConnectors: - host: default-interconnect-5671-service-telemetry.apps.infra.watch port: 443 role: edge verifyHostname: false sslProfile: sslProfile MetricsQdrSSLProfiles: - name: sslProfile caCertFileContent: | -----BEGIN CERTIFICATE----- <snip> -----END CERTIFICATE----- CeilometerQdrEventsConfig: driver: amqp topic: cloud1-event CeilometerQdrMetricsConfig: driver: amqp topic: cloud1-metering CollectdAmqpInstances: cloud1-notify: notify: true format: JSON presettle: false cloud1-telemetry: format: JSON presettle: false CollectdSensubilityResultsChannel: sensubility/cloud1-telemetry The resource_registry configuration directly loads the collectd service because you do not include the collectd-write-qdr.yaml environment file for multiple cloud deployments. Replace the host parameter with the value that you retrieved in Section 4.1.2, "Retrieving the AMQ Interconnect route address" . Replace the caCertFileContent parameter with the contents retrieved in Section 4.1.1, "Getting CA certificate from Service Telemetry Framework for overcloud configuration" . Replace the host sub-parameter of MetricsQdrConnectors with the value that you retrieved in Section 4.1.2, "Retrieving the AMQ Interconnect route address" . Set the topic value of CeilometerQdrEventsConfig to define the topic for Ceilometer events. The value is a unique topic identifier for the cloud such as cloud1-event . Set the topic value of CeilometerQdrMetricsConfig to define the topic for Ceilometer metrics. The value is a unique topic identifier for the cloud such as cloud1-metering . Set the CollectdAmqpInstances sub-parameter to define the topic for collectd events. The section name is a unique topic identifier for the cloud such as cloud1-notify . Set the CollectdAmqpInstances sub-parameter to define the topic for collectd metrics. The section name is a unique topic identifier for the cloud such as cloud1-telemetry . 
Set CollectdSensubilityResultsChannel to define the topic for collectd-sensubility events. The value is a unique topic identifier for the cloud such as sensubility/cloud1-telemetry . Note When you define the topics for collectd and Ceilometer, the value you provide is transposed into the full topic that the Smart Gateway client uses to listen for messages. Ceilometer topic values are transposed into the topic address anycast/ceilometer/<TOPIC>.sample and collectd topic values are transposed into the topic address collectd/<TOPIC> . The value for sensubility is the full topic path and has no transposition from topic value to topic address. For an example of a cloud configuration in the ServiceTelemetry object referring to the full topic address, see the section called "The clouds parameter" . Ensure that the naming convention in the stf-connectors.yaml file aligns with the spec.bridge.amqpUrl field in the Smart Gateway configuration. For example, configure the CeilometerQdrEventsConfig.topic field to a value of cloud1-event . Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: USD source stackrc Include the stf-connectors.yaml file and unique domain name environment file hostnames.yaml in the openstack overcloud deployment command, with any other environment files relevant to your environment: Warning If you use the collectd-write-qdr.yaml file with a custom CollectdAmqpInstances parameter, data publishes to the custom and default topics. In a multiple cloud environment, the configuration of the resource_registry parameter in the stf-connectors.yaml file loads the collectd service. (undercloud)USD openstack overcloud deploy --templates \ -e [your environment files] \ -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/ceilometer-write-qdr.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/qdr-edge-only.yaml \ -e /home/stack/hostnames.yaml \ -e /home/stack/enable-stf.yaml \ -e /home/stack/stf-connectors.yaml Deploy the Red Hat OpenStack Platform overcloud. 4.3.5.1. Ansible-based deployment of Service Telemetry Framework Warning The content for this feature is available in this release as a Documentation Preview , and therefore is not fully verified by Red Hat. Use it only for testing, and do not use in a production environment. As of Red Hat OpenStack Platform 17.0, you can preview the use of Ansible instead of Puppet for deploying Service Telemetry Framework (STF) components. 
The use of Ansible has the following advantages: Consolidation of configuration under a single service-specific THT variable (MetricsQdrVars and CollectdVars) The ability to switch QDR modes from mesh-mode to edge-only and back Fewer technologies used in the deployment stack, resulting in a simpler debug process To use the Ansible-based deployment, substitute the word "ansible" in place of "puppet" in the resource_registry section of your stf-connectors.yaml file: OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-ansible.yaml OS::TripleO::Services::MetricsQdr: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/qdr-container-ansible.yaml To set the configuration, use the new service-specific THT variables, as shown in the following example: parameter_defaults: MetricsQdrVars: tripleo_metrics_qdr_deployment_mode: edge-only CollectdVars: tripleo_collectd_amqp_host: stf.mycluster.example.com The full list of supported configuration parameters can be found in the deployment files referenced above. https://github.com/openstack/tripleo-heat-templates/blob/stable/wallaby/deployment/metrics/qdr-container-ansible.yaml#L172 https://github.com/openstack/tripleo-heat-templates/blob/stable/wallaby/deployment/metrics/collectd-container-ansible.yaml#L307 Additional resources For information about how to validate the deployment, see Section 4.1.6, "Validating client-side installation" . 4.3.6. Querying metrics data from multiple clouds Data stored in Prometheus has a service label according to the Smart Gateway it was scraped from. You can use this label to query data from a specific cloud. To query data from a specific cloud, use a Prometheus promql query that matches the associated service label; for example: collectd_uptime{service="default-cloud1-coll-meter"} . | [
"oc get secrets",
"oc get secret/default-interconnect-selfsigned -o jsonpath='{.data.ca\\.crt}' | base64 -d",
"oc project service-telemetry",
"oc get routes -ogo-template='{{ range .items }}{{printf \"%s\\n\" .spec.host }}{{ end }}' | grep \"\\-5671\" default-interconnect-5671-service-telemetry.apps.infra.watch",
"parameter_defaults: # only send to STF, not other publishers EventPipelinePublishers: [] PipelinePublishers: [] # manage the polling and pipeline configuration files for Ceilometer agents ManagePolling: true ManagePipeline: true # enable Ceilometer metrics and events CeilometerQdrPublishMetrics: true CeilometerQdrPublishEvents: true # enable collection of API status CollectdEnableSensubility: true CollectdSensubilityTransport: amqp1 # enable collection of containerized service metrics CollectdEnableLibpodstats: true # set collectd overrides for higher telemetry resolution and extra plugins # to load CollectdConnectionType: amqp1 CollectdAmqpInterval: 5 CollectdDefaultPollingInterval: 5 CollectdExtraPlugins: - vmem # set standard prefixes for where metrics and events are published to QDR MetricsQdrAddresses: - prefix: 'collectd' distribution: multicast - prefix: 'anycast/ceilometer' distribution: multicast ExtraConfig: ceilometer::agent::polling::polling_interval: 30 ceilometer::agent::polling::polling_meters: - cpu - disk.* - ip.* - image.* - memory - memory.* - network.services.vpn.* - network.services.firewall.* - perf.* - port - port.* - switch - switch.* - storage.* - volume.* # to avoid filling the memory buffers if disconnected from the message bus # note: this may need an adjustment if there are many metrics to be sent. collectd::plugin::amqp1::send_queue_limit: 5000 # receive extra information about virtual memory collectd::plugin::vmem::verbose: true # provide name and uuid in addition to hostname for better correlation # to ceilometer data collectd::plugin::virt::hostname_format: \"name uuid hostname\" # provide the human-friendly name of the virtual instance collectd::plugin::virt::plugin_instance_format: metadata # set memcached collectd plugin to report its metrics by hostname # rather than host IP, ensuring metrics in the dashboard remain uniform collectd::plugin::memcached::instances: local: host: \"%{hiera('fqdn_canonical')}\" port: 11211",
"resource_registry: OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-puppet.yaml parameter_defaults: MetricsQdrConnectors: - host: default-interconnect-5671-service-telemetry.apps.infra.watch port: 443 role: edge verifyHostname: false sslProfile: sslProfile MetricsQdrSSLProfiles: - name: sslProfile caCertFileContent: | -----BEGIN CERTIFICATE----- <snip> -----END CERTIFICATE----- CeilometerQdrEventsConfig: driver: amqp topic: cloud1-event CeilometerQdrMetricsConfig: driver: amqp topic: cloud1-metering CollectdAmqpInstances: cloud1-notify: notify: true format: JSON presettle: false cloud1-telemetry: format: JSON presettle: false CollectdSensubilityResultsChannel: sensubility/cloud1-telemetry",
"source ~/stackrc",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/ceilometer-write-qdr.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/qdr-edge-only.yaml -e /home/stack/enable-stf.yaml -e /home/stack/stf-connectors.yaml",
"sudo podman container inspect --format '{{.State.Status}}' metrics_qdr collectd ceilometer_agent_notification ceilometer_agent_central running running running running",
"sudo podman container inspect --format '{{.State.Status}}' metrics_qdr collectd ceilometer_agent_compute",
"sudo podman exec -it metrics_qdr cat /etc/qpid-dispatch/qdrouterd.conf listener { host: 172.17.1.44 port: 5666 authenticatePeer: no saslMechanisms: ANONYMOUS }",
"sudo podman exec -it metrics_qdr qdstat --bus=172.17.1.44:5666 --connections Connections id host container role dir security authentication tenant ============================================================================================================================================================================================================================================================================================ 1 default-interconnect-5671-service-telemetry.apps.infra.watch:443 default-interconnect-7458fd4d69-bgzfb edge out TLSv1.2(DHE-RSA-AES256-GCM-SHA384) anonymous-user 12 172.17.1.44:60290 openstack.org/om/container/controller-0/ceilometer-agent-notification/25/5c02cee550f143ec9ea030db5cccba14 normal in no-security no-auth 16 172.17.1.44:36408 metrics normal in no-security anonymous-user 899 172.17.1.44:39500 10a2e99d-1b8a-4329-b48c-4335e5f75c84 normal in no-security no-auth",
"sudo podman exec -it metrics_qdr qdstat --bus=172.17.1.44:5666 --links Router Links type dir conn id id peer class addr phs cap pri undel unsett deliv presett psdrop acc rej rel mod delay rate =========================================================================================================================================================== endpoint out 1 5 local _edge 250 0 0 0 2979926 0 0 0 0 2979926 0 0 0 endpoint in 1 6 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 1 7 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint out 1 8 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 1 9 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint out 1 10 250 0 0 0 911 911 0 0 0 0 0 911 0 endpoint in 1 11 250 0 0 0 0 911 0 0 0 0 0 0 0 endpoint out 12 32 local temp.lSY6Mcicol4J2Kp 250 0 0 0 0 0 0 0 0 0 0 0 0 endpoint in 16 41 250 0 0 0 2979924 0 0 0 0 2979924 0 0 0 endpoint in 912 1834 mobile USDmanagement 0 250 0 0 0 1 0 0 1 0 0 0 0 0 endpoint out 912 1835 local temp.9Ok2resI9tmt+CT 250 0 0 0 0 0 0 0 0 0 0 0 0",
"oc get pods -l application=default-interconnect NAME READY STATUS RESTARTS AGE default-interconnect-7458fd4d69-bgzfb 1/1 Running 0 6d21h",
"oc exec -it default-interconnect-7458fd4d69-bgzfb -- qdstat --connections 2020-04-21 18:25:47.243852 UTC default-interconnect-7458fd4d69-bgzfb Connections id host container role dir security authentication tenant last dlv uptime =============================================================================================================================================================================================== 5 10.129.0.110:48498 bridge-3f5 edge in no-security anonymous-user 000:00:00:02 000:17:36:29 6 10.129.0.111:43254 rcv[default-cloud1-ceil-meter-smartgateway-58f885c76d-xmxwn] edge in no-security anonymous-user 000:00:00:02 000:17:36:20 7 10.130.0.109:50518 rcv[default-cloud1-coll-event-smartgateway-58fbbd4485-rl9bd] normal in no-security anonymous-user - 000:17:36:11 8 10.130.0.110:33802 rcv[default-cloud1-ceil-event-smartgateway-6cfb65478c-g5q82] normal in no-security anonymous-user 000:01:26:18 000:17:36:05 22 10.128.0.1:51948 Router.ceph-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:03 000:22:08:43 23 10.128.0.1:51950 Router.compute-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:03 000:22:08:43 24 10.128.0.1:52082 Router.controller-0.redhat.local edge in TLSv1/SSLv3(DHE-RSA-AES256-GCM-SHA384) anonymous-user 000:00:00:00 000:22:08:34 27 127.0.0.1:42202 c2f541c1-4c97-4b37-a189-a396c08fb079 normal in no-security no-auth 000:00:00:00 000:00:00:00",
"oc exec -it default-interconnect-7458fd4d69-bgzfb -- qdstat --address 2020-04-21 18:20:10.293258 UTC default-interconnect-7458fd4d69-bgzfb Router Addresses class addr phs distrib pri local remote in out thru fallback ========================================================================================================================== mobile anycast/ceilometer/event.sample 0 balanced - 1 0 970 970 0 0 mobile anycast/ceilometer/metering.sample 0 balanced - 1 0 2,344,833 2,344,833 0 0 mobile collectd/notify 0 multicast - 1 0 70 70 0 0 mobile collectd/telemetry 0 multicast - 1 0 216,128,890 216,128,890 0 0",
"source ~/stackrc",
"cat > ~/disable-stf.yaml <<EOF --- resource_registry: OS::TripleO::Services::CeilometerAgentCentral: OS::Heat::None OS::TripleO::Services::CeilometerAgentNotification: OS::Heat::None OS::TripleO::Services::CeilometerAgentIpmi: OS::Heat::None OS::TripleO::Services::ComputeCeilometerAgent: OS::Heat::None OS::TripleO::Services::Redis: OS::Heat::None OS::TripleO::Services::Collectd: OS::Heat::None OS::TripleO::Services::MetricsQdr: OS::Heat::None EOF",
"(undercloud)USD openstack overcloud deploy --templates -e /home/stack/disable-stf.yaml -e [your environment files]",
"oc project service-telemetry",
"oc edit stf default",
"apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: spec: clouds: - name: cloud1 events: collectors: - collectorType: collectd subscriptionAddress: collectd/cloud1-notify - collectorType: ceilometer subscriptionAddress: anycast/ceilometer/cloud1-event.sample metrics: collectors: - collectorType: collectd subscriptionAddress: collectd/cloud1-telemetry - collectorType: sensubility subscriptionAddress: sensubility/cloud1-telemetry - collectorType: ceilometer subscriptionAddress: anycast/ceilometer/cloud1-metering.sample - name: cloud2 events:",
"oc get po -l app=smart-gateway NAME READY STATUS RESTARTS AGE default-cloud1-ceil-event-smartgateway-6cfb65478c-g5q82 2/2 Running 0 13h default-cloud1-ceil-meter-smartgateway-58f885c76d-xmxwn 2/2 Running 0 13h default-cloud1-coll-event-smartgateway-58fbbd4485-rl9bd 2/2 Running 0 13h default-cloud1-coll-meter-smartgateway-7c6fc495c4-jn728 2/2 Running 0 13h default-cloud1-sens-meter-smartgateway-8h4tc445a2-mm683 2/2 Running 0 13h",
"apiVersion: infra.watch/v1beta1 kind: ServiceTelemetry metadata: spec: cloudsRemoveOnMissing: true clouds:",
"oc get smartgateways",
"parameter_defaults: CloudDomain: newyork-west-04 CephStorageHostnameFormat: 'ceph-%index%' ObjectStorageHostnameFormat: 'swift-%index%' ComputeHostnameFormat: 'compute-%index%'",
"resource_registry: OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-puppet.yaml parameter_defaults: MetricsQdrConnectors: - host: default-interconnect-5671-service-telemetry.apps.infra.watch port: 443 role: edge verifyHostname: false sslProfile: sslProfile MetricsQdrSSLProfiles: - name: sslProfile caCertFileContent: | -----BEGIN CERTIFICATE----- <snip> -----END CERTIFICATE----- CeilometerQdrEventsConfig: driver: amqp topic: cloud1-event CeilometerQdrMetricsConfig: driver: amqp topic: cloud1-metering CollectdAmqpInstances: cloud1-notify: notify: true format: JSON presettle: false cloud1-telemetry: format: JSON presettle: false CollectdSensubilityResultsChannel: sensubility/cloud1-telemetry",
"source stackrc",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/ceilometer-write-qdr.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/metrics/qdr-edge-only.yaml -e /home/stack/hostnames.yaml -e /home/stack/enable-stf.yaml -e /home/stack/stf-connectors.yaml",
"OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-ansible.yaml OS::TripleO::Services::MetricsQdr: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/qdr-container-ansible.yaml",
"parameter_defaults: MetricsQdrVars: tripleo_metrics_qdr_deployment_mode: edge-only CollectdVars: tripleo_collectd_amqp_host: stf.mycluster.example.com"
]
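As a follow-on to Section 4.3.6, the per-cloud service label can also be queried from the command line against the Prometheus HTTP API. The following is a minimal, unofficial sketch; the route lookup, the use of your OpenShift session token, and the assumption that the STF Prometheus route name contains the string prometheus are environment-specific assumptions rather than part of the documented procedure:
# look up the Prometheus route exposed by STF; the grep assumes the route name
# contains "prometheus", which may differ in your deployment
PROM_HOST=$(oc -n service-telemetry get routes -o jsonpath='{.items[*].spec.host}' | tr ' ' '\n' | grep prometheus | head -1)
TOKEN=$(oc whoami -t)
# compare the same metric across two clouds by switching the service label
curl -k -H "Authorization: Bearer ${TOKEN}" "https://${PROM_HOST}/api/v1/query" \
  --data-urlencode 'query=collectd_uptime{service="default-cloud1-coll-meter"}'
curl -k -H "Authorization: Bearer ${TOKEN}" "https://${PROM_HOST}/api/v1/query" \
  --data-urlencode 'query=collectd_uptime{service="default-cloud2-coll-meter"}'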
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/service_telemetry_framework_1.5/assembly-completing-the-stf-configuration_assembly |
Chapter 6. Maintenance procedures | Chapter 6. Maintenance procedures The following sections describe the recommended procedures to perform maintenance on HA cluster setups used for managing HANA Scale-Out System Replication. You must use these procedures independently of each other. Note It is not necessary to put the cluster in maintenance-mode when using these procedures. For more information, refer to When to use "maintenance-mode" in RHEL High Availability Add-on for pacemaker based cluster? . 6.1. Updating the OS and HA cluster components Please refer to Recommended Practices for Applying Software Updates to a RHEL High Availability or Resilient Storage Cluster , for more information. 6.2. Updating the SAP HANA instances Procedure If the HA cluster configuration described in this document manages the SAP HANA System Replication setup, you must perform some additional steps before and after the actual update of the SAP HANA instances. Execute the following steps (a consolidated sketch of the full sequence follows the command list at the end of this chapter): Put the SAPHana resource in unmanaged mode: Update the SAP HANA instances using the procedure that SAP provides. When the update of the SAP HANA instances has been completed and you have verified that SAP HANA System Replication is working again, refresh the status of the SAPHana resource to make sure the cluster is aware of the current state of the SAP HANA System Replication setup: When the HA cluster has correctly picked up the current status of the SAP HANA System Replication setup, put the SAPHana resource back into managed mode so that the HA cluster can react to any issues in the SAP HANA System Replication setup again: 6.3. Moving SAPHana resource to another node (SAP HANA System Replication takeover by HA cluster) manually Move the promotable clone resource to trigger a manual takeover of SAP HANA System Replication: Note pcs-0.10.8-1.el8 or later is required for this command to work correctly. For more information, refer to The pcs resource move command fails for a promotable clone unless "--master" is specified . With each pcs resource move command invocation, the HA cluster creates a location constraint to cause the resource to move. For more information, refer to Is there a way to manage constraints when running pcs resource move? . After you have verified that the SAP HANA System Replication takeover has been completed, remove this constraint to allow the HA cluster to manage the former primary SAP HANA instance again. To remove the constraint created by pcs resource move , use the following command: Note What happens to the former SAP HANA primary instance after the takeover has been completed and the constraint has been removed depends on the setting of the AUTOMATED_REGISTER parameter of the SAPHana resource: If AUTOMATED_REGISTER=true , then the former SAP HANA primary instance is registered as the new secondary, and SAP HANA System Replication becomes active again. If AUTOMATED_REGISTER=false , then it is up to the operator to decide what should happen with the former SAP HANA primary instance after the takeover. | [
"pcs resource unmanage SAPHana_RH1_HDB10-clone",
"pcs resource refresh SAPHana_RH1_HDB10-clone",
"pcs resource manage SAPHana_RH1_HDB10-clone",
"pcs resource move SAPHana_RH1_HDB10-clone",
"pcs resource clear SAPHana_RH1_HDB10-clone"
]
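The pcs commands above are listed individually; the following is a minimal, unofficial sketch of the update flow from Section 6.2 as it might be run on one cluster node. The SID RH1, instance number 10, and the rh1adm user are taken from the resource name used in the examples, and the replication status check is an illustration to adapt to your environment rather than part of the official procedure:
pcs resource unmanage SAPHana_RH1_HDB10-clone
# update the SAP HANA instances on all nodes using the procedure that SAP provides
# verify on the primary site that system replication is active again before
# handing control back to the cluster (an exit code of 15 typically means ACTIVE)
su - rh1adm -c 'python /usr/sap/RH1/HDB10/exe/python_support/systemReplicationStatus.py; echo RC=$?'
pcs resource refresh SAPHana_RH1_HDB10-clone
pcs resource manage SAPHana_RH1_HDB10-clone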
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/automating_sap_hana_scale-out_system_replication_using_the_rhel_ha_add-on/asmb_mproc_automating-sap-hana-scale-out-v9 |
Chapter 8. Installing a cluster on VMC in a restricted network with user-provisioned infrastructure | Chapter 8. Installing a cluster on VMC in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.12, you can install a cluster on VMware vSphere infrastructure that you provision in a restricted network by deploying it to VMware Cloud (VMC) on AWS . Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automate the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 8.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Configure the following firewall rules: An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any other Linux-based host; it must have internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. 
The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 8.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 8.2. vSphere prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtain the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You provisioned block registry storage . For more information on persistent storage, see Understanding persistent storage . If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 8.3. About installations in restricted networks In OpenShift Container Platform 4.12, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 
Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 8.3.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 8.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.5. VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 8.1. Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Important Installing a cluster on VMware vSphere versions 7.0 and 7.0 Update 1 is deprecated. These versions are still fully supported, but all vSphere 6.x versions are no longer supported. Version 4.12 of OpenShift Container Platform requires VMware virtual hardware version 15 or later. To update the hardware version for your vSphere virtual machines, see the "Updating hardware on nodes running in vSphere" article in the Updating clusters section. Table 8.2. 
Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7.0 Update 2 or later; 8.0 Update 1 or later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 8.6. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. Additional resources To remove a third-party CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 8.7. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 8.7.1. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that you provided, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, your vSphere account must include privileges for reading and creating the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. Example 8.1. 
Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 8.2. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. 
"vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 8.3. Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion, where generally implies that you meet all VMware best practices for vMotion. To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. 
For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . Using Storage vMotion can cause issues and is not supported. If you are using vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses infrastructure that you provided, you must create the following resources in your vCenter instance: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Required IP Addresses DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 8.3. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. 
A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Additional resources Creating a compute machine set on vSphere 8.7.2. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 8.4. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 8.7.3. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.5. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 8.7.4. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. 
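In practice, this means that you periodically list the CSRs after the cluster machines join and approve the pending client and serving requests yourself. The following is a hedged sketch that uses the standard oc client; the CSR name shown is illustrative:
# List all CSRs and note the entries in Pending state
$ oc get csr
# Approve an individual pending CSR; repeat for each Pending entry
$ oc adm certificate approve csr-abc12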
The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 8.7.5. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 8.7.5.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 8.7.5.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 8.6. 
Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 8.7. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 8.8. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Ethernet adaptor hardware address requirements When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges: 00:05:69:00:00:00 to 00:05:69:FF:FF:FF 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF 00:50:56:00:00:00 to 00:50:56:FF:FF:FF If a MAC address outside the VMware OUI is used, the cluster installation will not succeed. NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. 8.7.6. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 8.9. 
Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 8.7.6.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 8.4. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. 
IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 8.5. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 8.7.7. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. 
Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 8.10. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 8.11. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 8.7.7.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. 
In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 8.6. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 8.8. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. 
This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components.
From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 8.9. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. 
Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 8.10. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added.
SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 8.11. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important VMware vSphere region and zone enablement is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. 
Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to its respective datacenter or cluster. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Table 8.12. Example of a configuration with multiple vSphere datacenters that run in a single VMware vCenter Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b 8.12. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain the imageContentSources section from the output of the command to mirror the repository. Obtain the contents of the certificate for your mirror registry. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Unless you use a registry that RHCOS trusts by default, such as docker.io , you must provide the contents of the certificate for your mirror repository in the additionalTrustBundle section. In most cases, you must provide the certificate for your mirror. You must include the imageContentSources section from the output of the command to mirror the repository. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
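If you are assembling the pullSecret value for your mirror registry by hand, note that the auth field in the sample file that follows is the base64-encoded <user>:<password> pair for the registry. A hedged sketch on a Linux machine, with placeholder credentials:
# Encode the mirror registry user name and password for the pull secret (values are placeholders)
$ echo -n '<registry_user>:<registry_password>' | base64 -w0
Use the resulting string as the auth value for your <local_registry> entry in the pull secret.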
8.12.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 12 resourcePool: "/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>" 13 diskType: thin 14 fips: false 15 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2> 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, ( - ), and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 5 The number of control plane machines that you add to the cluster. Because the cluster uses this value as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 The fully-qualified hostname or IP address of the vCenter server. Important The Cluster Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. 8 The name of the user for accessing the server. 9 The password associated with the vSphere user. 10 The vSphere datacenter. 11 The default vSphere datastore to use. 12 Optional parameter: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file. 13 Optional parameter: For installer-provisioned infrastructure, the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name> . If you do not provide this value, resources are installed in the root resource pool of the cluster. If you are providing the infrastructure for the cluster, omit this parameter. 14 The vSphere disk provisioning method. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled.
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 16 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 18 Provide the contents of the certificate file that you used for your mirror registry. 19 Provide the imageContentSources section from the output of the command to mirror the repository. 8.12.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.12.3. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file to deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Important VMware vSphere region and zone enablement is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important The example uses the govc command. The govc command is an open source command available from VMware. The govc command is not available from Red Hat. Red Hat Support does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. 
Note You cannot change a failure domain after you install an OpenShift Container Platform cluster on the VMware vSphere platform. You can add additional failure domains after cluster installation. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each vSphere datacenter (region) where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster (zone) where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1> Attach the zone tags to each vCenter cluster object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1 A consolidated tagging example that uses the values from Table 8.12 follows this procedure. Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements.
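Putting the tagging steps together, the following hedged sketch shows the commands for a single region and zone that use the us-east names from Table 8.12. It assumes that the datacenter object is literally named us-east and the cluster object is named us-east-1; adjust the tag names and inventory paths to match your environment:
# Create the required tag categories (the names must be exactly openshift-region and openshift-zone)
$ govc tags.category.create -d "OpenShift region" openshift-region
$ govc tags.category.create -d "OpenShift zone" openshift-zone
# Create one region tag per datacenter and one zone tag per cluster (values are illustrative)
$ govc tags.create -c openshift-region us-east
$ govc tags.create -c openshift-zone us-east-1a
# Attach the region tag to the datacenter object and the zone tag to the cluster object
$ govc tags.attach -c openshift-region us-east /us-east
$ govc tags.attach -c openshift-zone us-east-1a /us-east/host/us-east-1
The region and zone tag names that you create here are the values that you later reference in the failureDomains section of the install-config.yaml file.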
Sample install-config.yaml file with multiple datacenters defined in a vSphere vCenter apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" controlPlane: name: master replicas: 3 vsphere: zones: 3 - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 9 cluster: cluster 10 resourcePool: "/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: "/<datacenter1>/host/<cluster1>" 18 resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" 19 networks: 20 - <VM_Network1_name> datastore: "/<datacenter1>/datastore/<datastore1>" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: "/<datacenter2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<datacenter2>/datastore/<datastore2>" resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<datacenter2>/vm/<folder2>" # ... 1 You must set TechPreviewNoUpgrade as the value for this parameter, so that you can use the VMware vSphere region and zone enablement feature. 2 3 An optional parameter for specifying a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. If you do not define this parameter, nodes will be distributed among all defined failure domains. 4 5 6 7 8 9 10 11 The default vCenter topology. The installation program uses this topology information to deploy the bootstrap node. Additionally, the topology defines the default datastore for vSphere persistent volumes. 12 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. If you do not define this parameter, the installation program uses the default vCenter topology. 13 Defines the name of the failure domain. Each failure domain is referenced in the zones parameter to scope a machine pool to the failure domain. 14 You define a region by using a tag from the openshift-region tag category. The tag must be attached to the vCenter datacenter. 15 You define a zone by using a tag from the openshift-zone tag category. The tag must be attached to the vCenter cluster. 16 Specifies the vCenter resources associated with the failure domain. 17 An optional parameter for defining the vSphere datacenter that is associated with a failure domain. If you do not define this parameter, the installation program uses the default vCenter topology. 18 An optional parameter for specifying the absolute file path for the compute cluster that is associated with the failure domain. If you do not define this parameter, the installation program uses the default vCenter topology. 19 An optional parameter for the installer-provisioned infrastructure. The parameter sets the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, resources are installed in the root of the cluster /example_datacenter/host/example_cluster/Resources . 20 An optional parameter that lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. If you do not define this parameter, the installation program uses the default vCenter topology. 21 An optional parameter for specifying a datastore to use for provisioning volumes. If you do not define this parameter, the installation program uses the default vCenter topology. 8.13. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed.
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the compute machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 8.14. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware Cloud on AWS. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 8.15. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on user-provisioned infrastructure on VMware vSphere, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on vSphere hosts. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. 
If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . 
On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that compute machine sets can apply configurations to. Optional: Update the configured virtual hardware version in the VM template, if necessary. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Important It is recommended that you update the hardware version of the VM template to version 15 before creating VMs from it, if necessary. Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. If your imported template defaults to hardware version 13, you must ensure that your ESXi host is on 6.7U3 or later before upgrading the VM template to hardware version 15. If your vSphere version is less than 6.7U3, you can skip this upgrade step; however, a future version of OpenShift Container Platform is scheduled to remove support for hardware version 13 and vSphere versions less than 6.7U3. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select clone options tab, select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced Parameters . Important The following configuration suggestions are for example purposes only. As a cluster administrator, you must configure resources according to the resource demands placed on your cluster. To best manage cluster resources, consider creating a resource pool from the cluster's root resource pool. Optional: Override default DHCP networking in vSphere. 
To enable static IP networking: Set your static IP configuration: Example command USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before you boot a VM from an OVA in vSphere: Example command USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Add the following configuration parameter names and values by specifying data in the Attribute and Values fields. Ensure that you select the Add button for each parameter that you create. guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . stealclock.enable : If this parameter was not defined, add it and specify TRUE . Create a child resource pool from the cluster's root resource pool. Perform resource allocation in this child resource pool. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the remaining configuration steps. On clicking the Finish button, you have completed the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power Power On . Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied steps Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 8.16. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. After your vSphere template deploys in your OpenShift Container Platform cluster, you can deploy a virtual machine (VM) for a machine in that cluster. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. On the Select storage tab, select storage for your configuration and disk files. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . 
Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. If many networks exist, select Add New Device > Network Adapter , and then enter your network information in the fields provided by the New Network menu item. Complete the remaining configuration steps. On clicking the Finish button, you have completed the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power Power On . steps Continue to create more compute machines for your cluster. 8.17. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. 
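Which of these paths grows fastest depends on the workload, so it can help to measure before you decide what to split out. The following check is not part of the official procedure; it is a rough sketch that assumes you have cluster-admin access to an existing, comparable cluster, and the node name is a placeholder:

# Summarize the heaviest growth areas under /var on an existing node.
# /var/lib/etcd is present only on control plane nodes; du reports an error for missing paths and continues.
$ oc debug node/<node_name> -- chroot /host \
    du -xsh /var/lib/containers /var/lib/etcd /var/log

The -x flag keeps du on a single filesystem, and the summarized sizes give a quick feel for whether /var as a whole, or only one of its subdirectories, is worth separating.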
Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 8.18. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. 
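The procedure below blocks on the installer itself. If you want to see what is actually happening while you wait, you can also follow the bootstrap services over SSH from the machine that holds the private key matching the sshKey value in your install-config.yaml file. The host name below is a placeholder for your bootstrap machine, and the unit names are the ones used on recent RHCOS bootstrap images:

# Follow the bootstrap services; both units run only on the bootstrap machine.
$ ssh core@<bootstrap_machine_fqdn> \
    journalctl -b -f -u release-image.service -u bootkube.service

The release-image.service unit logs the extraction of the release payload, and bootkube.service logs the temporary control plane bring-up, so stalls are usually visible here well before the installer times out.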
Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 8.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 8.20. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. 
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 8.21. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. 
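If you are scripting this stage rather than watching a table, you can also block until every cluster Operator reports the Available condition. The following one-liner is a sketch, not part of the official procedure, and the timeout value is arbitrary:

# Wait until all ClusterOperator resources report Available=True.
$ oc wait clusteroperators --all --for=condition=Available --timeout=30m

The command exits successfully once every ClusterOperator reports Available as True, which is roughly the same signal as the watch command shown in the procedure that follows; it does not check the Progressing or Degraded conditions.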
Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Configure the Operators that are not available. 8.21.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 8.21.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 8.21.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. 
Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resourses found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 8.21.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 8.21.2.3. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. 
Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring registry storage for VMware vSphere . 8.22. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 8.23. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 8.24. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 8.25. steps Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 0 3 controlPlane: 4 name: master replicas: 3 5 metadata: name: test 6 platform: vsphere: vcenter: your.vcenter.server 7 username: username 8 password: password 9 datacenter: datacenter 10 defaultDatastore: datastore 11 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 12 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 13 diskType: thin 14 fips: false 15 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1",
"apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" controlPlane: name: master replicas: 3 vsphere: zones: 3 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 9 cluster: cluster 10 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: \"/<datacenter1>/host/<cluster1>\" 18 resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" 19 networks: 20 - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\"",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_vmc/installing-restricted-networks-vmc-user-infra |
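When many machines join at once, the CSR approvals shown above usually have to be repeated, because the serving-certificate CSRs only appear after the client CSRs have been approved. The loop below is a hedged convenience sketch, not part of the documented procedure: the retry count and sleep interval are arbitrary assumptions, and it simply reuses the batch-approval pipeline from the steps above.
for attempt in 1 2 3 4 5 6; do
  # approve whatever is currently Pending (same pipeline as in the procedure above)
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 30
done
oc get nodes   # confirm that all workers eventually report Ready
Run this from a host with the cluster kubeconfig exported; stop the loop early once all nodes are Ready.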
A.2. Strategies for Disk Repartitioning | A.2. Strategies for Disk Repartitioning There are several different ways that a disk can be repartitioned. This section discusses the following possible approaches: Unpartitioned free space is available An unused partition is available Free space in an actively used partition is available Note that this section discusses the aforementioned concepts only theoretically and it does not include any procedures showing how to perform disk repartitioning step-by-step. Such detailed information is beyond the scope of this document. Note Keep in mind that the following illustrations are simplified in the interest of clarity and do not reflect the exact partition layout that you encounter when actually installing Red Hat Enterprise Linux. A.2.1. Using Unpartitioned Free Space In this situation, the partitions already defined do not span the entire hard disk, leaving unallocated space that is not part of any defined partition. The following diagram shows what this might look like: Figure A.8. Disk Drive with Unpartitioned Free Space In the above example, 1 represents an undefined partition with unallocated space and 2 represents a defined partition with allocated space. An unused hard disk also falls into this category. The only difference is that none of the space is part of any defined partition. In any case, you can create the necessary partitions from the unused space. Unfortunately, this scenario, although very simple, is not very likely (unless you have just purchased a new disk specifically for Red Hat Enterprise Linux). Most pre-installed operating systems are configured to take up all available space on a disk drive (see Section A.2.3, "Using Free Space from an Active Partition"). A.2.2. Using Space from an Unused Partition In this case, you might have one or more partitions that you no longer use. The following diagram illustrates such a situation. Figure A.9. Disk Drive with an Unused Partition In the above example, 1 represents an unused partition and 2 represents reallocating an unused partition for Linux. In this situation, you can use the space allocated to the unused partition. You must first delete the partition and then create the appropriate Linux partition(s) in its place. You can delete the unused partition and manually create new partitions during the installation process. A.2.3. Using Free Space from an Active Partition This is the most common situation. It is also, unfortunately, the hardest to handle. The main problem is that, even if you have enough free space, it is presently allocated to a partition that is already in use. If you purchased a computer with pre-installed software, the hard disk most likely has one massive partition holding the operating system and data. Aside from adding a new hard drive to your system, you have two choices: Destructive Repartitioning In this case, the single large partition is deleted and several smaller ones are created instead. Any data held in the original partition is destroyed. This means that making a complete backup is necessary. It is highly recommended to make two backups, use verification (if available in your backup software), and try to read data from the backup before deleting the partition. Warning If an operating system was installed on that partition, it must be reinstalled if you want to use that system as well. Be aware that some computers sold with pre-installed operating systems might not include the installation media to reinstall the original operating system.
You should check whether this applies to your system before you destroy your original partition and its operating system installation. After creating a smaller partition for your existing operating system, you can reinstall software, restore your data, and start your Red Hat Enterprise Linux installation. Figure A.10. Disk Drive Being Destructively Repartitioned In the above example, 1 represents before and 2 represents after. Warning Any data previously present in the original partition is lost. Non-Destructive Repartitioning With non-destructive repartitioning you execute a program that makes a big partition smaller without losing any of the files stored in that partition. This method is usually reliable, but can be very time-consuming on large drives. While the process of non-destructive repartitioning is rather straightforward, there are three steps involved: Compress and back up existing data Resize the existing partition Create new partition(s) Each step is described in more detail below. A.2.3.1. Compress Existing Data As the following figure shows, the first step is to compress the data in your existing partition. The reason for doing this is to rearrange the data such that it maximizes the available free space at the "end" of the partition. Figure A.11. Disk Drive Being Compressed In the above example, 1 represents before and 2 represents after. This step is crucial. Without it, the location of the data could prevent the partition from being resized to the extent desired. Note also that, for one reason or another, some data cannot be moved. If this is the case (and it severely restricts the size of your new partitions), you might be forced to destructively repartition your disk. A.2.3.2. Resize the Existing Partition Figure A.12, "Disk Drive with Partition Resized" shows the actual resizing process. While the actual result of the resizing operation varies depending on the software used, in most cases the newly freed space is used to create an unformatted partition of the same type as the original partition. Figure A.12. Disk Drive with Partition Resized In the above example, 1 represents before and 2 represents after. It is important to understand what the resizing software you use does with the newly freed space, so that you can take the appropriate steps. In the case illustrated here, it would be best to delete the new DOS partition and create the appropriate Linux partition(s). A.2.3.3. Create new partitions As the previous step implied, it might or might not be necessary to create new partitions. However, unless your resizing software supports systems with Linux installed, it is likely that you must delete the partition that was created during the resizing process. Figure A.13. Disk Drive with Final Partition Configuration In the above example, 1 represents before and 2 represents after. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-disk-partitions-making-room |
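Although the appendix above is deliberately conceptual, the three non-destructive steps can be sketched with standard Linux tools. The commands below are an illustration only, not a procedure from that guide: the device /dev/sdX, partition number 3, and the sizes are placeholders, the filesystem is assumed to be an unmounted ext4 volume, and a verified backup is still required before any resize.
umount /dev/sdX3
e2fsck -f /dev/sdX3                  # check the filesystem before any resize
resize2fs /dev/sdX3 40G              # shrink the filesystem first (about 40 GiB here)
parted /dev/sdX resizepart 3 41GiB   # then move the partition end; keep it slightly larger
                                     # than the filesystem and confirm parted's shrink prompt
parted /dev/sdX unit GiB print free  # verify the unallocated space that is now available
The newly freed space at the end of the disk can then be used to create the Linux partitions during installation, as described above.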
Chapter 1. Notification of name change to Streams for Apache Kafka | Chapter 1. Notification of name change to Streams for Apache Kafka AMQ Streams is being renamed as streams for Apache Kafka as part of a branding effort. This change aims to increase awareness among customers of Red Hat's product for Apache Kafka. During this transition period, you may encounter references to the old name, AMQ Streams. We are actively working to update our documentation, resources, and media to reflect the new name. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/release_notes_for_streams_for_apache_kafka_2.9_on_openshift/ref-name-change-str |
Chapter 12. Blocking and allowing applications by using fapolicyd | Chapter 12. Blocking and allowing applications by using fapolicyd Setting and enforcing a policy that either allows or denies application execution based on a rule set efficiently prevents the execution of unknown and potentially malicious software. 12.1. Introduction to fapolicyd The fapolicyd software framework controls the execution of applications based on a user-defined policy. This is one of the most efficient ways to prevent running untrusted and possibly malicious applications on the system. The fapolicyd framework provides the following components: fapolicyd service fapolicyd command-line utilities fapolicyd RPM plugin fapolicyd rule language fagenrules script The administrator can define the allow and deny execution rules for any application with the possibility of auditing based on a path, hash, MIME type, or trust. The fapolicyd framework introduces the concept of trust. An application is trusted when it is properly installed by the system package manager, and therefore it is registered in the system RPM database. The fapolicyd daemon uses the RPM database as a list of trusted binaries and scripts. The fapolicyd RPM plugin registers any system update that is handled by either the DNF package manager or the RPM Package Manager. The plugin notifies the fapolicyd daemon about changes in this database. Other ways of adding applications require the creation of custom rules and restarting the fapolicyd service. The fapolicyd service configuration is located in the /etc/fapolicyd/ directory with the following structure: The /etc/fapolicyd/fapolicyd.trust file contains a list of trusted files. You can also use multiple trust files in the /etc/fapolicyd/trust.d/ directory. The /etc/fapolicyd/rules.d/ directory for files containing allow and deny execution rules. The fagenrules script merges these component rules files to the /etc/fapolicyd/compiled.rules file. The fapolicyd.conf file contains the daemon's configuration options. This file is useful primarily for performance-tuning purposes. Rules in /etc/fapolicyd/rules.d/ are organized in several files, each representing a different policy goal. The numbers at the beginning of the corresponding file names determine the order in /etc/fapolicyd/compiled.rules : 10 Language rules. 20 Dracut-related Rules. 21 rules for updaters. 30 Patterns. 40 ELF rules. 41 Shared objects rules. 42 Trusted ELF rules. 70 Trusted language rules. 72 Shell rules. 90 Deny execute rules. 95 Allow open rules. You can use one of the following ways for fapolicyd integrity checking: File-size checking Comparing SHA-256 hashes Integrity Measurement Architecture (IMA) subsystem By default, fapolicyd does no integrity checking. Integrity checking based on the file size is fast, but an attacker can replace the content of the file and preserve its byte size. Computing and checking SHA-256 checksums is more secure, but it affects the performance of the system. The integrity = ima option in fapolicyd.conf requires support for files extended attributes (also known as xattr ) on all file systems containing executable files. Additional resources fapolicyd(8) , fapolicyd.rules(5) , fapolicyd.conf(5) , fapolicyd.trust(13) , fagenrules(8) , and fapolicyd-cli(1) man pages. The Enhancing security with the kernel integrity subsystem chapter in the Managing, monitoring, and updating the kernel document. 
The documentation installed with the fapolicyd package in the /usr/share/doc/fapolicyd/ directory and the /usr/share/fapolicyd/sample-rules/README-rules file. 12.2. Deploying fapolicyd When deploying the fapolicyd application allowlisting framework, you can either try your configuration in permissive mode first or directly enable the service in the default configuration. Procedure Install the fapolicyd package: Optional: To try your configuration first, change mode to permissive. Open the /etc/fapolicyd/fapolicyd.conf file in a text editor of your choice, for example: Change the value of the permissive option from 0 to 1 , save the file, and exit the editor: Alternatively, you can debug your configuration by using the fapolicyd --debug-deny --permissive command before you start the service. See the Troubleshooting problems related to fapolicyd section for more information. Enable and start the fapolicyd service: If you enabled permissive mode through /etc/fapolicyd/fapolicyd.conf : Set the Audit service for recording fapolicyd events: Use your applications. Check Audit logs for fanotify denials, for example: When debugged, disable permissive mode by changing the corresponding value back to permissive = 0 , and restart the service: Verification Verify that the fapolicyd service is running correctly: Log in as a user without root privileges, and check that fapolicyd is working, for example: 12.3. Marking files as trusted using an additional source of trust The fapolicyd framework trusts files contained in the RPM database. You can mark additional files as trusted by adding the corresponding entries to the /etc/fapolicyd/fapolicyd.trust plain-text file or the /etc/fapolicyd/trust.d/ directory, which supports separating a list of trusted files into more files. You can modify fapolicyd.trust or the files in /etc/fapolicyd/trust.d either directly using a text editor or through fapolicyd-cli commands. Note Marking files as trusted using fapolicyd.trust or trust.d/ is better than writing custom fapolicyd rules due to performance reasons. Prerequisites The fapolicyd framework is deployed on your system. Procedure Copy your custom binary to the required directory, for example: Mark your custom binary as trusted, and store the corresponding entry to the myapp file in /etc/fapolicyd/trust.d/ : If you skip the --trust-file option, then the command adds the corresponding line to /etc/fapolicyd/fapolicyd.trust . To mark all existing files in a directory as trusted, provide the directory path as an argument of the --file option, for example: fapolicyd-cli --file add /tmp/my_bin_dir/ --trust-file myapp . Update the fapolicyd database: Note Changing the content of a trusted file or directory changes their checksum, and therefore fapolicyd no longer considers them trusted. To make the new content trusted again, refresh the file trust database by using the fapolicyd-cli --file update command. If you do not provide any argument, the entire database refreshes. Alternatively, you can specify a path to a specific file or directory. Then, update the database by using fapolicyd-cli --update . Verification Check that your custom binary can be now executed, for example: Additional resources fapolicyd.trust(13) man page on your system 12.4. Adding custom allow and deny rules for fapolicyd The default set of rules in the fapolicyd package does not affect system functions. 
For custom scenarios, such as storing binaries and scripts in a non-standard directory or adding applications without the dnf or rpm installers, you must either mark additional files as trusted or add new custom rules. For basic scenarios, prefer Marking files as trusted using an additional source of trust . In more advanced scenarios such as allowing to execute a custom binary only for specific user and group identifiers, add new custom rules to the /etc/fapolicyd/rules.d/ directory. The following steps demonstrate adding a new rule to allow a custom binary. Prerequisites The fapolicyd framework is deployed on your system. Procedure Copy your custom binary to the required directory, for example: Stop the fapolicyd service: Use debug mode to identify a corresponding rule. Because the output of the fapolicyd --debug command is verbose and you can stop it only by pressing Ctrl + C or killing the corresponding process, redirect the error output to a file. In this case, you can limit the output only to access denials by using the --debug-deny option instead of --debug : Alternatively, you can run fapolicyd debug mode in another terminal. Repeat the command that fapolicyd denied: Stop debug mode by resuming it in the foreground and pressing Ctrl + C : Alternatively, kill the process of fapolicyd debug mode: Find a rule that denies the execution of your application: Locate the file that contains a rule that prevented the execution of your custom binary. In this case, the deny_audit perm=execute rule belongs to the 90-deny-execute.rules file: Add a new allow rule to the file that lexically precedes the rule file that contains the rule that denied the execution of your custom binary in the /etc/fapolicyd/rules.d/ directory: Insert the following rule to the 80-myapps.rules file: Alternatively, you can allow executions of all binaries in the /tmp directory by adding the following rule to the rule file in /etc/fapolicyd/rules.d/ : Important To make a rule effective recursively on all directories under the specified directory, add a trailing slash to the value of the dir= parameter in the rule ( /tmp/ in the example). To prevent changes in the content of your custom binary, define the required rule using an SHA-256 checksum: Change the rule to the following definition: Check that the list of compiled differs from the rule set in /etc/fapolicyd/rules.d/ , and update the list, which is stored in the /etc/fapolicyd/compiled.rules file: Check that your custom rule is in the list of fapolicyd rules before the rule that prevented the execution: Start the fapolicyd service: Verification Check that your custom binary can be now executed, for example: Additional resources fapolicyd.rules(5) and fapolicyd-cli(1) man pages on your system The documentation installed with the fapolicyd package in the /usr/share/fapolicyd/sample-rules/README-rules file. 12.5. Enabling fapolicyd integrity checks By default, fapolicyd does not perform integrity checking. You can configure fapolicyd to perform integrity checks by comparing either file sizes or SHA-256 hashes. You can also set integrity checks by using the Integrity Measurement Architecture (IMA) subsystem. Prerequisites The fapolicyd framework is deployed on your system. 
Procedure Open the /etc/fapolicyd/fapolicyd.conf file in a text editor of your choice, for example: Change the value of the integrity option from none to sha256 , save the file, and exit the editor: Restart the fapolicyd service: Verification Back up the file used for the verification: Change the content of the /bin/more binary: Use the changed binary as a regular user: Revert the changes: 12.6. Troubleshooting problems related to fapolicyd The fapolicyd application framework provides tools for troubleshooting the most common problems and you can also add applications installed with the rpm command to the trust database. Installing applications by using rpm If you install an application by using the rpm command, you have to perform a manual refresh of the fapolicyd RPM database: Install your application : Refresh the database: If you skip this step, the system can freeze and must be restarted. Service status If fapolicyd does not work correctly, check the service status: fapolicyd-cli checks and listings The --check-config , --check-watch_fs , and --check-trustdb options help you find syntax errors, not-yet-watched file systems, and file mismatches, for example: Use the --list option to check the current list of rules and their order: Debug mode Debug mode provides detailed information about matched rules, database status, and more. To switch fapolicyd to debug mode: Stop the fapolicyd service: Use debug mode to identify a corresponding rule: Because the output of the fapolicyd --debug command is verbose, you can redirect the error output to a file: Alternatively, to limit the output only to entries when fapolicyd denies access, use the --debug-deny option: You can also use permissive mode, which does not prevent running your application but only records the matched fapolicyd rule: Removing the fapolicyd database To solve problems related to the fapolicyd database, try to remove the database file: Warning Do not remove the /var/lib/fapolicyd/ directory. The fapolicyd framework automatically restores only the database file in this directory. Dumping the fapolicyd database The fapolicyd contains entries from all enabled trust sources. You can check the entries after dumping the database: Application pipe In rare cases, removing the fapolicyd pipe file can solve a lockup: Additional resources fapolicyd-cli(1) man page on your system 12.7. Preventing users from executing untrustworthy code by using the fapolicyd RHEL system role You can automate the installation and configuration of the fapolicyd service by using the fapolicyd RHEL system role. With this role, you can remotely configure the service to allow users to execute only trusted applications, for example, the ones which are listed in the RPM database and in an allow list. Additionally, the service can perform integrity checks before it executes an allowed application. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. 
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configuring fapolicyd hosts: managed-node-01.example.com tasks: - name: Allow only executables installed from RPM database and specific files ansible.builtin.include_role: name: rhel-system-roles.fapolicyd vars: fapolicyd_setup_permissive: false fapolicyd_setup_integrity: sha256 fapolicyd_setup_trust: rpmdb,file fapolicyd_add_trusted_file: - <path_to_allowed_command> - <path_to_allowed_service> The settings specified in the example playbook include the following: fapolicyd_setup_permissive: <true|false> Enables or disables sending policy decisions to the kernel for enforcement. Set this variable for debugging and testing purposes to false . fapolicyd_setup_integrity: <type_type> Defines the integrity checking method. You can set one of the following values: none (default): Disables integrity checking. size : The service compares only the file sizes of allowed applications. ima : The service checks the SHA-256 hash that the kernel's Integrity Measurement Architecture (IMA) stored in a file's extended attribute. Additionally, the service performs a size check. Note that the role does not configure the IMA kernel subsystem. To use this option, you must manually configure the IMA subsystem. sha256 : The service compares the SHA-256 hash of allowed applications. fapolicyd_setup_trust: <trust_backends> Defines the list of trust backends. If you include the file backend, specify the allowed executable files in the fapolicyd_add_trusted_file list. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.fapolicyd.README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Execute a binary application that is not on the allow list as a user: Additional resources /usr/share/ansible/roles/rhel-system-roles.fapolicyd/README.md file /usr/share/doc/rhel-system-roles/fapolicyd/ directory 12.8. Additional resources fapolicyd -related man pages listed by using the man -k fapolicyd command on your system FOSDEM 2020 fapolicyd presentation | [
"dnf install fapolicyd",
"vi /etc/fapolicyd/fapolicyd.conf",
"permissive = 1",
"systemctl enable --now fapolicyd",
"auditctl -w /etc/fapolicyd/ -p wa -k fapolicyd_changes service try-restart auditd",
"ausearch -ts recent -m fanotify",
"systemctl restart fapolicyd",
"systemctl status fapolicyd ● fapolicyd.service - File Access Policy Daemon Loaded: loaded (/usr/lib/systemd/system/fapolicyd.service; enabled; preset: disabled) Active: active (running) since Tue 2024-10-08 05:53:50 EDT; 11s ago ... Oct 08 05:53:51 machine1.example.com fapolicyd[4974]: Loading trust data from rpmdb backend Oct 08 05:53:51 machine1.example.com fapolicyd[4974]: Loading trust data from file backend Oct 08 05:53:51 machine1.example.com fapolicyd[4974]: Starting to listen for events",
"cp /bin/ls /tmp /tmp/ls bash: /tmp/ls: Operation not permitted",
"cp /bin/ls /tmp /tmp/ls bash: /tmp/ls: Operation not permitted",
"fapolicyd-cli --file add /tmp/ls --trust-file myapp",
"fapolicyd-cli --update",
"/tmp/ls ls",
"cp /bin/ls /tmp /tmp/ls bash: /tmp/ls: Operation not permitted",
"systemctl stop fapolicyd",
"fapolicyd --debug-deny 2> fapolicy.output & [1] 51341",
"/tmp/ls bash: /tmp/ls: Operation not permitted",
"fg fapolicyd --debug 2> fapolicy.output ^C",
"kill 51341",
"cat fapolicy.output | grep 'deny_audit' rule=13 dec=deny_audit perm=execute auid=0 pid=6855 exe=/usr/bin/bash : path=/tmp/ls ftype=application/x-executable trust=0",
"ls /etc/fapolicyd/rules.d/ 10-languages.rules 40-bad-elf.rules 72-shell.rules 20-dracut.rules 41-shared-obj.rules 90-deny-execute.rules 21-updaters.rules 42-trusted-elf.rules 95-allow-open.rules 30-patterns.rules 70-trusted-lang.rules cat /etc/fapolicyd/rules.d/90-deny-execute.rules Deny execution for anything untrusted deny_audit perm=execute all : all",
"touch /etc/fapolicyd/rules.d/80-myapps.rules vi /etc/fapolicyd/rules.d/80-myapps.rules",
"allow perm=execute exe=/usr/bin/bash trust=1 : path=/tmp/ls ftype=application/x-executable trust=0",
"allow perm=execute exe=/usr/bin/bash trust=1 : dir=/tmp/ trust=0",
"sha256sum /tmp/ls 780b75c90b2d41ea41679fcb358c892b1251b68d1927c80fbc0d9d148b25e836 ls",
"allow perm=execute exe=/usr/bin/bash trust=1 : sha256hash= 780b75c90b2d41ea41679fcb358c892b1251b68d1927c80fbc0d9d148b25e836",
"fagenrules --check /usr/sbin/fagenrules: Rules have changed and should be updated fagenrules --load",
"fapolicyd-cli --list 13. allow perm=execute exe=/usr/bin/bash trust=1 : path=/tmp/ls ftype=application/x-executable trust=0 14. deny_audit perm=execute all : all",
"systemctl start fapolicyd",
"/tmp/ls ls",
"vi /etc/fapolicyd/fapolicyd.conf",
"integrity = sha256",
"systemctl restart fapolicyd",
"cp /bin/more /bin/more.bak",
"cat /bin/less > /bin/more",
"su example.user /bin/more /etc/redhat-release bash: /bin/more: Operation not permitted",
"mv -f /bin/more.bak /bin/more",
"rpm -i application .rpm",
"fapolicyd-cli --update",
"systemctl status fapolicyd",
"fapolicyd-cli --check-config Daemon config is OK fapolicyd-cli --check-trustdb /etc/selinux/targeted/contexts/files/file_contexts miscompares: size sha256 /etc/selinux/targeted/policy/policy.31 miscompares: size sha256",
"fapolicyd-cli --list 9. allow perm=execute all : trust=1 10. allow perm=open all : ftype=%languages trust=1 11. deny_audit perm=any all : ftype=%languages 12. allow perm=any all : ftype=text/x-shellscript 13. deny_audit perm=execute all : all",
"systemctl stop fapolicyd",
"fapolicyd --debug",
"fapolicyd --debug 2> fapolicy.output",
"fapolicyd --debug-deny",
"fapolicyd --debug-deny --permissive",
"systemctl stop fapolicyd fapolicyd-cli --delete-db",
"fapolicyd-cli --dump-db",
"rm -f /var/run/fapolicyd/fapolicyd.fifo",
"--- - name: Configuring fapolicyd hosts: managed-node-01.example.com tasks: - name: Allow only executables installed from RPM database and specific files ansible.builtin.include_role: name: rhel-system-roles.fapolicyd vars: fapolicyd_setup_permissive: false fapolicyd_setup_integrity: sha256 fapolicyd_setup_trust: rpmdb,file fapolicyd_add_trusted_file: - <path_to_allowed_command> - <path_to_allowed_service>",
"ansible-playbook ~/playbook.yml --syntax-check",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'su -c \"/bin/not_authorized_application \" <user_name> ' bash: line 1: /bin/not_authorized_application: Operation not permitted non-zero return code"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/security_hardening/assembly_blocking-and-allowing-applications-using-fapolicyd_security-hardening |
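The trust-database workflow from the sections above can be wrapped in a short helper script. The sketch below is only an illustration: the binary path, trust-file name, and test user are assumed placeholders, and the fapolicyd-cli calls are the same ones documented above.
set -e
BIN=/opt/myapp/bin/myapp                              # assumption: path of your custom executable
install -D -m 0755 ./myapp "$BIN"                     # place the custom binary
fapolicyd-cli --file add "$BIN" --trust-file myapp    # record it in /etc/fapolicyd/trust.d/myapp
fapolicyd-cli --update                                # tell the daemon to reload the trust database
su - example.user -c "$BIN --version"                 # verify an unprivileged user can now run it
Remember that changing the binary later changes its checksum, so rerun fapolicyd-cli --file update and fapolicyd-cli --update afterwards, as noted above.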
Chapter 3. Installing the Cluster Observability Operator | Chapter 3. Installing the Cluster Observability Operator As a cluster administrator, you can install or remove the Cluster Observability Operator (COO) from OperatorHub by using the OpenShift Container Platform web console. OperatorHub is a user interface that works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. 3.1. Installing the Cluster Observability Operator in the web console Install the Cluster Observability Operator (COO) from OperatorHub by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have logged in to the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Type cluster observability operator in the Filter by keyword box. Click Cluster Observability Operator in the list of results. Read the information about the Operator, and configure the following installation settings: Update channel stable Version 1.0.0 or later Installation mode All namespaces on the cluster (default) Installed Namespace Operator recommended Namespace: openshift-cluster-observability-operator Select Enable Operator recommended cluster monitoring on this Namespace Update approval Automatic Optional: You can change the installation settings to suit your requirements. For example, you can select to subscribe to a different update channel, to install an older released version of the Operator, or to require manual approval for updates to new versions of the Operator. Click Install . Verification Go to Operators Installed Operators , and verify that the Cluster Observability Operator entry appears in the list. Additional resources Adding Operators to a cluster 3.2. Uninstalling the Cluster Observability Operator using the web console If you have installed the Cluster Observability Operator (COO) by using OperatorHub, you can uninstall it in the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have logged in to the OpenShift Container Platform web console. Procedure Go to Operators Installed Operators . Locate the Cluster Observability Operator entry in the list. Click for this entry and select Uninstall Operator . Verification Go to Operators Installed Operators , and verify that the Cluster Observability Operator entry no longer appears in the list. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/cluster_observability_operator/installing-cluster-observability-operators |
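The console steps above can also be expressed as OLM objects applied with the CLI. The following is a hedged equivalent, not a documented procedure: the package name cluster-observability-operator and the redhat-operators catalog source are assumptions that should be confirmed with oc get packagemanifests -n openshift-marketplace before use, while the channel, namespace, and automatic approval mirror the settings listed above.
oc create namespace openshift-cluster-observability-operator --dry-run=client -o yaml | oc apply -f -
cat <<'EOF' | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-cluster-observability-operator
  namespace: openshift-cluster-observability-operator
# no targetNamespaces: the Operator watches all namespaces, matching the default install mode
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-observability-operator
  namespace: openshift-cluster-observability-operator
spec:
  channel: stable
  name: cluster-observability-operator        # assumed package name; verify in the catalog
  source: redhat-operators                    # assumed catalog source
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
EOF
Afterwards, oc get csv -n openshift-cluster-observability-operator should show the Operator reaching the Succeeded phase, corresponding to the web-console verification step above.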
Chapter 10. Namespace [v1] | Chapter 10. Namespace [v1] Description Namespace provides a scope for Names. Use of multiple namespaces is optional. Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NamespaceSpec describes the attributes on a Namespace. status object NamespaceStatus is information about the current status of a Namespace. 10.1.1. .spec Description NamespaceSpec describes the attributes on a Namespace. Type object Property Type Description finalizers array (string) Finalizers is an opaque list of values that must be empty to permanently remove object from storage. More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/ 10.1.2. .status Description NamespaceStatus is information about the current status of a Namespace. Type object Property Type Description conditions array Represents the latest available observations of a namespace's current state. conditions[] object NamespaceCondition contains details about state of namespace. phase string Phase is the current lifecycle phase of the namespace. More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/ Possible enum values: - "Active" means the namespace is available for use in the system - "Terminating" means the namespace is undergoing graceful termination 10.1.3. .status.conditions Description Represents the latest available observations of a namespace's current state. Type array 10.1.4. .status.conditions[] Description NamespaceCondition contains details about state of namespace. Type object Required type status Property Type Description lastTransitionTime Time message string reason string status string Status of the condition, one of True, False, Unknown. type string Type of namespace controller condition. 10.2. API endpoints The following API endpoints are available: /api/v1/namespaces GET : list or watch objects of kind Namespace POST : create a Namespace /api/v1/watch/namespaces GET : watch individual changes to a list of Namespace. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{name} DELETE : delete a Namespace GET : read the specified Namespace PATCH : partially update the specified Namespace PUT : replace the specified Namespace /api/v1/watch/namespaces/{name} GET : watch changes to an object of kind Namespace. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/namespaces/{name}/status GET : read status of the specified Namespace PATCH : partially update status of the specified Namespace PUT : replace status of the specified Namespace /api/v1/namespaces/{name}/finalize PUT : replace finalize of the specified Namespace 10.2.1. 
/api/v1/namespaces HTTP method GET Description list or watch objects of kind Namespace Table 10.1. HTTP responses HTTP code Reponse body 200 - OK NamespaceList schema 401 - Unauthorized Empty HTTP method POST Description create a Namespace Table 10.2. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.3. Body parameters Parameter Type Description body Namespace schema Table 10.4. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 201 - Created Namespace schema 202 - Accepted Namespace schema 401 - Unauthorized Empty 10.2.2. /api/v1/watch/namespaces HTTP method GET Description watch individual changes to a list of Namespace. deprecated: use the 'watch' parameter with a list operation instead. Table 10.5. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.3. /api/v1/namespaces/{name} Table 10.6. Global path parameters Parameter Type Description name string name of the Namespace HTTP method DELETE Description delete a Namespace Table 10.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 10.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Namespace Table 10.9. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Namespace Table 10.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.11. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 201 - Created Namespace schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Namespace Table 10.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.13. Body parameters Parameter Type Description body Namespace schema Table 10.14. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 201 - Created Namespace schema 401 - Unauthorized Empty 10.2.4. /api/v1/watch/namespaces/{name} Table 10.15. Global path parameters Parameter Type Description name string name of the Namespace HTTP method GET Description watch changes to an object of kind Namespace. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 10.16. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.5. /api/v1/namespaces/{name}/status Table 10.17. Global path parameters Parameter Type Description name string name of the Namespace HTTP method GET Description read status of the specified Namespace Table 10.18. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Namespace Table 10.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.20. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 201 - Created Namespace schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Namespace Table 10.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.22. Body parameters Parameter Type Description body Namespace schema Table 10.23. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 201 - Created Namespace schema 401 - Unauthorized Empty 10.2.6. /api/v1/namespaces/{name}/finalize Table 10.24. Global path parameters Parameter Type Description name string name of the Namespace Table 10.25. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method PUT Description replace finalize of the specified Namespace Table 10.26. Body parameters Parameter Type Description body Namespace schema Table 10.27. HTTP responses HTTP code Reponse body 200 - OK Namespace schema 201 - Created Namespace schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/metadata_apis/namespace-v1 |
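As a rough illustration of the endpoints listed above, the following oc commands exercise the create, read, patch, and delete paths for a throwaway namespace; the namespace name and label are examples only.
oc create namespace demo-ns                                                            # POST /api/v1/namespaces
oc get --raw /api/v1/namespaces/demo-ns                                                # GET /api/v1/namespaces/{name}
oc patch namespace demo-ns --type=merge -p '{"metadata":{"labels":{"team":"qa"}}}'     # PATCH the object
oc get namespace demo-ns -o jsonpath='{.status.phase}{"\n"}'                           # status.phase is Active
oc delete namespace demo-ns                                                            # DELETE; phase moves to Terminating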
Introduction | Introduction This document provides a high-level overview of the High Availability Add-On for Red Hat Enterprise Linux 6. Although the information in this document is an overview, you should have advanced working knowledge of Red Hat Enterprise Linux and understand the concepts of server computing to gain a good comprehension of the information. For more information about using Red Hat Enterprise Linux see the following resources: Red Hat Enterprise Linux Installation Guide - Provides information regarding installation of Red Hat Enterprise Linux 6. Red Hat Enterprise Linux Deployment Guide - Provides information regarding the deployment, configuration and administration of Red Hat Enterprise Linux 6. For more information about this and related products for Red Hat Enterprise Linux 6, see the following resources: Configuring and Managing the High Availability Add-On Provides information about configuring and managing the High Availability Add-On (also known as Red Hat Cluster) for Red Hat Enterprise Linux 6. Logical Volume Manager Administration - Provides a description of the Logical Volume Manager (LVM), including information on running LVM in a clustered environment. Global File System 2: Configuration and Administration - Provides information about installing, configuring, and maintaining Red Hat GFS2 (Red Hat Global File System 2), which is included in the Resilient Storage Add-On. DM Multipath - Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux 6. Load Balancer Administration - Provides information on configuring high-performance systems and services with the Red Hat Load Balancer Add-On (Formerly known as Linux Virtual Server [LVS]). Release Notes - Provides information about the current release of Red Hat products. Note For information on best practices for deploying and upgrading Red Hat Enterprise Linux clusters using the High Availability Add-On and Red Hat Global File System 2 (GFS2) see the article "Red Hat Enterprise Linux Cluster, High Availability, and GFS Deployment Best Practices" on Red Hat Customer Portal at . https://access.redhat.com/kb/docs/DOC-40821 . This document and other Red Hat documents are available in HTML, PDF, and RPM versions online at https://access.redhat.com/documentation/en/red-hat-enterprise-linux/ . 1. We Need Feedback! If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla: http://bugzilla.redhat.com/ against the product Red Hat Enterprise Linux 6 , the component doc-High_Availability_Add-On_Overview and version number: 6.9 . If you have a suggestion for improving the documentation, try to be as specific as possible when describing it. If you have found an error, include the section number and some of the surrounding text so we can find it easily. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/high_availability_add-on_overview/ch-intro-cso |
Chapter 28. HTTP | Chapter 28. HTTP Only producer is supported The HTTP component provides HTTP based endpoints for calling external HTTP resources (as a client to call external servers using HTTP). Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-http</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency> 28.1. URI format Will by default use port 80 for HTTP and 443 for HTTPS. 28.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 28.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 28.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 28.3. Component Options The HTTP component supports 37 options, which are listed below. Name Description Default Type cookieStore (producer) To use a custom org.apache.http.client.CookieStore. By default the org.apache.http.impl.client.BasicCookieStore is used which is an in-memory only cookie store. Notice if bridgeEndpoint=true then the cookie store is forced to be a noop cookie store as cookie shouldn't be stored as we are just bridging (eg acting as a proxy). CookieStore copyHeaders (producer) If this option is true then IN exchange headers will be copied to OUT exchange headers according to copy strategy. Setting this to false, allows to only include the headers from the HTTP response (not propagating IN headers). true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean responsePayloadStreamingThreshold (producer) This threshold in bytes controls whether the response payload should be stored in memory as a byte array or be streaming based. Set this to -1 to always use streaming mode. 8192 int skipRequestHeaders (producer (advanced)) Whether to skip mapping all the Camel headers as HTTP request headers. If there are no data from Camel headers needed to be included in the HTTP request then this can avoid parsing overhead with many object allocations for the JVM garbage collector. false boolean skipResponseHeaders (producer (advanced)) Whether to skip mapping all the HTTP response headers to Camel headers. If there are no data needed from HTTP headers then this can avoid parsing overhead with many object allocations for the JVM garbage collector. false boolean allowJavaSerializedObject (advanced) Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean authCachingDisabled (advanced) Disables authentication scheme caching. false boolean automaticRetriesDisabled (advanced) Disables automatic request recovery and re-execution. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean clientConnectionManager (advanced) To use a custom and shared HttpClientConnectionManager to manage connections. If this has been configured then this is always used for all endpoints created by this component. HttpClientConnectionManager connectionsPerRoute (advanced) The maximum number of connections per route. 20 int connectionStateDisabled (advanced) Disables connection state tracking. false boolean connectionTimeToLive (advanced) The time for connection to live, the time unit is millisecond, the default value is always keep alive. long contentCompressionDisabled (advanced) Disables automatic content decompression. false boolean cookieManagementDisabled (advanced) Disables state (cookie) management. false boolean defaultUserAgentDisabled (advanced) Disables the default user agent set by this builder if none has been provided by the user. false boolean httpBinding (advanced) To use a custom HttpBinding to control the mapping between Camel message and HttpClient. HttpBinding httpClientConfigurer (advanced) To use the custom HttpClientConfigurer to perform configuration of the HttpClient that will be used. HttpClientConfigurer httpConfiguration (advanced) To use the shared HttpConfiguration as base configuration. HttpConfiguration httpContext (advanced) To use a custom org.apache.http.protocol.HttpContext when executing requests. HttpContext maxTotalConnections (advanced) The maximum number of connections. 200 int redirectHandlingDisabled (advanced) Disables automatic redirect handling. 
false boolean headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy proxyAuthDomain (proxy) Proxy authentication domain to use. String proxyAuthHost (proxy) Proxy authentication host. String proxyAuthMethod (proxy) Proxy authentication method to use. Enum values: Basic Digest NTLM String proxyAuthNtHost (proxy) Proxy authentication domain (workstation name) to use with NTML. String proxyAuthPassword (proxy) Proxy authentication password. String proxyAuthPort (proxy) Proxy authentication port. Integer proxyAuthUsername (proxy) Proxy authentication username. String sslContextParameters (security) To configure security using SSLContextParameters. Important: Only one instance of org.apache.camel.support.jsse.SSLContextParameters is supported per HttpComponent. If you need to use 2 or more different instances, you need to define a new HttpComponent per instance you need. SSLContextParameters useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean x509HostnameVerifier (security) To use a custom X509HostnameVerifier such as DefaultHostnameVerifier or NoopHostnameVerifier. HostnameVerifier connectionRequestTimeout (timeout) The timeout in milliseconds used when requesting a connection from the connection manager. A timeout value of zero is interpreted as an infinite timeout. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). -1 int connectTimeout (timeout) Determines the timeout in milliseconds until a connection is established. A timeout value of zero is interpreted as an infinite timeout. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). -1 int socketTimeout (timeout) Defines the socket timeout in milliseconds, which is the timeout for waiting for data or, put differently, a maximum period inactivity between two consecutive data packets). A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). -1 int 28.4. Endpoint Options The HTTP endpoint is configured using URI syntax: with the following path and query parameters: 28.4.1. Path Parameters (1 parameters) Name Description Default Type httpUri (common) Required The url of the HTTP endpoint to call. URI 28.4.2. Query Parameters (51 parameters) Name Description Default Type chunked (producer) If this option is false the Servlet will disable the HTTP streaming and set the content-length header on the response. true boolean disableStreamCache (common) Determines whether or not the raw input stream from Servlet is cached or not (Camel will read the stream into a in memory/overflow to file, Stream caching) cache. By default Camel will cache the Servlet input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. 
The http producer will by default cache the response body stream. If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is as the message body. false boolean headerFilterStrategy (common) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy httpBinding (common (advanced)) To use a custom HttpBinding to control the mapping between Camel message and HttpClient. HttpBinding bridgeEndpoint (producer) If the option is true, HttpProducer will ignore the Exchange.HTTP_URI header, and use the endpoint's URI for request. You may also set the option throwExceptionOnFailure to be false to let the HttpProducer send all the fault response back. false boolean clearExpiredCookies (producer) Whether to clear expired cookies before sending the HTTP request. This ensures the cookies store does not keep growing by adding new cookies which is newer removed when they are expired. If the component has disabled cookie management then this option is disabled too. true boolean connectionClose (producer) Specifies whether a Connection Close header must be added to HTTP Request. By default connectionClose is false. false boolean copyHeaders (producer) If this option is true then IN exchange headers will be copied to OUT exchange headers according to copy strategy. Setting this to false, allows to only include the headers from the HTTP response (not propagating IN headers). true boolean customHostHeader (producer) To use custom host header for producer. When not set in query will be ignored. When set will override host header derived from url. String httpMethod (producer) Configure the HTTP method to use. The HttpMethod header cannot override this option if set. Enum values: GET POST PUT DELETE HEAD OPTIONS TRACE PATCH HttpMethods ignoreResponseBody (producer) If this option is true, The http producer won't read response body and cache the input stream. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean preserveHostHeader (producer) If the option is true, HttpProducer will set the Host header to the value contained in the current exchange Host header, useful in reverse proxy applications where you want the Host header received by the downstream server to reflect the URL called by the upstream client, this allows applications which use the Host header to generate accurate URL's for a proxied service. false boolean throwExceptionOnFailure (producer) Option to disable throwing the HttpOperationFailedException in case of failed responses from the remote server. This allows you to get all responses regardless of the HTTP status code. true boolean transferException (producer) If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was send back serialized in the response as a application/x-java-serialized-object content type. 
On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean cookieHandler (producer (advanced)) Configure a cookie handler to maintain a HTTP session. CookieHandler cookieStore (producer (advanced)) To use a custom CookieStore. By default the BasicCookieStore is used which is an in-memory only cookie store. Notice if bridgeEndpoint=true then the cookie store is forced to be a noop cookie store as cookie shouldn't be stored as we are just bridging (eg acting as a proxy). If a cookieHandler is set then the cookie store is also forced to be a noop cookie store as cookie handling is then performed by the cookieHandler. CookieStore deleteWithBody (producer (advanced)) Whether the HTTP DELETE should include the message body or not. By default HTTP DELETE do not include any HTTP body. However in some rare cases users may need to be able to include the message body. false boolean getWithBody (producer (advanced)) Whether the HTTP GET should include the message body or not. By default HTTP GET do not include any HTTP body. However in some rare cases users may need to be able to include the message body. false boolean okStatusCodeRange (producer (advanced)) The status codes which are considered a success response. The values are inclusive. Multiple ranges can be defined, separated by comma, e.g. 200-204,209,301-304. Each range must be a single number or from-to with the dash included. 200-299 String skipRequestHeaders (producer (advanced)) Whether to skip mapping all the Camel headers as HTTP request headers. If there are no data from Camel headers needed to be included in the HTTP request then this can avoid parsing overhead with many object allocations for the JVM garbage collector. false boolean skipResponseHeaders (producer (advanced)) Whether to skip mapping all the HTTP response headers to Camel headers. If there are no data needed from HTTP headers then this can avoid parsing overhead with many object allocations for the JVM garbage collector. false boolean userAgent (producer (advanced)) To set a custom HTTP User-Agent request header. String clientBuilder (advanced) Provide access to the http client request parameters used on new RequestConfig instances used by producers or consumers of this endpoint. HttpClientBuilder clientConnectionManager (advanced) To use a custom HttpClientConnectionManager to manage connections. HttpClientConnectionManager connectionsPerRoute (advanced) The maximum number of connections per route. 20 int httpClient (advanced) Sets a custom HttpClient to be used by the producer. HttpClient httpClientConfigurer (advanced) Register a custom configuration strategy for new HttpClient instances created by producers or consumers such as to configure authentication mechanisms etc. HttpClientConfigurer httpClientOptions (advanced) To configure the HttpClient using the key/values from the Map. Map httpContext (advanced) To use a custom HttpContext instance. HttpContext maxTotalConnections (advanced) The maximum number of connections. 200 int useSystemProperties (advanced) To use System Properties as fallback for configuration. false boolean proxyAuthDomain (proxy) Proxy authentication domain to use with NTML. String proxyAuthHost (proxy) Proxy authentication host. 
String proxyAuthMethod (proxy) Proxy authentication method to use. Enum values: Basic Digest NTLM String proxyAuthNtHost (proxy) Proxy authentication domain (workstation name) to use with NTML. String proxyAuthPassword (proxy) Proxy authentication password. String proxyAuthPort (proxy) Proxy authentication port. int proxyAuthScheme (proxy) Proxy authentication scheme to use. Enum values: http https String proxyAuthUsername (proxy) Proxy authentication username. String proxyHost (proxy) Proxy hostname to use. String proxyPort (proxy) Proxy port to use. int authDomain (security) Authentication domain to use with NTML. String authenticationPreemptive (security) If this option is true, camel-http sends preemptive basic authentication to the server. false boolean authHost (security) Authentication host to use with NTML. String authMethod (security) Authentication methods allowed to use as a comma separated list of values Basic, Digest or NTLM. String authMethodPriority (security) Which authentication method to prioritize to use, either as Basic, Digest or NTLM. Enum values: Basic Digest NTLM String authPassword (security) Authentication password. String authUsername (security) Authentication username. String sslContextParameters (security) To configure security using SSLContextParameters. Important: Only one instance of org.apache.camel.util.jsse.SSLContextParameters is supported per HttpComponent. If you need to use 2 or more different instances, you need to define a new HttpComponent per instance you need. SSLContextParameters x509HostnameVerifier (security) To use a custom X509HostnameVerifier such as DefaultHostnameVerifier or NoopHostnameVerifier. HostnameVerifier 28.5. Message Headers Name Type Description Exchange.HTTP_URI String URI to call. Will override existing URI set directly on the endpoint. This uri is the uri of the http server to call. Its not the same as the Camel endpoint uri, where you can configure endpoint options such as security etc. This header does not support that, its only the uri of the http server. Exchange.HTTP_PATH String Request URI's path, the header will be used to build the request URI with the HTTP_URI. Exchange.HTTP_QUERY String URI parameters. Will override existing URI parameters set directly on the endpoint. Exchange.HTTP_RESPONSE_CODE int The HTTP response code from the external server. Is 200 for OK. Exchange.HTTP_RESPONSE_TEXT String The HTTP response text from the external server. Exchange.HTTP_CHARACTER_ENCODING String Character encoding. Exchange.CONTENT_TYPE String The HTTP content type. Is set on both the IN and OUT message to provide a content type, such as text/html . Exchange.CONTENT_ENCODING String The HTTP content encoding. Is set on both the IN and OUT message to provide a content encoding, such as gzip . 28.6. Message Body Camel will store the HTTP response from the external server on the OUT body. All headers from the IN message will be copied to the OUT message, so headers are preserved during routing. Additionally Camel will add the HTTP response headers as well to the OUT message headers. 28.7. 
Using System Properties When setting useSystemProperties to true, the HTTP Client will look for the following System Properties and use them: ssl.TrustManagerFactory.algorithm javax.net.ssl.trustStoreType javax.net.ssl.trustStore javax.net.ssl.trustStoreProvider javax.net.ssl.trustStorePassword java.home ssl.KeyManagerFactory.algorithm javax.net.ssl.keyStoreType javax.net.ssl.keyStore javax.net.ssl.keyStoreProvider javax.net.ssl.keyStorePassword http.proxyHost http.proxyPort http.nonProxyHosts http.keepAlive http.maxConnections 28.8. Response code Camel will handle the response according to the HTTP response code: Response code is in the range 100..299, Camel regards it as a success response. Response code is in the range 300..399, Camel regards it as a redirection response and will throw an HttpOperationFailedException with the information. Response code is 400+, Camel regards it as an external server failure and will throw an HttpOperationFailedException with the information. throwExceptionOnFailure The option, throwExceptionOnFailure , can be set to false to prevent the HttpOperationFailedException from being thrown for failed response codes. This allows you to get any response from the remote server. A short sketch demonstrating this is included at the end of this chapter. 28.9. Exceptions The HttpOperationFailedException exception contains the following information: The HTTP status code The HTTP status line (text of the status code) The redirect location, if the server returned a redirect The response body as a java.lang.String , if the server provided a body in the response 28.10. Which HTTP method will be used The following algorithm is used to determine which HTTP method should be used: 1. Use the method provided as endpoint configuration ( httpMethod ). 2. Use the method provided in the header ( Exchange.HTTP_METHOD ). 3. GET if a query string is provided in the header. 4. GET if the endpoint is configured with a query string. 5. POST if there is data to send (body is not null ). 6. GET otherwise. 28.11. How to get access to HttpServletRequest and HttpServletResponse You can get access to these two using the Camel type converter system: HttpServletRequest request = exchange.getIn().getBody(HttpServletRequest.class); HttpServletResponse response = exchange.getIn().getBody(HttpServletResponse.class); Note You can get the request and response not just from the processor after the camel-jetty or camel-cxf endpoint. 28.12. Configuring URI to call You can set the HTTP producer's URI directly from the endpoint URI. In the route below, Camel will call out to the external server, oldhost , using HTTP. from("direct:start") .to("http://oldhost"); And the equivalent Spring sample: <camelContext xmlns="http://activemq.apache.org/camel/schema/spring"> <route> <from uri="direct:start"/> <to uri="http://oldhost"/> </route> </camelContext> You can override the HTTP endpoint URI by adding a header with the key, Exchange.HTTP_URI , on the message. from("direct:start") .setHeader(Exchange.HTTP_URI, constant("http://newhost")) .to("http://oldhost"); In the sample above, Camel will call http://newhost/ even though the endpoint is configured with http://oldhost/ . If the http endpoint is working in bridge mode, it will ignore the message header Exchange.HTTP_URI . 28.13. Configuring URI Parameters The http producer supports URI parameters to be sent to the HTTP server. The URI parameters can either be set directly on the endpoint URI or as a header with the key Exchange.HTTP_QUERY on the message.
from("direct:start") .to("http://oldhost?order=123&detail=short"); Or options provided in a header: from("direct:start") .setHeader(Exchange.HTTP_QUERY, constant("order=123&detail=short")) .to("http://oldhost"); 28.14. How to set the http method (GET/PATCH/POST/PUT/DELETE/HEAD/OPTIONS/TRACE) to the HTTP producer The HTTP component provides a way to set the HTTP request method by setting the message header. Here is an example: from("direct:start") .setHeader(Exchange.HTTP_METHOD, constant(org.apache.camel.component.http.HttpMethods.POST)) .to("http://www.google.com") .to("mock:results"); The method can be written a bit shorter using the string constants: .setHeader("CamelHttpMethod", constant("POST")) And the equivalent Spring sample: <camelContext xmlns="http://activemq.apache.org/camel/schema/spring"> <route> <from uri="direct:start"/> <setHeader name="CamelHttpMethod"> <constant>POST</constant> </setHeader> <to uri="http://www.google.com"/> <to uri="mock:results"/> </route> </camelContext> 28.15. Using client timeout - SO_TIMEOUT See the HttpSOTimeoutTest unit test. 28.16. Configuring a Proxy The HTTP component provides a way to configure a proxy. from("direct:start") .to("http://oldhost?proxyAuthHost=www.myproxy.com&proxyAuthPort=80"); There is also support for proxy authentication via the proxyAuthUsername and proxyAuthPassword options. 28.16.1. Using proxy settings outside of URI To avoid System properties conflicts, you can set proxy configuration only from the CamelContext or URI. Java DSL : context.getGlobalOptions().put("http.proxyHost", "172.168.18.9"); context.getGlobalOptions().put("http.proxyPort", "8080"); Spring XML <camelContext> <properties> <property key="http.proxyHost" value="172.168.18.9"/> <property key="http.proxyPort" value="8080"/> </properties> </camelContext> Camel will first set the settings from Java System or CamelContext Properties and then the endpoint proxy options if provided. So you can override the system properties with the endpoint options. There is also a http.proxyScheme property you can set to explicit configure the scheme to use. 28.17. Configuring charset If you are using POST to send data you can configure the charset using the Exchange property: exchange.setProperty(Exchange.CHARSET_NAME, "ISO-8859-1"); 28.17.1. Sample with scheduled poll This sample polls the Google homepage every 10 seconds and write the page to the file message.html : from("timer://foo?fixedRate=true&delay=0&period=10000") .to("http://www.google.com") .setHeader(FileComponent.HEADER_FILE_NAME, "message.html") .to("file:target/google"); 28.17.2. URI Parameters from the endpoint URI In this sample we have the complete URI endpoint that is just what you would have typed in a web browser. Multiple URI parameters can of course be set using the & character as separator, just as you would in the web browser. Camel does no tricks here. // we query for Camel at the Google page template.sendBody("http://www.google.com/search?q=Camel", null); 28.17.3. URI Parameters from the Message Map headers = new HashMap(); headers.put(Exchange.HTTP_QUERY, "q=Camel&lr=lang_en"); // we query for Camel and English language at Google template.sendBody("http://www.google.com/search", null, headers); In the header value above notice that it should not be prefixed with ? and you can separate parameters as usual with the & char. 28.17.4. Getting the Response Code You can get the HTTP response code from the HTTP component by getting the value from the Out message header with Exchange.HTTP_RESPONSE_CODE . 
Exchange exchange = template.send("http://www.google.com/search", new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(Exchange.HTTP_QUERY, constant("hl=en&q=activemq")); } }); Message out = exchange.getOut(); int responseCode = out.getHeader(Exchange.HTTP_RESPONSE_CODE, Integer.class); 28.18. Disabling Cookies To disable cookies you can set the HTTP Client to ignore cookies by adding the following URI option: 28.19. Basic auth with the streaming message body In order to avoid the NonRepeatableRequestException , you need to do the Preemptive Basic Authentication by adding the option: authenticationPreemptive=true 28.20. Advanced Usage If you need more control over the HTTP producer you should use the HttpComponent where you can set various classes to give you custom behavior. 28.20.1. Setting up SSL for HTTP Client Using the JSSE Configuration Utility The HTTP component supports SSL/TLS configuration through the Camel JSSE Configuration Utility . This utility greatly decreases the amount of component specific code you need to write and is configurable at the endpoint and component levels. The following examples demonstrate how to use the utility with the HTTP component. Programmatic configuration of the component KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource("/users/home/server/keystore.jks"); ksp.setPassword("keystorePassword"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword("keyPassword"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); HttpComponent httpComponent = getContext().getComponent("https", HttpComponent.class); httpComponent.setSslContextParameters(scp); Spring DSL based configuration of endpoint <camel:sslContextParameters id="sslContextParameters"> <camel:keyManagers keyPassword="keyPassword"> <camel:keyStore resource="/users/home/server/keystore.jks" password="keystorePassword"/> </camel:keyManagers> </camel:sslContextParameters> <to uri="https://127.0.0.1/mail/?sslContextParameters=#sslContextParameters"/> Configuring Apache HTTP Client Directly Basically camel-http component is built on the top of Apache HttpClient . Please refer to SSL/TLS customization for details or have a look into the org.apache.camel.component.http.HttpsServerTestSupport unit test base class. You can also implement a custom org.apache.camel.component.http.HttpClientConfigurer to do some configuration on the http client if you need full control of it. However if you just want to specify the keystore and truststore you can do this with Apache HTTP HttpClientConfigurer , for example: KeyStore keystore = ...; KeyStore truststore = ...; SchemeRegistry registry = new SchemeRegistry(); registry.register(new Scheme("https", 443, new SSLSocketFactory(keystore, "mypassword", truststore))); And then you need to create a class that implements HttpClientConfigurer , and registers https protocol providing a keystore or truststore per example above. Then, from your camel route builder class you can hook it up like so: HttpComponent httpComponent = getContext().getComponent("http", HttpComponent.class); httpComponent.setHttpClientConfigurer(new MyHttpClientConfigurer()); If you are doing this using the Spring DSL, you can specify your HttpClientConfigurer using the URI. 
For example: <bean id="myHttpClientConfigurer" class="my.https.HttpClientConfigurer"> </bean> <to uri="https://myhostname.com:443/myURL?httpClientConfigurer=myHttpClientConfigurer"/> As long as you implement the HttpClientConfigurer and configure your keystore and truststore as described above, it will work fine. Using HTTPS to authenticate gotchas An end user reported that he had problem with authenticating with HTTPS. The problem was eventually resolved by providing a custom configured org.apache.http.protocol.HttpContext : 1. Create a (Spring) factory for HttpContexts: public class HttpContextFactory { private String httpHost = "localhost"; private String httpPort = 9001; private BasicHttpContext httpContext = new BasicHttpContext(); private BasicAuthCache authCache = new BasicAuthCache(); private BasicScheme basicAuth = new BasicScheme(); public HttpContext getObject() { authCache.put(new HttpHost(httpHost, httpPort), basicAuth); httpContext.setAttribute(ClientContext.AUTH_CACHE, authCache); return httpContext; } // getter and setter } 2. Declare an HttpContext in the Spring application context file: <bean id="myHttpContext" factory-bean="httpContextFactory" factory-method="getObject"/> 3. Reference the context in the http URL: <to uri="https://myhostname.com:443/myURL?httpContext=myHttpContext"/> Using different SSLContextParameters The HTTP component only support one instance of org.apache.camel.support.jsse.SSLContextParameters per component. If you need to use 2 or more different instances, then you need to setup multiple HTTP components as shown below. Where we have 2 components, each using their own instance of sslContextParameters property. <bean id="http-foo" class="org.apache.camel.component.http.HttpComponent"> <property name="sslContextParameters" ref="sslContextParams1"/> <property name="x509HostnameVerifier" ref="hostnameVerifier"/> </bean> <bean id="http-bar" class="org.apache.camel.component.http.HttpComponent"> <property name="sslContextParameters" ref="sslContextParams2"/> <property name="x509HostnameVerifier" ref="hostnameVerifier"/> </bean> 28.21. Spring Boot Auto-Configuration When using http with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-http-starter</artifactId> </dependency> The component supports 38 options, which are listed below. Name Description Default Type camel.component.http.allow-java-serialized-object Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false Boolean camel.component.http.auth-caching-disabled Disables authentication scheme caching. false Boolean camel.component.http.automatic-retries-disabled Disables automatic request recovery and re-execution. false Boolean camel.component.http.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.http.client-connection-manager To use a custom and shared HttpClientConnectionManager to manage connections. If this has been configured then this is always used for all endpoints created by this component. The option is a org.apache.http.conn.HttpClientConnectionManager type. HttpClientConnectionManager camel.component.http.connect-timeout Determines the timeout in milliseconds until a connection is established. A timeout value of zero is interpreted as an infinite timeout. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). -1 Integer camel.component.http.connection-request-timeout The timeout in milliseconds used when requesting a connection from the connection manager. A timeout value of zero is interpreted as an infinite timeout. A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). -1 Integer camel.component.http.connection-state-disabled Disables connection state tracking. false Boolean camel.component.http.connection-time-to-live The time for connection to live, the time unit is millisecond, the default value is always keep alive. Long camel.component.http.connections-per-route The maximum number of connections per route. 20 Integer camel.component.http.content-compression-disabled Disables automatic content decompression. false Boolean camel.component.http.cookie-management-disabled Disables state (cookie) management. false Boolean camel.component.http.cookie-store To use a custom org.apache.http.client.CookieStore. By default the org.apache.http.impl.client.BasicCookieStore is used which is an in-memory only cookie store. Notice if bridgeEndpoint=true then the cookie store is forced to be a noop cookie store as cookie shouldn't be stored as we are just bridging (eg acting as a proxy). The option is a org.apache.http.client.CookieStore type. CookieStore camel.component.http.copy-headers If this option is true then IN exchange headers will be copied to OUT exchange headers according to copy strategy. Setting this to false, allows to only include the headers from the HTTP response (not propagating IN headers). true Boolean camel.component.http.default-user-agent-disabled Disables the default user agent set by this builder if none has been provided by the user. false Boolean camel.component.http.enabled Whether to enable auto configuration of the http component. This is enabled by default. Boolean camel.component.http.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.http.http-binding To use a custom HttpBinding to control the mapping between Camel message and HttpClient. The option is a org.apache.camel.http.common.HttpBinding type. HttpBinding camel.component.http.http-client-configurer To use the custom HttpClientConfigurer to perform configuration of the HttpClient that will be used. The option is a org.apache.camel.component.http.HttpClientConfigurer type. HttpClientConfigurer camel.component.http.http-configuration To use the shared HttpConfiguration as base configuration. The option is a org.apache.camel.http.common.HttpConfiguration type. HttpConfiguration camel.component.http.http-context To use a custom org.apache.http.protocol.HttpContext when executing requests. The option is a org.apache.http.protocol.HttpContext type. 
HttpContext camel.component.http.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.http.max-total-connections The maximum number of connections. 200 Integer camel.component.http.proxy-auth-domain Proxy authentication domain to use. String camel.component.http.proxy-auth-host Proxy authentication host. String camel.component.http.proxy-auth-method Proxy authentication method to use. String camel.component.http.proxy-auth-nt-host Proxy authentication domain (workstation name) to use with NTML. String camel.component.http.proxy-auth-password Proxy authentication password. String camel.component.http.proxy-auth-port Proxy authentication port. Integer camel.component.http.proxy-auth-username Proxy authentication username. String camel.component.http.redirect-handling-disabled Disables automatic redirect handling. false Boolean camel.component.http.response-payload-streaming-threshold This threshold in bytes controls whether the response payload should be stored in memory as a byte array or be streaming based. Set this to -1 to always use streaming mode. 8192 Integer camel.component.http.skip-request-headers Whether to skip mapping all the Camel headers as HTTP request headers. If there are no data from Camel headers needed to be included in the HTTP request then this can avoid parsing overhead with many object allocations for the JVM garbage collector. false Boolean camel.component.http.skip-response-headers Whether to skip mapping all the HTTP response headers to Camel headers. If there are no data needed from HTTP headers then this can avoid parsing overhead with many object allocations for the JVM garbage collector. false Boolean camel.component.http.socket-timeout Defines the socket timeout in milliseconds, which is the timeout for waiting for data or, put differently, a maximum period inactivity between two consecutive data packets). A timeout value of zero is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default). -1 Integer camel.component.http.ssl-context-parameters To configure security using SSLContextParameters. Important: Only one instance of org.apache.camel.support.jsse.SSLContextParameters is supported per HttpComponent. If you need to use 2 or more different instances, you need to define a new HttpComponent per instance you need. The option is a org.apache.camel.support.jsse.SSLContextParameters type. SSLContextParameters camel.component.http.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.http.x509-hostname-verifier To use a custom X509HostnameVerifier such as DefaultHostnameVerifier or NoopHostnameVerifier. The option is a javax.net.ssl.HostnameVerifier type. HostnameVerifier | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-http</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>",
"http:hostname[:port][/resourceUri][?options]",
"http://httpUri",
"HttpServletRequest request = exchange.getIn().getBody(HttpServletRequest.class); HttpServletResponse response = exchange.getIn().getBody(HttpServletResponse.class);",
"from(\"direct:start\") .to(\"http://oldhost\");",
"<camelContext xmlns=\"http://activemq.apache.org/camel/schema/spring\"> <route> <from uri=\"direct:start\"/> <to uri=\"http://oldhost\"/> </route> </camelContext>",
"from(\"direct:start\") .setHeader(Exchange.HTTP_URI, constant(\"http://newhost\")) .to(\"http://oldhost\");",
"from(\"direct:start\") .to(\"http://oldhost?order=123&detail=short\");",
"from(\"direct:start\") .setHeader(Exchange.HTTP_QUERY, constant(\"order=123&detail=short\")) .to(\"http://oldhost\");",
"from(\"direct:start\") .setHeader(Exchange.HTTP_METHOD, constant(org.apache.camel.component.http.HttpMethods.POST)) .to(\"http://www.google.com\") .to(\"mock:results\");",
".setHeader(\"CamelHttpMethod\", constant(\"POST\"))",
"<camelContext xmlns=\"http://activemq.apache.org/camel/schema/spring\"> <route> <from uri=\"direct:start\"/> <setHeader name=\"CamelHttpMethod\"> <constant>POST</constant> </setHeader> <to uri=\"http://www.google.com\"/> <to uri=\"mock:results\"/> </route> </camelContext>",
"from(\"direct:start\") .to(\"http://oldhost?proxyAuthHost=www.myproxy.com&proxyAuthPort=80\");",
"context.getGlobalOptions().put(\"http.proxyHost\", \"172.168.18.9\"); context.getGlobalOptions().put(\"http.proxyPort\", \"8080\");",
"<camelContext> <properties> <property key=\"http.proxyHost\" value=\"172.168.18.9\"/> <property key=\"http.proxyPort\" value=\"8080\"/> </properties> </camelContext>",
"exchange.setProperty(Exchange.CHARSET_NAME, \"ISO-8859-1\");",
"from(\"timer://foo?fixedRate=true&delay=0&period=10000\") .to(\"http://www.google.com\") .setHeader(FileComponent.HEADER_FILE_NAME, \"message.html\") .to(\"file:target/google\");",
"// we query for Camel at the Google page template.sendBody(\"http://www.google.com/search?q=Camel\", null);",
"Map headers = new HashMap(); headers.put(Exchange.HTTP_QUERY, \"q=Camel&lr=lang_en\"); // we query for Camel and English language at Google template.sendBody(\"http://www.google.com/search\", null, headers);",
"Exchange exchange = template.send(\"http://www.google.com/search\", new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(Exchange.HTTP_QUERY, constant(\"hl=en&q=activemq\")); } }); Message out = exchange.getOut(); int responseCode = out.getHeader(Exchange.HTTP_RESPONSE_CODE, Integer.class);",
"httpClient.cookieSpec=ignore",
"KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource(\"/users/home/server/keystore.jks\"); ksp.setPassword(\"keystorePassword\"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword(\"keyPassword\"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); HttpComponent httpComponent = getContext().getComponent(\"https\", HttpComponent.class); httpComponent.setSslContextParameters(scp);",
"<camel:sslContextParameters id=\"sslContextParameters\"> <camel:keyManagers keyPassword=\"keyPassword\"> <camel:keyStore resource=\"/users/home/server/keystore.jks\" password=\"keystorePassword\"/> </camel:keyManagers> </camel:sslContextParameters> <to uri=\"https://127.0.0.1/mail/?sslContextParameters=#sslContextParameters\"/>",
"KeyStore keystore = ...; KeyStore truststore = ...; SchemeRegistry registry = new SchemeRegistry(); registry.register(new Scheme(\"https\", 443, new SSLSocketFactory(keystore, \"mypassword\", truststore)));",
"HttpComponent httpComponent = getContext().getComponent(\"http\", HttpComponent.class); httpComponent.setHttpClientConfigurer(new MyHttpClientConfigurer());",
"<bean id=\"myHttpClientConfigurer\" class=\"my.https.HttpClientConfigurer\"> </bean> <to uri=\"https://myhostname.com:443/myURL?httpClientConfigurer=myHttpClientConfigurer\"/>",
"public class HttpContextFactory { private String httpHost = \"localhost\"; private String httpPort = 9001; private BasicHttpContext httpContext = new BasicHttpContext(); private BasicAuthCache authCache = new BasicAuthCache(); private BasicScheme basicAuth = new BasicScheme(); public HttpContext getObject() { authCache.put(new HttpHost(httpHost, httpPort), basicAuth); httpContext.setAttribute(ClientContext.AUTH_CACHE, authCache); return httpContext; } // getter and setter }",
"<bean id=\"myHttpContext\" factory-bean=\"httpContextFactory\" factory-method=\"getObject\"/>",
"<to uri=\"https://myhostname.com:443/myURL?httpContext=myHttpContext\"/>",
"<bean id=\"http-foo\" class=\"org.apache.camel.component.http.HttpComponent\"> <property name=\"sslContextParameters\" ref=\"sslContextParams1\"/> <property name=\"x509HostnameVerifier\" ref=\"hostnameVerifier\"/> </bean> <bean id=\"http-bar\" class=\"org.apache.camel.component.http.HttpComponent\"> <property name=\"sslContextParameters\" ref=\"sslContextParams2\"/> <property name=\"x509HostnameVerifier\" ref=\"hostnameVerifier\"/> </bean>",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-http-starter</artifactId> </dependency>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-http-component-starter |
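A short sketch of the throwExceptionOnFailure option referenced in Section 28.8, written as a self-contained RouteBuilder. The host name oldhost and the mock: endpoints are placeholders for illustration only; they are not part of the component documentation.

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class NoThrowOnFailureRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:start")
            // throwExceptionOnFailure=false keeps non-2xx responses instead of
            // throwing HttpOperationFailedException
            .to("http://oldhost?throwExceptionOnFailure=false")
            // route on the status code returned by the remote server
            .choice()
                .when(header(Exchange.HTTP_RESPONSE_CODE).isGreaterThanOrEqualTo(400))
                    .to("mock:failure")
                .otherwise()
                    .to("mock:success");
    }
}

With the exception disabled, the route inspects Exchange.HTTP_RESPONSE_CODE itself, which is useful when a 4xx or 5xx response still carries a body you want to process.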
Chapter 105. OtherArtifact schema reference | Chapter 105. OtherArtifact schema reference Used in: Plugin Property Property type Description type string Must be other . url string URL of the artifact which will be downloaded. Streams for Apache Kafka does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required for jar , zip , tgz and other artifacts. Not applicable to the maven artifact type. sha512sum string SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. Not applicable to the maven artifact type. fileName string Name under which the artifact will be stored. insecure boolean By default, connections using TLS are verified to check they are secure. The server certificate used must be valid, trusted, and contain the server name. By setting this option to true , all TLS verification is disabled and the artifact will be downloaded, even when the server is considered insecure. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-OtherArtifact-reference |
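The schema above lists the fields without showing them in context; a minimal sketch of declaring an other artifact inside a KafkaConnect build plugin follows. The plugin name, URL, checksum, and file name are placeholder assumptions, not values taken from this reference.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ...
  build:
    # ...
    plugins:
      - name: my-connector-plugin
        artifacts:
          - type: other
            url: https://my-domain.example/artifacts/extra-config.properties
            sha512sum: 589d0a...   # checksum of the downloaded file (recommended)
            fileName: extra-config.properties

Because sha512sum is optional but unverified downloads are used as-is, supplying the checksum is the safer choice for automated builds.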
Chapter 2. Running Java applications with Shenandoah garbage collector | Chapter 2. Running Java applications with Shenandoah garbage collector You can run your Java application with the Shenandoah garbage collector (GC). Prerequisites Installed Red Hat build of OpenJDK. See Installing Red Hat build of OpenJDK 8 on Red Hat Enterprise Linux in the Installing and using Red Hat build of OpenJDK 8 on RHEL guide. Procedure Run your Java application with Shenandoah GC by using the -XX:+UseShenandoahGC JVM option. Note that JVM options must be placed before the application class or JAR. USD java -XX:+UseShenandoahGC <PATH_TO_YOUR_APPLICATION> | [
"java <PATH_TO_YOUR_APPLICATION> -XX:+UseShenandoahGC"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/using_shenandoah_garbage_collector_with_red_hat_build_of_openjdk/running-application-with-shenandoah-gc |
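To confirm that the collector was actually selected, GC logging can be enabled alongside the option. This is only a sketch; <PATH_TO_YOUR_APPLICATION> is the same placeholder used above, and -verbose:gc simply prints collection activity on Red Hat build of OpenJDK 8.

USD java -XX:+UseShenandoahGC -verbose:gc <PATH_TO_YOUR_APPLICATION>

The log output should mention Shenandoah pause and concurrent phases once the application allocates enough memory to trigger a GC cycle.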
Chapter 14. Scheduling resources | Chapter 14. Scheduling resources 14.1. Using node selectors to move logging resources A node selector specifies a map of key/value pairs that are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the same key/value node selector as the label on the node. 14.1.1. About node selectors You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You can use a node selector to place specific pods on specific nodes, cluster-wide node selectors to place new pods on specific nodes anywhere in the cluster, and project node selectors to place new pods in a project on specific nodes. For example, as a cluster administrator, you can create an infrastructure where application developers can deploy pods only onto the nodes closest to their geographical location by including a node selector in every pod they create. In this example, the cluster consists of five data centers spread across two regions. In the U.S., label the nodes as us-east , us-central , or us-west . In the Asia-Pacific region (APAC), label the nodes as apac-east or apac-west . The developers can add a node selector to the pods they create to ensure the pods get scheduled on those nodes. A pod is not scheduled if the Pod object contains a node selector, but no node has a matching label. Important If you are using node selectors and node affinity in the same pod configuration, the following rules control pod placement onto nodes: If you configure both nodeSelector and nodeAffinity , both conditions must be satisfied for the pod to be scheduled onto a candidate node. If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied. If you specify multiple matchExpressions associated with nodeSelectorTerms , then the pod can be scheduled onto a node only if all matchExpressions are satisfied. Node selectors on specific pods and nodes You can control which node a specific pod is scheduled on by using node selectors and labels. To use node selectors and labels, first label the node to avoid pods being descheduled, then add the node selector to the pod. Note You cannot add a node selector directly to an existing scheduled pod. You must label the object that controls the pod, such as deployment config. For example, the following Node object has the region: east label: Sample Node object with a label kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #... 1 Labels to match the pod node selector. A pod has the type: user-node,region: east node selector: Sample Pod object with node selectors apiVersion: v1 kind: Pod metadata: name: s1 #... spec: nodeSelector: 1 region: east type: user-node #... 1 Node selectors to match the node label. 
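If the region and type labels are not already present on the node, they can be added with the oc label command; a minimal sketch using the node name from the sample above (adjust the node name and labels for your cluster):

USD oc label node ip-10-0-131-14.ec2.internal type=user-node region=east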
The node must have a label for each node selector. When you create the pod using the example pod spec, it can be scheduled on the example node. Default cluster-wide node selectors With default cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. For example, the following Scheduler object has the default cluster-wide region=east and type=user-node node selectors: Example Scheduler Operator Custom Resource apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster #... spec: defaultNodeSelector: type=user-node,region=east #... A node in that cluster has the type=user-node,region=east labels: Example Node object apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 #... labels: region: east type: user-node #... Example Pod object with a node selector apiVersion: v1 kind: Pod metadata: name: s1 #... spec: nodeSelector: region: east #... When you create the pod using the example pod spec in the example cluster, the pod is created with the cluster-wide node selector and is scheduled on the labeled node: Example pod list with the pod on the labeled node NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none> Note If the project where you create the pod has a project node selector, that selector takes preference over a cluster-wide node selector. Your pod is not created or scheduled if the pod does not have the project node selector. Project node selectors With project node selectors, when you create a pod in this project, OpenShift Container Platform adds the node selectors to the pod and schedules the pods on a node with matching labels. If there is a cluster-wide default node selector, a project node selector takes preference. For example, the following project has the region=east node selector: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: "region=east" #... The following node has the type=user-node,region=east labels: Example Node object apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 #... labels: region: east type: user-node #... When you create the pod using the example pod spec in this example project, the pod is created with the project node selectors and is scheduled on the labeled node: Example Pod object apiVersion: v1 kind: Pod metadata: namespace: east-region #... spec: nodeSelector: region: east type: user-node #... Example pod list with the pod on the labeled node NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none> A pod in the project is not created or scheduled if the pod contains different node selectors. For example, if you deploy the following pod into the example project, it is not be created: Example Pod object with an invalid node selector apiVersion: v1 kind: Pod metadata: name: west-region #... spec: nodeSelector: region: west #... 14.1.2. Loki pod placement You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. 
A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. Example LokiStack with node selectors apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: "" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: "" gateway: nodeSelector: node-role.kubernetes.io/infra: "" indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" ingester: nodeSelector: node-role.kubernetes.io/infra: "" querier: nodeSelector: node-role.kubernetes.io/infra: "" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" ruler: nodeSelector: node-role.kubernetes.io/infra: "" # ... 1 Specifies the component pod type that applies to the node selector. 2 Specifies the pods that are moved to nodes containing the defined label. In the example configuration, all Loki pods are moved to nodes containing the node-role.kubernetes.io/infra: "" label. Example LokiStack CR with node selectors and tolerations apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: compactor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved # ... 
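The tolerations shown above only have an effect when a matching taint is present on the target nodes; a minimal sketch of applying such a taint with oc adm taint (the node name is a placeholder):

USD oc adm taint nodes <infra_node_name> node-role.kubernetes.io/infra=reserved:NoSchedule node-role.kubernetes.io/infra=reserved:NoExecute

Only pods that tolerate the node-role.kubernetes.io/infra=reserved taint, such as the Loki pods configured above, are then scheduled on, or allowed to keep running on, that node.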
To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: USD oc explain lokistack.spec.template Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec. ... For more detailed information, you can add a specific field: USD oc explain lokistack.spec.template.compactor Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it. ... 14.1.3. Configuring resources and scheduling for logging collectors Administrators can modify the resources or scheduling of the collector by creating a ClusterLogging custom resource (CR) that is in the same namespace and has the same name as the ClusterLogForwarder CR that it supports. The applicable stanzas for the ClusterLogging CR when using multiple log forwarders in a deployment are managementState and collection . All other stanzas are ignored. Prerequisites You have administrator permissions. You have installed the Red Hat OpenShift Logging Operator version 5.8 or newer. You have created a ClusterLogForwarder CR. Procedure Create a ClusterLogging CR that supports your existing ClusterLogForwarder CR: Example ClusterLogging CR YAML apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: <name> 1 namespace: <namespace> 2 spec: managementState: "Managed" collection: type: "vector" tolerations: - key: "logging" operator: "Exists" effect: "NoExecute" tolerationSeconds: 6000 resources: limits: memory: 1Gi requests: cpu: 100m memory: 1Gi nodeSelector: collector: needed # ... 1 The name must be the same name as the ClusterLogForwarder CR. 2 The namespace must be the same namespace as the ClusterLogForwarder CR. Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 14.1.4. Viewing logging collector pods You can view the logging collector pods and the corresponding nodes that they are running on. Procedure Run the following command in a project to view the logging collector pods and their details: USD oc get pods --selector component=collector -o wide -n <project_name> Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES collector-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> collector-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> collector-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> collector-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> collector-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none> 14.1.5. Additional resources Placing pods on specific nodes using node selectors 14.2. Using taints and tolerations to control logging pod placement Taints and tolerations allow the node to control which pods should (or should not) be scheduled on them. 14.2.1. Understanding taints and tolerations A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration . 
You apply taints to a node through the Node specification ( NodeSpec ) and apply tolerations to a pod through the Pod specification ( PodSpec ). When you apply a taint a node, the scheduler cannot place a pod on that node unless the pod can tolerate the taint. Example taint in a node specification apiVersion: v1 kind: Node metadata: name: my-node #... spec: taints: - effect: NoExecute key: key1 value: value1 #... Example toleration in a Pod spec apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 #... Taints and tolerations consist of a key, value, and effect. Table 14.1. Taint and toleration components Parameter Description key The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. value The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. effect The effect is one of the following: NoSchedule [1] New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain. PreferNoSchedule New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain. NoExecute New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed. operator Equal The key / value / effect parameters must match. This is the default. Exists The key / effect parameters must match. You must leave a blank value parameter, which matches any. If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... A toleration matches a taint: If the operator parameter is set to Equal : the key parameters are the same; the value parameters are the same; the effect parameters are the same. If the operator parameter is set to Exists : the key parameters are the same; the effect parameters are the same. The following taints are built into OpenShift Container Platform: node.kubernetes.io/not-ready : The node is not ready. This corresponds to the node condition Ready=False . node.kubernetes.io/unreachable : The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown . node.kubernetes.io/memory-pressure : The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True . node.kubernetes.io/disk-pressure : The node has disk pressure issues. This corresponds to the node condition DiskPressure=True . node.kubernetes.io/network-unavailable : The node network is unavailable. node.kubernetes.io/unschedulable : The node is unschedulable. node.cloudprovider.kubernetes.io/uninitialized : When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. 
node.kubernetes.io/pid-pressure : The node has pid pressure. This corresponds to the node condition PIDPressure=True . Important OpenShift Container Platform does not set a default pid.available evictionHard . 14.2.2. Loki pod placement You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. Example LokiStack with node selectors apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: "" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: "" gateway: nodeSelector: node-role.kubernetes.io/infra: "" indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" ingester: nodeSelector: node-role.kubernetes.io/infra: "" querier: nodeSelector: node-role.kubernetes.io/infra: "" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" ruler: nodeSelector: node-role.kubernetes.io/infra: "" # ... 1 Specifies the component pod type that applies to the node selector. 2 Specifies the pods that are moved to nodes containing the defined label. In the example configuration, all Loki pods are moved to nodes containing the node-role.kubernetes.io/infra: "" label. Example LokiStack CR with node selectors and tolerations apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... 
template: compactor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved # ... To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: USD oc explain lokistack.spec.template Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec. ... For more detailed information, you can add a specific field: USD oc explain lokistack.spec.template.compactor Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it. ... 14.2.3. Using tolerations to control log collector pod placement By default, log collector pods have the following tolerations configuration: apiVersion: v1 kind: Pod metadata: name: collector-example namespace: openshift-logging spec: # ... 
collection: type: vector tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists # ... Prerequisites You have installed the Red Hat OpenShift Logging Operator and OpenShift CLI ( oc ). Procedure Add a taint to a node where you want logging collector pods to schedule logging collector pods by running the following command: USD oc adm taint nodes <node_name> <key>=<value>:<effect> Example command USD oc adm taint nodes node1 collector=node:NoExecute This example places a taint on node1 that has key collector , value node , and taint effect NoExecute . You must use the NoExecute taint effect. NoExecute schedules only pods that match the taint and removes existing pods that do not match. Edit the collection stanza of the ClusterLogging custom resource (CR) to configure a toleration for the logging collector pods: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... collection: type: vector tolerations: - key: collector 1 operator: Exists 2 effect: NoExecute 3 tolerationSeconds: 6000 4 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi # ... 1 Specify the key that you added to the node. 2 Specify the Exists operator to require the key / value / effect parameters to match. 3 Specify the NoExecute effect. 4 Optionally, specify the tolerationSeconds parameter to set how long a pod can remain bound to a node before being evicted. This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto node1 . 14.2.4. Configuring resources and scheduling for logging collectors Administrators can modify the resources or scheduling of the collector by creating a ClusterLogging custom resource (CR) that is in the same namespace and has the same name as the ClusterLogForwarder CR that it supports. The applicable stanzas for the ClusterLogging CR when using multiple log forwarders in a deployment are managementState and collection . All other stanzas are ignored. Prerequisites You have administrator permissions. You have installed the Red Hat OpenShift Logging Operator version 5.8 or newer. You have created a ClusterLogForwarder CR. Procedure Create a ClusterLogging CR that supports your existing ClusterLogForwarder CR: Example ClusterLogging CR YAML apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: <name> 1 namespace: <namespace> 2 spec: managementState: "Managed" collection: type: "vector" tolerations: - key: "logging" operator: "Exists" effect: "NoExecute" tolerationSeconds: 6000 resources: limits: memory: 1Gi requests: cpu: 100m memory: 1Gi nodeSelector: collector: needed # ... 1 The name must be the same name as the ClusterLogForwarder CR. 2 The namespace must be the same namespace as the ClusterLogForwarder CR. Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 14.2.5. Viewing logging collector pods You can view the logging collector pods and the corresponding nodes that they are running on. 
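A related note on the taint added in section 14.2.3: if you later want the example node node1 to accept general workloads again, the taint can be removed by appending a hyphen to the same key/value/effect specification. This is a hedged example that simply reuses the values from that section:
USD oc adm taint nodes node1 collector=node:NoExecute-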
Procedure Run the following command in a project to view the logging collector pods and their details: USD oc get pods --selector component=collector -o wide -n <project_name> Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES collector-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> collector-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> collector-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> collector-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> collector-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none> 14.2.6. Additional resources Controlling pod placement using node taints | [
"kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: 1 region: east type: user-node #",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: defaultNodeSelector: type=user-node,region=east #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: region: east #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\" #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: namespace: east-region # spec: nodeSelector: region: east type: user-node #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: west-region # spec: nodeSelector: region: west #",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: <name> 1 namespace: <namespace> 2 spec: managementState: \"Managed\" collection: type: \"vector\" tolerations: - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 1Gi requests: cpu: 100m memory: 1Gi nodeSelector: collector: needed",
"oc apply -f <filename>.yaml",
"oc get pods --selector component=collector -o wide -n <project_name>",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES collector-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> collector-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> collector-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> collector-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> collector-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>",
"apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: v1 kind: Pod metadata: name: collector-example namespace: openshift-logging spec: collection: type: vector tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoSchedule key: node.kubernetes.io/disk-pressure operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/pid-pressure operator: Exists - effect: NoSchedule key: node.kubernetes.io/unschedulable operator: Exists",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 collector=node:NoExecute",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: collection: type: vector tolerations: - key: collector 1 operator: Exists 2 effect: NoExecute 3 tolerationSeconds: 6000 4 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: <name> 1 namespace: <namespace> 2 spec: managementState: \"Managed\" collection: type: \"vector\" tolerations: - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 1Gi requests: cpu: 100m memory: 1Gi nodeSelector: collector: needed",
"oc apply -f <filename>.yaml",
"oc get pods --selector component=collector -o wide -n <project_name>",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES collector-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> collector-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> collector-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> collector-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> collector-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/logging/scheduling-resources |
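Note: the LokiStack toleration examples in the chapter above use the key node-role.kubernetes.io/infra with the value reserved. The matching taints are applied to the infra nodes themselves, for example (a hedged sketch that reuses the same key, value, and effects as the CR; <node_name> is a placeholder):
USD oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoSchedule
USD oc adm taint nodes <node_name> node-role.kubernetes.io/infra=reserved:NoExecute
To confirm where the Loki component pods were scheduled afterwards, a simple hedged check (assuming the pod names start with the LokiStack name, logging-loki in the examples above):
USD oc get pods -n openshift-logging -o wide | grep logging-loki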
Chapter 8. Installing a cluster on GCP into a shared VPC | Chapter 8. Installing a cluster on GCP into a shared VPC In OpenShift Container Platform version 4.16, you can install a cluster into a shared Virtual Private Cloud (VPC) on Google Cloud Platform (GCP). In this installation method, the cluster is configured to use a VPC from a different GCP project. A shared VPC enables an organization to connect resources from multiple projects to a common VPC network. You can communicate within the organization securely and efficiently by using internal IP addresses from that network. For more information about shared VPC, see Shared VPC overview in the GCP documentation . The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. You have a GCP host project which contains a shared VPC network. You configured a GCP project to host the cluster. This project, known as the service project, must be attached to the host project. For more information, see Attaching service projects in the GCP documentation . You have a GCP service account that has the required GCP permissions in both the host and service projects. 8.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. 
The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. 
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 8.5. Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) into a shared VPC, you must generate the install-config.yaml file and modify it so that the cluster uses the correct VPC networks, DNS zones, and project names. 8.5.1. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for GCP 8.5.2. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. 
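The procedures in the rest of this section edit the install-config.yaml file in place, and the installation program consumes the file during cluster creation, so keeping a backup copy, as recommended in section 8.5.1, is a simple safeguard. A minimal hedged example (the .bak name is arbitrary):
USD cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.bak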
Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 8.5.3. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 8.5.4. Sample customized install-config.yaml file for shared VPC installation There are several configuration parameters which are required to install OpenShift Container Platform on GCP using a shared VPC. The following is a sample install-config.yaml file which demonstrates these fields. Important This sample YAML file is provided for reference only. You must modify this file with the correct values for your environment and cluster. apiVersion: v1 baseDomain: example.com credentialsMode: Passthrough 1 metadata: name: cluster_name platform: gcp: computeSubnet: shared-vpc-subnet-1 2 controlPlaneSubnet: shared-vpc-subnet-2 3 network: shared-vpc 4 networkProjectID: host-project-name 5 projectID: service-project-name 6 region: us-east1 defaultMachinePlatform: tags: 7 - global-tag1 controlPlane: name: master platform: gcp: tags: 8 - control-plane-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 compute: - name: worker platform: gcp: tags: 9 - compute-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 10 1 credentialsMode must be set to Passthrough or Manual . See the "Prerequisites" section for the required GCP permissions that your service account must have. 2 The name of the subnet in the shared VPC for compute machines to use. 3 The name of the subnet in the shared VPC for control plane machines to use. 4 The name of the shared VPC. 
5 The name of the host project where the shared VPC exists. 6 The name of the GCP project where you want to install the cluster. 7 8 9 Optional. One or more network tags to apply to compute machines, control plane machines, or all machines. 10 You can optionally provide the sshKey value that you use to access the machines in your cluster. 8.5.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. 
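After the cluster is installed, the proxy settings from install-config.yaml are reflected in a cluster-scoped Proxy object named cluster, as described below. A hedged way to confirm what was applied:
USD oc get proxy/cluster -o yaml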
Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.6. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 8.7. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . 
To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 8.7.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 8.1. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... 
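The Secret manifest shown next stores the GCP service account key under data.service_account.json as a base64-encoded string. One hedged way to produce that value on Linux (GNU coreutils syntax; service-account-key.json is a placeholder file name):
USD base64 -w0 service-account-key.json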
Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 8.7.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 8.7.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have added one of the following authentication options to the GCP account that the installation program uses: The IAM Workload Identity Pool Admin role. The following granular permissions: Example 8.2. Required GCP permissions compute.projects.get iam.googleapis.com/workloadIdentityPoolProviders.create iam.googleapis.com/workloadIdentityPoolProviders.get iam.googleapis.com/workloadIdentityPools.create iam.googleapis.com/workloadIdentityPools.delete iam.googleapis.com/workloadIdentityPools.get iam.googleapis.com/workloadIdentityPools.undelete iam.roles.create iam.roles.delete iam.roles.list iam.roles.undelete iam.roles.update iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.getIamPolicy iam.serviceAccounts.list iam.serviceAccounts.setIamPolicy iam.workloadIdentityPoolProviders.get iam.workloadIdentityPools.delete resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.getIamPolicy storage.buckets.setIamPolicy storage.objects.create storage.objects.delete storage.objects.list Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. 
Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 8.7.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. 
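Note that deleting the cluster later does not remove the GCP resources that ccoctl created. They can be cleaned up with the ccoctl gcp delete subcommand, for example (a hedged sketch reusing the name and project passed to create-all; confirm the exact flag set with ccoctl gcp delete --help):
USD ccoctl gcp delete --name=<name> --project=<gcp_project_id>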
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 8.7.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 8.3. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 8.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. 
An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
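As an alternative to exporting the kubeconfig file as shown in the procedure below, you can authenticate with the kubeadmin password that the installation program printed. A hedged example (substitute the API URL for your own cluster):
USD oc login -u kubeadmin -p <password> https://api.<cluster_name>.<base_domain>:6443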
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.10. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 8.11. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com credentialsMode: Passthrough 1 metadata: name: cluster_name platform: gcp: computeSubnet: shared-vpc-subnet-1 2 controlPlaneSubnet: shared-vpc-subnet-2 3 network: shared-vpc 4 networkProjectID: host-project-name 5 projectID: service-project-name 6 region: us-east1 defaultMachinePlatform: tags: 7 - global-tag1 controlPlane: name: master platform: gcp: tags: 8 - control-plane-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 compute: - name: worker platform: gcp: tags: 9 - compute-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA... 10",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_gcp/installing-gcp-shared-vpc |
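As a quick cross-check of the steps above, the service accounts created by ccoctl and the resulting cluster access can be verified from a shell. This is a minimal sketch, not part of the official procedure: the <name> prefix, <gcp_project_id>, and <installation_directory> are placeholders that must match the values used earlier.
# list the IAM service accounts that ccoctl created (their names include the <name> prefix)
gcloud iam service-accounts list --project <gcp_project_id> | grep <name>
# after 'create cluster' finishes, confirm access with the generated kubeconfig
export KUBECONFIG=<installation_directory>/auth/kubeconfig
oc whoami
oc get clusteroperators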
5.224. oprofile | 5.224. oprofile 5.224.1. RHBA-2012:0966 - oprofile bug fix and enhancement update Updated oprofile packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. OProfile is a system-wide profiler for Linux systems. The profiling runs transparently in the background and profile data can be collected at any time. OProfile uses the hardware performance counters provided on many processors, and can use the Real Time Clock (RTC) for profiling on processors without counters. The oprofile packages have been upgraded to upstream version 0.9.7, which provides a number of bug fixes and enhancements over the version. (BZ# 739142 ) Bug Fix BZ# 748789 Under certain circumstances, the "opannotate" and "opreport" commands reported no results. With this update, this problem has been fixed so that these commands work as expected. All OProfile users are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/oprofile |
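For context on the commands mentioned in the fix above, a typical profiling session with the opcontrol-based workflow that oprofile 0.9.7 provides looks roughly like the following sketch. It assumes root privileges, and the kernel image and profiled binary paths are placeholders.
# profile without kernel symbols (use --vmlinux=<path_to_vmlinux> to include kernel samples)
opcontrol --setup --no-vmlinux
opcontrol --start
# ... run the workload to be profiled ...
opcontrol --dump
# summarize samples per symbol, then annotate the source of a specific binary
opreport --symbols
opannotate --source <path_to_profiled_binary>
opcontrol --shutdown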
Chapter 5. BuildRequest [build.openshift.io/v1] | Chapter 5. BuildRequest [build.openshift.io/v1] Description BuildRequest is the resource used to pass parameters to build generator Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources binary object BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. dockerStrategyOptions object DockerStrategyOptions contains extra strategy options for container image builds env array (EnvVar) env contains additional environment variables you want to pass into a builder container. from ObjectReference from is the reference to the ImageStreamTag that triggered the build. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds lastVersion integer lastVersion (optional) is the LastVersion of the BuildConfig that was used to generate the build. If the BuildConfig in the generator doesn't match, a build will not be generated. metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata revision object SourceRevision is the revision or commit information from the source for the build sourceStrategyOptions object SourceStrategyOptions contains extra strategy options for Source builds triggeredBy array triggeredBy describes which triggers started the most recent update to the build configuration and contains information about those triggers. triggeredBy[] object BuildTriggerCause holds information about a triggered build. It is used for displaying build trigger data for each build and build configuration in oc describe. It is also used to describe which triggers led to the most recent update in the build configuration. triggeredByImage ObjectReference triggeredByImage is the Image that triggered this build. 5.1.1. .binary Description BinaryBuildSource describes a binary file to be used for the Docker and Source build strategies, where the file will be extracted and used as the build source. Type object Property Type Description asFile string asFile indicates that the provided binary input should be considered a single file within the build input. For example, specifying "webapp.war" would place the provided binary as /webapp.war for the builder. If left empty, the Docker and Source build strategies assume this file is a zip, tar, or tar.gz file and extract it as the source. The custom strategy receives this binary as standard input. This filename may not contain slashes or be '..' or '.'. 5.1.2. .dockerStrategyOptions Description DockerStrategyOptions contains extra strategy options for container image builds Type object Property Type Description buildArgs array (EnvVar) Args contains any build arguments that are to be passed to Docker. 
See https://docs.docker.com/engine/reference/builder/#/arg for more details noCache boolean noCache overrides the docker-strategy noCache option in the build config 5.1.3. .revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 5.1.4. .revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 5.1.5. .revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.6. .revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.7. .sourceStrategyOptions Description SourceStrategyOptions contains extra strategy options for Source builds Type object Property Type Description incremental boolean incremental overrides the source-strategy incremental option in the build config 5.1.8. .triggeredBy Description triggeredBy describes which triggers started the most recent update to the build configuration and contains information about those triggers. Type array 5.1.9. .triggeredBy[] Description BuildTriggerCause holds information about a triggered build. It is used for displaying build trigger data for each build and build configuration in oc describe. It is also used to describe which triggers led to the most recent update in the build configuration. Type object Property Type Description bitbucketWebHook object BitbucketWebHookCause has information about a Bitbucket webhook that triggered a build. genericWebHook object GenericWebHookCause holds information about a generic WebHook that triggered a build. githubWebHook object GitHubWebHookCause has information about a GitHub webhook that triggered a build. gitlabWebHook object GitLabWebHookCause has information about a GitLab webhook that triggered a build. imageChangeBuild object ImageChangeCause contains information about the image that triggered a build message string message is used to store a human readable message for why the build was triggered. E.g.: "Manually triggered by user", "Configuration change",etc. 5.1.10. .triggeredBy[].bitbucketWebHook Description BitbucketWebHookCause has information about a Bitbucket webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string Secret is the obfuscated webhook secret that triggered a build. 5.1.11. 
.triggeredBy[].bitbucketWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 5.1.12. .triggeredBy[].bitbucketWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 5.1.13. .triggeredBy[].bitbucketWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.14. .triggeredBy[].bitbucketWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.15. .triggeredBy[].genericWebHook Description GenericWebHookCause holds information about a generic WebHook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string secret is the obfuscated webhook secret that triggered a build. 5.1.16. .triggeredBy[].genericWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 5.1.17. .triggeredBy[].genericWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 5.1.18. .triggeredBy[].genericWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.19. .triggeredBy[].genericWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.20. .triggeredBy[].githubWebHook Description GitHubWebHookCause has information about a GitHub webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string secret is the obfuscated webhook secret that triggered a build. 5.1.21. 
.triggeredBy[].githubWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 5.1.22. .triggeredBy[].githubWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 5.1.23. .triggeredBy[].githubWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.24. .triggeredBy[].githubWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.25. .triggeredBy[].gitlabWebHook Description GitLabWebHookCause has information about a GitLab webhook that triggered a build. Type object Property Type Description revision object SourceRevision is the revision or commit information from the source for the build secret string Secret is the obfuscated webhook secret that triggered a build. 5.1.26. .triggeredBy[].gitlabWebHook.revision Description SourceRevision is the revision or commit information from the source for the build Type object Required type Property Type Description git object GitSourceRevision is the commit information from a git source for a build type string type of the build source, may be one of 'Source', 'Dockerfile', 'Binary', or 'Images' 5.1.27. .triggeredBy[].gitlabWebHook.revision.git Description GitSourceRevision is the commit information from a git source for a build Type object Property Type Description author object SourceControlUser defines the identity of a user of source control commit string commit is the commit hash identifying a specific commit committer object SourceControlUser defines the identity of a user of source control message string message is the description of a specific commit 5.1.28. .triggeredBy[].gitlabWebHook.revision.git.author Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.29. .triggeredBy[].gitlabWebHook.revision.git.committer Description SourceControlUser defines the identity of a user of source control Type object Property Type Description email string email of the source control user name string name of the source control user 5.1.30. .triggeredBy[].imageChangeBuild Description ImageChangeCause contains information about the image that triggered a build Type object Property Type Description fromRef ObjectReference fromRef contains detailed information about an image that triggered a build. imageID string imageID is the ID of the image that triggered a new build. 5.2. 
API endpoints The following API endpoints are available: /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/clone POST : create clone of a Build /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/instantiate POST : create instantiate of a BuildConfig 5.2.1. /apis/build.openshift.io/v1/namespaces/{namespace}/builds/{name}/clone Table 5.1. Global path parameters Parameter Type Description name string name of the BuildRequest namespace string object name and auth scope, such as for teams and projects Table 5.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create clone of a Build Table 5.3. Body parameters Parameter Type Description body BuildRequest schema Table 5.4. HTTP responses HTTP code Reponse body 200 - OK BuildRequest schema 201 - Created BuildRequest schema 202 - Accepted BuildRequest schema 401 - Unauthorized Empty 5.2.2. /apis/build.openshift.io/v1/namespaces/{namespace}/buildconfigs/{name}/instantiate Table 5.5. Global path parameters Parameter Type Description name string name of the BuildRequest namespace string object name and auth scope, such as for teams and projects Table 5.6. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create instantiate of a BuildConfig Table 5.7. Body parameters Parameter Type Description body BuildRequest schema Table 5.8. HTTP responses HTTP code Response body 200 - OK Build schema 201 - Created Build schema 202 - Accepted Build schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/workloads_apis/buildrequest-build-openshift-io-v1
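In practice, the two endpoints documented above are usually driven through the oc client rather than called directly. The following sketch shows both paths; the token, API server address, namespace, and build names are placeholders, and the raw curl call assumes a minimal BuildRequest body whose metadata.name matches the build being cloned.
# instantiate a new build from a BuildConfig (POST .../buildconfigs/{name}/instantiate)
oc start-build <buildconfig_name> -n <namespace>
# clone an existing build (POST .../builds/{name}/clone)
oc start-build --from-build=<build_name> -n <namespace>
# equivalent raw request against the clone endpoint
curl -k -X POST \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"kind":"BuildRequest","apiVersion":"build.openshift.io/v1","metadata":{"name":"<build_name>"}}' \
  https://<api_server>:6443/apis/build.openshift.io/v1/namespaces/<namespace>/builds/<build_name>/clone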
Chapter 17. Setting up distributed tracing | Chapter 17. Setting up distributed tracing Distributed tracing allows you to track the progress of transactions between applications in a distributed system. In a microservices architecture, tracing tracks the progress of transactions between services. Trace data is useful for monitoring application performance and investigating issues with target systems and end-user applications. In AMQ Streams, tracing facilitates the end-to-end tracking of messages: from source systems to Kafka, and then from Kafka to target systems and applications. It complements the metrics that are available to view in JMX metrics , as well as the component loggers. Support for tracing is built in to the following Kafka components: Kafka Connect MirrorMaker MirrorMaker 2 AMQ Streams Kafka Bridge Tracing is not supported for Kafka brokers. You add tracing configuration to the properties file of the component. To enable tracing, you set environment variables and add the library of the tracing system to the Kafka classpath. For Jaeger tracing, you can add tracing artifacts for the following systems: OpenTelemetry with the Jaeger Exporter OpenTracing with Jaeger Note Support for OpenTracing is deprecated. The Jaeger clients are now retired and the OpenTracing project archived. As such, we cannot guarantee their support for future Kafka versions. To enable tracing in Kafka producers, consumers, and Kafka Streams API applications, you instrument application code. When instrumented, clients generate trace data; for example, when producing messages or writing offsets to the log. Note Setting up tracing for applications and systems beyond AMQ Streams is outside the scope of this content. 17.1. Outline of procedures To set up tracing for AMQ Streams, follow these procedures in order: Set up tracing for MirrorMaker, MirrorMaker 2, and Kafka Connect: Enable tracing for Kafka Connect Enable tracing for MirrorMaker 2 Enable tracing for MirrorMaker Set up tracing for clients: Initialize a Jaeger tracer for Kafka clients Instrument clients with tracers: Instrument producers and consumers for tracing Instrument Kafka Streams applications for tracing Note For information on enabling tracing for the Kafka Bridge, see Using the AMQ Streams Kafka Bridge . 17.2. Tracing options Use OpenTelemetry or OpenTracing (deprecated) with the Jaeger tracing system. OpenTelemetry and OpenTracing provide API specifications that are independent from the tracing or monitoring system. You use the APIs to instrument application code for tracing. Instrumented applications generate traces for individual requests across the distributed system. Traces are composed of spans that define specific units of work over time. Jaeger is a tracing system for microservices-based distributed systems. Jaeger implements the tracing APIs and provides client libraries for instrumentation. The Jaeger user interface allows you to query, filter, and analyze trace data. The Jaeger user interface showing a simple query Additional resources Jaeger documentation OpenTelemetry documentation OpenTracing documentation 17.3. Environment variables for tracing Use environment variables when you are enabling tracing for Kafka components or initializing a tracer for Kafka clients. Tracing environment variables are subject to change. For the latest information, see the OpenTelemetry documentation and OpenTracing documentation . The following tables describe the key environment variables for setting up a tracer. Table 17.1. 
OpenTelemetry environment variables Property Required Description OTEL_SERVICE_NAME Yes The name of the Jaeger tracing service for OpenTelemetry. OTEL_EXPORTER_JAEGER_ENDPOINT Yes The exporter used for tracing. OTEL_TRACES_EXPORTER Yes The exporter used for tracing. Set to otlp by default. If using Jaeger tracing, you need to set this environment variable as jaeger . If you are using another tracing implementation, specify the exporter used . Table 17.2. OpenTracing environment variables Property Required Description JAEGER_SERVICE_NAME Yes The name of the Jaeger tracer service. JAEGER_AGENT_HOST No The hostname for communicating with the jaeger-agent through the User Datagram Protocol (UDP). JAEGER_AGENT_PORT No The port used for communicating with the jaeger-agent through UDP. 17.4. Enabling tracing for Kafka Connect Enable distributed tracing for Kafka Connect using configuration properties. Only messages produced and consumed by Kafka Connect itself are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems. You can enable tracing that uses OpenTelemetry or OpenTracing. Procedure Add the tracing artifacts to the opt/kafka/libs directory. Configure producer and consumer tracing in the relevant Kafka Connect configuration file. If you are running Kafka Connect in standalone mode, edit the /opt/kafka/config/connect-standalone.properties file. If you are running Kafka Connect in distributed mode, edit the /opt/kafka/config/connect-distributed.properties file. Add the following tracing interceptor properties to the configuration file: Properties for OpenTelemetry producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor Properties for OpenTracing producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor With tracing enabled, you initialize tracing when you run the Kafka Connect script. Save the configuration file. Set the environment variables for tracing. Start Kafka Connect in standalone or distributed mode with the configuration file as a parameter (plus any connector properties): Running Kafka Connect in standalone mode su - kafka /opt/kafka/bin/connect-standalone.sh \ /opt/kafka/config/connect-standalone.properties \ connector1.properties \ [connector2.properties ...] Running Kafka Connect in distributed mode su - kafka /opt/kafka/bin/connect-distributed.sh /opt/kafka/config/connect-distributed.properties The internal consumers and producers of Kafka Connect are now enabled for tracing. 17.5. Enabling tracing for MirrorMaker 2 Enable distributed tracing for MirrorMaker 2 by defining the Interceptor properties in the MirrorMaker 2 properties file. Messages are traced between Kafka clusters. The trace data records messages entering and leaving the MirrorMaker 2 component. You can enable tracing that uses OpenTelemetry or OpenTracing. Procedure Add the tracing artifacts to the opt/kafka/libs directory. Configure producer and consumer tracing in the opt/kafka/config/connect-mirror-maker.properties file. 
Add the following tracing interceptor properties to the configuration file: Properties for OpenTelemetry header.converter=org.apache.kafka.connect.converters.ByteArrayConverter producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor Properties for OpenTracing header.converter=org.apache.kafka.connect.converters.ByteArrayConverter producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor ByteArrayConverter prevents Kafka Connect from converting message headers (containing trace IDs) to base64 encoding. This ensures that messages are the same in both the source and the target clusters. With tracing enabled, you initialize tracing when you run the Kafka MirrorMaker 2 script. Save the configuration file. Set the environment variables for tracing. Start MirrorMaker 2 with the producer and consumer configuration files as parameters: su - kafka /opt/kafka/bin/connect-mirror-maker.sh \ /opt/kafka/config/connect-mirror-maker.properties The internal consumers and producers of MirrorMaker 2 are now enabled for tracing. 17.6. Enabling tracing for MirrorMaker Enable distributed tracing for MirrorMaker by passing the Interceptor properties as consumer and producer configuration parameters. Messages are traced from the source cluster to the target cluster. The trace data records messages entering and leaving the MirrorMaker component. You can enable tracing that uses OpenTelemetry or OpenTracing. Procedure Add the tracing artifacts to the opt/kafka/libs directory. Configure producer tracing in the /opt/kafka/config/producer.properties file. Add the following tracing interceptor property: Producer property for OpenTelemetry producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor Producer property for OpenTracing producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor Save the configuration file. Configure consumer tracing in the /opt/kafka/config/consumer.properties file. Add the following tracing interceptor property: Consumer property for OpenTelemetry consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor Consumer property for OpenTracing consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor With tracing enabled, you initialize tracing when you run the Kafka MirrorMaker script. Save the configuration file. Set the environment variables for tracing. Start MirrorMaker with the producer and consumer configuration files as parameters: su - kafka /opt/kafka/bin/kafka-mirror-maker.sh \ --producer.config /opt/kafka/config/producer.properties \ --consumer.config /opt/kafka/config/consumer.properties \ --num.streams=2 The internal consumers and producers of MirrorMaker are now enabled for tracing. 17.7. Initializing tracing for Kafka clients Initialize a tracer, then instrument your client applications for distributed tracing. You can instrument Kafka producer and consumer clients, and Kafka Streams API applications. You can initialize a tracer for OpenTracing or OpenTelemetry. Configure and initialize a tracer using a set of tracing environment variables . 
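For example, a minimal shell sketch of the OpenTelemetry variables from Table 17.1, exported before the client application is started; the service name and Jaeger endpoint shown are placeholder values, not defaults:
export OTEL_SERVICE_NAME=my-kafka-client
export OTEL_TRACES_EXPORTER=jaeger
export OTEL_EXPORTER_JAEGER_ENDPOINT=http://<jaeger_host>:14250
# start the client application from the same shell so that it inherits the variables
java -jar <client_application>.jar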
Procedure In each client application add the dependencies for the tracer: Add the Maven dependencies to the pom.xml file for the client application: Dependencies for OpenTelemetry <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId> <version>1.19.0.redhat-00002</version> </dependency> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-kafka-clients-{OpenTelemetryKafkaClient}</artifactId> <version>1.19.0.redhat-00002</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-otlp</artifactId> <version>1.19.0.redhat-00002</version> </dependency> Dependencies for OpenTracing <dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.8.1.redhat-00002</version> </dependency> <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.15.redhat-00006</version> </dependency> Define the configuration of the tracer using the tracing environment variables . Create a tracer, which is initialized with the environment variables: Creating a tracer for OpenTelemetry OpenTelemetry ot = GlobalOpenTelemetry.get(); Creating a tracer for OpenTracing Tracer tracer = Configuration.fromEnv().getTracer(); Register the tracer as a global tracer: GlobalTracer.register(tracer); Instrument your client: Section 17.8, "Instrumenting producers and consumers for tracing" Section 17.9, "Instrumenting Kafka Streams applications for tracing" 17.8. Instrumenting producers and consumers for tracing Instrument application code to enable tracing in Kafka producers and consumers. Use a decorator pattern or interceptors to instrument your Java producer and consumer application code for tracing. You can then record traces when messages are produced or retrieved from a topic. OpenTelemetry and OpenTracing instrumentation projects provide classes that support instrumentation of producers and consumers. Decorator instrumentation For decorator instrumentation, create a modified producer or consumer instance for tracing. Decorator instrumentation is different for OpenTelemetry and OpenTracing. Interceptor instrumentation For interceptor instrumentation, add the tracing capability to the consumer or producer configuration. Interceptor instrumentation is the same for OpenTelemetry and OpenTracing. Prerequisites You have initialized tracing for the client . You enable instrumentation in producer and consumer applications by adding the tracing JARs as dependencies to your project. Procedure Perform these steps in the application code of each producer and consumer application. Instrument your client application code using either a decorator pattern or interceptors. To use a decorator pattern, create a modified producer or consumer instance to send or receive messages. You pass the original KafkaProducer or KafkaConsumer class. 
Example decorator instrumentation for OpenTelemetry // Producer instance Producer < String, String > op = new KafkaProducer < > ( configs, new StringSerializer(), new StringSerializer() ); Producer < String, String > producer = tracing.wrap(op); KafkaTracing tracing = KafkaTracing.create(GlobalOpenTelemetry.get()); producer.send(...); //consumer instance Consumer<String, String> oc = new KafkaConsumer<>( configs, new StringDeserializer(), new StringDeserializer() ); Consumer<String, String> consumer = tracing.wrap(oc); consumer.subscribe(Collections.singleton("mytopic")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); Example decorator instrumentation for OpenTracing //producer instance KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); TracingKafkaProducer.send(...) //consumer instance KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); tracingConsumer.subscribe(Collections.singletonList("mytopic")); ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); To use interceptors, set the interceptor class in the producer or consumer configuration. You use the KafkaProducer and KafkaConsumer classes in the usual way. The TracingProducerInterceptor and TracingConsumerInterceptor interceptor classes take care of the tracing capability. Example producer configuration using interceptors senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); producer.send(...); Example consumer configuration using interceptors consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); consumer.subscribe(Collections.singletonList("messages")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); 17.9. Instrumenting Kafka Streams applications for tracing Instrument application code to enable tracing in Kafka Streams API applications. Use a decorator pattern or interceptors to instrument your Kafka Streams API applications for tracing. You can then record traces when messages are produced or retrieved from a topic. Decorator instrumentation For decorator instrumentation, create a modified Kafka Streams instance for tracing. The OpenTracing instrumentation project provides a TracingKafkaClientSupplier class that supports instrumentation of Kafka Streams. You create a wrapped instance of the TracingKafkaClientSupplier supplier interface, which provides tracing instrumentation for Kafka Streams. For OpenTelemetry, the process is the same but you need to create a custom TracingKafkaClientSupplier class to provide the support. Interceptor instrumentation For interceptor instrumentation, add the tracing capability to the Kafka Streams producer and consumer configuration. 
Prerequisites You have initialized tracing for the client . You enable instrumentation in Kafka Streams applications by adding the tracing JARs as dependencies to your project. To instrument Kafka Streams with OpenTelemetry, you'll need to write a custom TracingKafkaClientSupplier . The custom TracingKafkaClientSupplier can extend Kafka's DefaultKafkaClientSupplier , overriding the producer and consumer creation methods to wrap the instances with the telemetry-related code. Example custom TracingKafkaClientSupplier private class TracingKafkaClientSupplier extends DefaultKafkaClientSupplier { @Override public Producer<byte[], byte[]> getProducer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getProducer(config)); } @Override public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getConsumer(config)); } @Override public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) { return this.getConsumer(config); } @Override public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) { return this.getConsumer(config); } } Procedure Perform these steps for each Kafka Streams API application. To use a decorator pattern, create an instance of the TracingKafkaClientSupplier supplier interface, then provide the supplier interface to KafkaStreams . Example decorator instrumentation KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start(); To use interceptors, set the interceptor class in the Kafka Streams producer and consumer configuration. The TracingProducerInterceptor and TracingConsumerInterceptor interceptor classes take care of the tracing capability. Example producer and consumer configuration using interceptors props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); props.put(StreamsConfig.CONSUMER_PREFIX + ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); 17.10. Specifying tracing systems with OpenTelemetry Instead of the default Jaeger system, you can specify other tracing systems that are supported by OpenTelemetry. If you want to use another tracing system with OpenTelemetry, do the following: Add the library of the tracing system to the Kafka classpath. Add the name of the tracing system as an additional exporter environment variable. Additional environment variable when not using Jaeger OTEL_SERVICE_NAME=my-tracing-service OTEL_TRACES_EXPORTER=zipkin 1 OTEL_EXPORTER_ZIPKIN_ENDPOINT=http://localhost:9411/api/v2/spans 2 1 The name of the tracing system. In this example, Zipkin is specified. 2 The endpoint of the specific selected exporter that listens for spans. In this example, a Zipkin endpoint is specified. Additional resources OpenTelemetry exporter values 17.11. Custom span names A tracing span is a logical unit of work in Jaeger, with an operation name, start time, and duration. Spans have built-in names, but you can specify custom span names in your Kafka client instrumentation where used. Specifying custom span names is optional and only applies when using a decorator pattern in producer and consumer client instrumentation or Kafka Streams instrumentation . 17.11.1. 
Specifying span names for OpenTelemetry Custom span names cannot be specified directly with OpenTelemetry. Instead, you retrieve span names by adding code to your client application to extract additional tags and attributes. Example code to extract attributes //Defines attribute extraction for a producer private static class ProducerAttribExtractor implements AttributesExtractor < ProducerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey("prod_start"), "prod1"); } @Override public void onEnd(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey("prod_end"), "prod2"); } } //Defines attribute extraction for a consumer private static class ConsumerAttribExtractor implements AttributesExtractor < ConsumerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey("con_start"), "con1"); } @Override public void onEnd(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey("con_end"), "con2"); } } //Extracts the attributes public static void main(String[] args) throws Exception { Map < String, Object > configs = new HashMap < > (Collections.singletonMap(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")); System.setProperty("otel.traces.exporter", "jaeger"); System.setProperty("otel.service.name", "myapp1"); KafkaTracing tracing = KafkaTracing.newBuilder(GlobalOpenTelemetry.get()) .addProducerAttributesExtractors(new ProducerAttribExtractor()) .addConsumerAttributesExtractors(new ConsumerAttribExtractor()) .build(); 17.11.2. Specifying span names for OpenTracing To specify custom span names for OpenTracing, pass a BiFunction object as an additional argument when instrumenting producers and consumers. For more information on built-in names and specifying custom span names to instrument client application code in a decorator pattern, see the OpenTracing Apache Kafka client instrumentation . | [
"producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor",
"producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor",
"su - kafka /opt/kafka/bin/connect-standalone.sh /opt/kafka/config/connect-standalone.properties connector1.properties [connector2.properties ...]",
"su - kafka /opt/kafka/bin/connect-distributed.sh /opt/kafka/config/connect-distributed.properties",
"header.converter=org.apache.kafka.connect.converters.ByteArrayConverter producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor",
"header.converter=org.apache.kafka.connect.converters.ByteArrayConverter producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor",
"su - kafka /opt/kafka/bin/connect-mirror-maker.sh /opt/kafka/config/connect-mirror-maker.properties",
"producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor",
"producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor",
"consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor",
"consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor",
"su - kafka /opt/kafka/bin/kafka-mirror-maker.sh --producer.config /opt/kafka/config/producer.properties --consumer.config /opt/kafka/config/consumer.properties --num.streams=2",
"<dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId> <version>1.19.0.redhat-00002</version> </dependency> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-kafka-clients-{OpenTelemetryKafkaClient}</artifactId> <version>1.19.0.redhat-00002</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-otlp</artifactId> <version>1.19.0.redhat-00002</version> </dependency>",
"<dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.8.1.redhat-00002</version> </dependency> <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.15.redhat-00006</version> </dependency>",
"OpenTelemetry ot = GlobalOpenTelemetry.get();",
"Tracer tracer = Configuration.fromEnv().getTracer();",
"GlobalTracer.register(tracer);",
"// Producer instance Producer < String, String > op = new KafkaProducer < > ( configs, new StringSerializer(), new StringSerializer() ); Producer < String, String > producer = tracing.wrap(op); KafkaTracing tracing = KafkaTracing.create(GlobalOpenTelemetry.get()); producer.send(...); //consumer instance Consumer<String, String> oc = new KafkaConsumer<>( configs, new StringDeserializer(), new StringDeserializer() ); Consumer<String, String> consumer = tracing.wrap(oc); consumer.subscribe(Collections.singleton(\"mytopic\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"//producer instance KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); TracingKafkaProducer.send(...) //consumer instance KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); tracingConsumer.subscribe(Collections.singletonList(\"mytopic\")); ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); producer.send(...);",
"consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); consumer.subscribe(Collections.singletonList(\"messages\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"private class TracingKafkaClientSupplier extends DefaultKafkaClientSupplier { @Override public Producer<byte[], byte[]> getProducer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getProducer(config)); } @Override public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getConsumer(config)); } @Override public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) { return this.getConsumer(config); } @Override public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) { return this.getConsumer(config); } }",
"KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();",
"props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); props.put(StreamsConfig.CONSUMER_PREFIX + ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName());",
"OTEL_SERVICE_NAME=my-tracing-service OTEL_TRACES_EXPORTER=zipkin 1 OTEL_EXPORTER_ZIPKIN_ENDPOINT=http://localhost:9411/api/v2/spans 2",
"//Defines attribute extraction for a producer private static class ProducerAttribExtractor implements AttributesExtractor < ProducerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"prod_start\"), \"prod1\"); } @Override public void onEnd(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"prod_end\"), \"prod2\"); } } //Defines attribute extraction for a consumer private static class ConsumerAttribExtractor implements AttributesExtractor < ConsumerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"con_start\"), \"con1\"); } @Override public void onEnd(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"con_end\"), \"con2\"); } } //Extracts the attributes public static void main(String[] args) throws Exception { Map < String, Object > configs = new HashMap < > (Collections.singletonMap(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, \"localhost:9092\")); System.setProperty(\"otel.traces.exporter\", \"jaeger\"); System.setProperty(\"otel.service.name\", \"myapp1\"); KafkaTracing tracing = KafkaTracing.newBuilder(GlobalOpenTelemetry.get()) .addProducerAttributesExtractors(new ProducerAttribExtractor()) .addConsumerAttributesExtractors(new ConsumerAttribExtractor()) .build();"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/using_amq_streams_on_rhel/assembly-distributed-tracing-str |
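The OTEL_* exporter settings shown in the command listing above are ordinary environment variables. As a minimal sketch of how they might be applied at runtime (the my-kafka-client.jar name is a placeholder, not part of the original example), an instrumented client could be launched like this:

# Export the tracing variables from the example above, then start the
# instrumented Kafka client application (the JAR name is a placeholder).
export OTEL_SERVICE_NAME=my-tracing-service
export OTEL_TRACES_EXPORTER=zipkin
export OTEL_EXPORTER_ZIPKIN_ENDPOINT=http://localhost:9411/api/v2/spans
java -jar my-kafka-client.jar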
Chapter 15. Mitigating security threats | Chapter 15. Mitigating security threats Security vulnerabilities exist in any authentication server. See the Internet Engineering Task Force's (IETF) OAuth 2.0 Threat Model and the OAuth 2.0 Security Best Current Practice for more information. 15.1. Host Red Hat build of Keycloak uses the public hostname in several ways, such as within token issuer fields and URLs in password reset emails. By default, the hostname derives from request headers. No validation exists to ensure a hostname is valid. If you are not using a load balancer, or proxy, with Red Hat build of Keycloak to prevent invalid host headers, configure the acceptable hostnames. The hostname's Service Provider Interface (SPI) provides a way to configure the hostname for requests. You can use this built-in provider to set a fixed URL for frontend requests while allowing backend requests based on the request URI. If the built-in provider does not have the required capability, you can develop a customized provider. 15.2. Admin endpoints and Admin Console Red Hat build of Keycloak exposes the administrative REST API and the web console on the same port as non-administrative usage. Do not expose administrative endpoints externally if external access is not necessary. 15.3. Brute force attacks A brute force attack attempts to guess a user's password by trying to log in multiple times. Red Hat build of Keycloak has brute force detection capabilities and can temporarily disable a user account if the number of login failures exceeds a specified threshold. Note Red Hat build of Keycloak disables brute force detection by default. Enable this feature to protect against brute force attacks. Procedure To enable this protection: Click Realm Settings in the menu Click the Security Defenses tab. Click the Brute Force Detection tab. Brute force detection Red Hat build of Keycloak can deploy permanent lockout and temporary lockout actions when it detects an attack. Permanent lockout disables a user account until an administrator re-enables it. Temporary lockout disables a user account for a specific period of time. The time period that the account is disabled increases as the attack continues. Note When a user is temporarily locked and attempts to log in, Red Hat build of Keycloak displays the default Invalid username or password error message. This message is the same error message as the message displayed for an invalid username or invalid password to ensure the attacker is unaware the account is disabled. Common Parameters Name Description Default Max Login Failures The maximum number of login failures. 30 failures. Quick Login Check Milliseconds The minimum time between login attempts. 1000 milliseconds. Minimum Quick Login Wait The minimum time the user is disabled when login attempts are quicker than Quick Login Check Milliseconds . 1 minute. Permanent Lockout Flow On successful login Reset count On failed login Increment count If count greater than Max Login Failures Permanently disable user Else if the time between this failure and the last failure is less than Quick Login Check Milliseconds Temporarily disable user for Minimum Quick Login Wait When Red Hat build of Keycloak disables a user, the user cannot log in until an administrator enables the user. Enabling an account resets the count . Temporary Lockout Parameters Name Description Default Wait Increment The time added to the time a user is temporarily disabled when the user's login attempts exceed Max Login Failures . 1 minute. 
Max Wait The maximum time a user is temporarily disabled. 15 minutes. Failure Reset Time The time when the failure count resets. The timer runs from the last failed login. 12 hours. Temporary Lockout Algorithm On successful login Reset count On failed login If the time between this failure and the last failure is greater than Failure Reset Time Reset count Increment count Calculate wait using Wait Increment * ( count / Max Login Failures ). The division is an integer division rounded down to a whole number (a worked example of this calculation follows at the end of this chapter). If wait equals 0 and the time between this failure and the last failure is less than Quick Login Check Milliseconds , set wait to Minimum Quick Login Wait . Temporarily disable the user for the smaller of wait and Max Wait seconds. count does not increment when a temporarily disabled account commits a login failure. The downside of Red Hat build of Keycloak brute force detection is that the server becomes vulnerable to denial of service attacks. In a denial of service attack, an attacker can attempt to log in by guessing passwords for any accounts it knows, eventually causing Red Hat build of Keycloak to disable those accounts. Consider using intrusion prevention software (IPS). Red Hat build of Keycloak logs every login failure and the client IP address involved in the failure. You can point the IPS to the Red Hat build of Keycloak server's log file, and the IPS can modify firewalls to block connections from these IP addresses. 15.3.1. Password policies Ensure you have a complex password policy to force users to choose complex passwords. See the Password Policies chapter for more information. Prevent password guessing by setting up the Red Hat build of Keycloak server to use one-time passwords. 15.4. Read-only user attributes Typical users who are stored in Red Hat build of Keycloak have various attributes related to their user profiles. Such attributes include email, firstName, or lastName. However, users may also have attributes that are not typical profile data but rather metadata. These metadata attributes should usually be read-only for users, and typical users should never have a way to update them from the Red Hat build of Keycloak user interface or Account REST API. Some of the attributes should even be read-only for administrators when creating or updating a user with the Admin REST API. The metadata attributes usually fall into these groups: Various links or metadata related to user storage providers. For example, in the case of LDAP integration, the LDAP_ID attribute contains the ID of the user in the LDAP server. Metadata provisioned by user storage. For example, createdTimestamp provisioned from LDAP should always be read-only for the user or administrator. Metadata related to various authenticators. For example, the KERBEROS_PRINCIPAL attribute can contain the Kerberos principal name of the particular user. Similarly, the usercertificate attribute can contain metadata related to binding the user with the data from the X.509 certificate, which is typically used when X.509 certificate authentication is enabled. Metadata related to the identification of users by applications/clients. For example, saml.persistent.name.id.for.my_app can contain the SAML NameID, which will be used by the client application my_app as the identifier of the user. Metadata related to authorization policies, which are used for attribute-based access control (ABAC). Values of those attributes may be used for authorization decisions.
Hence it is important that those attributes cannot be updated by users. In the long term, Red Hat build of Keycloak will have a proper User Profile SPI, which will allow fine-grained configuration of every user attribute. Currently, this capability is not fully available. So Red Hat build of Keycloak has an internal list of user attributes that are read-only for users and read-only for administrators, configured at the server level. This is the list of read-only attributes, which are used internally by the Red Hat build of Keycloak default providers and functionality and hence are always read-only: For users: KERBEROS_PRINCIPAL , LDAP_ID , LDAP_ENTRY_DN , CREATED_TIMESTAMP , createTimestamp , modifyTimestamp , userCertificate , saml.persistent.name.id.for.* , ENABLED , EMAIL_VERIFIED For administrators: KERBEROS_PRINCIPAL , LDAP_ID , LDAP_ENTRY_DN , CREATED_TIMESTAMP , createTimestamp , modifyTimestamp System administrators have a way to add additional attributes to this list. The configuration is currently available at the server level. You can add this configuration by using the spi-user-profile-declarative-user-profile-read-only-attributes and spi-user-profile-declarative-user-profile-admin-read-only-attributes options. For example: kc.[sh|bat] start --spi-user-profile-declarative-user-profile-read-only-attributes=foo,bar* For this example, users and administrators would not be able to update the attribute foo . Users would not be able to edit any attributes starting with bar , for example bar or barrier . The configuration is case-insensitive, so attributes like FOO or BarRier will be denied as well for this example. The wildcard character * is supported only at the end of the attribute name, so the administrator can effectively deny all attributes starting with the specified prefix. A * in the middle of the attribute name is treated as a normal character. 15.5. Clickjacking Clickjacking is a technique of tricking users into clicking on a user interface element different from what users perceive. A malicious site loads the target site in a transparent iFrame, overlaid on top of a set of dummy buttons placed directly under important buttons on the target site. When a user clicks a visible button, they are clicking a button on the hidden page. An attacker can steal a user's authentication credentials and access their resources by using this method. By default, every response by Red Hat build of Keycloak sets some specific HTTP headers that can prevent this from happening. Specifically, it sets X-Frame-Options and Content-Security-Policy . You should take a look at the definition of both of these headers as there is a lot of fine-grained browser access you can control. Procedure In the Admin Console, you can specify the values of the X-Frame-Options and Content-Security-Policy headers. Click the Realm Settings menu item. Click the Security Defenses tab. Security Defenses By default, Red Hat build of Keycloak only sets up a same-origin policy for iframes. 15.6. SSL/HTTPS requirement OAuth 2.0/OpenID Connect uses access tokens for security. Attackers can scan your network for access tokens and use them to perform malicious operations for which the token has permission. This attack is known as a man-in-the-middle attack. To prevent man-in-the-middle attacks, use SSL/HTTPS for communication between the Red Hat build of Keycloak auth server and the clients that Red Hat build of Keycloak secures. Red Hat build of Keycloak has three modes for SSL/HTTPS .
SSL is complex to set up, so Red Hat build of Keycloak allows non-HTTPS communication over private IP addresses such as localhost, 192.168.x.x, and other private IP addresses. In production, ensure that you enable SSL and that SSL is required for all operations. On the adapter/client side, you can disable the SSL trust manager. The trust manager ensures that the party Red Hat build of Keycloak communicates with has a valid identity and verifies the DNS domain name against the server's certificate. In production, ensure that each of your client adapters uses a truststore to prevent DNS man-in-the-middle attacks. 15.7. CSRF attacks A cross-site request forgery (CSRF) attack uses HTTP requests from users that websites have already authenticated. Any site using cookie-based authentication is vulnerable to CSRF attacks. You can mitigate these attacks by matching a state cookie against a posted form or query parameter. The OAuth 2.0 login specification requires that a state cookie matches against a transmitted state parameter. Red Hat build of Keycloak fully implements this part of the specification, so all logins are protected. The Red Hat build of Keycloak Admin Console is a JavaScript/HTML5 application that makes REST calls to the backend Red Hat build of Keycloak admin REST API. These calls all require bearer token authentication and consist of JavaScript Ajax calls, so CSRF is impossible. You can configure the admin REST API to validate the CORS origins. The user account management section in Red Hat build of Keycloak can be vulnerable to CSRF. To prevent CSRF attacks, Red Hat build of Keycloak sets a state cookie and embeds the value of this cookie in hidden form fields or query parameters within action links. Red Hat build of Keycloak checks the query/form parameter against the state cookie to verify that the user is making the call. 15.8. Unspecific redirect URIs Make your registered redirect URIs as specific as feasible. Registering vague redirect URIs for Authorization Code Flows can allow malicious clients to impersonate another client with broader access. Impersonation can happen if two clients live under the same domain, for example. 15.9. FAPI compliance To make sure that the Red Hat build of Keycloak server validates your client to be more secure and FAPI compliant, you can configure client policies for FAPI support. Details are described in the FAPI section of the Securing Applications and Services Guide . Among other things, this enforces some of the security best practices described above, such as requiring SSL for clients and using specific redirect URIs. 15.10. Compromised access and refresh tokens Red Hat build of Keycloak includes several actions to prevent malicious actors from stealing access tokens and refresh tokens. The crucial action is to enforce SSL/HTTPS communication between Red Hat build of Keycloak and its clients and applications. Red Hat build of Keycloak does not enable SSL by default. Another action to mitigate damage from leaked access tokens is to shorten the token lifespans. You can specify token lifespans within the timeouts page . Short lifespans for access tokens force clients and applications to refresh their access tokens after a short time. If an admin detects a leak, the admin can log out all user sessions to invalidate these refresh tokens or set up a revocation policy. Ensure refresh tokens always stay private to the client and are never transmitted.
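As a sketch of the lifespan-shortening advice above, the same realm setting can also be adjusted from the command line with the Admin CLI, assuming kcadm.sh is available; the realm name myrealm, the admin user, and the five-minute value are illustrative assumptions rather than values taken from this guide:

# Authenticate the Admin CLI against the server (admin user and server URL are assumptions).
bin/kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin
# Shorten the access token lifespan for the realm to 300 seconds (5 minutes).
bin/kcadm.sh update realms/myrealm -s accessTokenLifespan=300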
You can mitigate damage from leaked access tokens and refresh tokens by issuing these tokens as holder-of-key tokens. See OAuth 2.0 Mutual TLS Client Certificate Bound Access Token for more information. If an access token or refresh token is compromised, access the Admin Console and push a not-before revocation policy to all applications. Pushing a not-before policy ensures that any tokens issued before that time become invalid. Pushing a new not-before policy ensures that applications must download new public keys from Red Hat build of Keycloak and mitigate damage from a compromised realm signing key. See the keys chapter for more information. You can disable specific applications, clients, or users if they are compromised. 15.11. Compromised authorization code For the OIDC Auth Code Flow , Red Hat build of Keycloak generates a cryptographically strong random value for its authorization codes. An authorization code is used only once to obtain an access token. On the timeouts page in the Admin Console, you can specify the length of time an authorization code is valid. Ensure that the length of time is less than 10 seconds, which is long enough for a client to request a token from the code. You can also defend against leaked authorization codes by applying Proof Key for Code Exchange (PKCE) to clients. 15.12. Open redirectors An open redirector is an endpoint using a parameter to automatically redirect a user agent to the location specified by the parameter value without validation. An attacker can use the end-user authorization endpoint and the redirect URI parameter to use the authorization server as an open redirector, using a user's trust in an authorization server to launch a phishing attack. Red Hat build of Keycloak requires that all registered applications and clients register at least one redirection URI pattern. When a client requests that Red Hat build of Keycloak performs a redirect, Red Hat build of Keycloak checks the redirect URI against the list of valid registered URI patterns. Clients and applications must register as specific a URI pattern as possible to mitigate open redirector attacks. If an application requires a non http(s) custom scheme, it should be an explicit part of the validation pattern (for example custom:/app/* ). For security reasons a general pattern like * does not cover non http(s) schemes. 15.13. Password database compromised Red Hat build of Keycloak does not store passwords in raw text but as hashed text, using the PBKDF2 hashing algorithm. Red Hat build of Keycloak performs 27,500 hashing iterations, the number of iterations recommended by the security community. This number of hashing iterations can adversely affect performance as PBKDF2 hashing uses a significant amount of CPU resources. 15.14. Limiting scope By default, new client applications have unlimited role scope mappings . Every access token for that client contains all permissions that the user has. If an attacker compromises the client and obtains the client's access tokens, each system that the user can access is compromised. Limit the roles of an access token by using the Scope menu for each client. Alternatively, you can set role scope mappings at the Client Scope level and assign Client Scopes to your client by using the Client Scope menu . 15.15. Limit token audience In environments with low levels of trust among services, limit the audiences on the token. See the OAuth2 Threat Model and the Audience Support section for more information. 15.16. 
Limit Authentication Sessions When a login page is opened for the first time in a web browser, Red Hat build of Keycloak creates an object called an authentication session that stores some useful information about the request. Whenever a new login page is opened from a different tab in the same browser, Red Hat build of Keycloak creates a new record called an authentication sub-session that is stored within the authentication session. Authentication requests can come from any type of client, such as the Admin CLI. In that case, a new authentication session is also created with one authentication sub-session. Note that authentication sessions can also be created in ways other than using a browser flow. The text below is applicable regardless of the source flow. Note This section describes deployments that use the Data Grid provider for authentication sessions. An authentication session is internally stored as a RootAuthenticationSessionEntity . Each RootAuthenticationSessionEntity can have multiple authentication sub-sessions stored within the RootAuthenticationSessionEntity as a collection of AuthenticationSessionEntity objects. Red Hat build of Keycloak stores authentication sessions in a dedicated Data Grid cache. The number of AuthenticationSessionEntity per RootAuthenticationSessionEntity contributes to the size of each cache entry. The total memory footprint of the authentication session cache is determined by the number of stored RootAuthenticationSessionEntity and by the number of AuthenticationSessionEntity within each RootAuthenticationSessionEntity . The number of maintained RootAuthenticationSessionEntity objects corresponds to the number of unfinished login flows from the browser. To keep the number of RootAuthenticationSessionEntity under control, using an advanced firewall control to limit ingress network traffic is recommended. Higher memory usage may occur for deployments where there are many active RootAuthenticationSessionEntity with a lot of AuthenticationSessionEntity . If the load balancer does not support or is not configured for session stickiness, the load over the network in a cluster can increase significantly. The reason for this load is that each request that lands on a node that does not own the appropriate authentication session needs to retrieve and update the authentication session record in the owner node, which involves a separate network transmission for both the retrieval and the storage. The maximum number of AuthenticationSessionEntity per RootAuthenticationSessionEntity can be configured in the authenticationSessions SPI by setting the authSessionsLimit property. The default value is set to 300 AuthenticationSessionEntity per RootAuthenticationSessionEntity . When this limit is reached, the oldest authentication sub-session will be removed after a new authentication session request. The following example shows how to limit the number of active AuthenticationSessionEntity per RootAuthenticationSessionEntity to 100. bin/kc.[sh|bat] start --spi-authentication-sessions-infinispan-auth-sessions-limit=100 15.17. SQL injection attacks Currently, Red Hat build of Keycloak has no known SQL injection vulnerabilities. | [
"kc.[sh|bat] start --spi-user-profile-declarative-user-profile-read-only-attributes=foo,bar*",
"bin/kc.[sh|bat] start --spi-authentication-sessions-infinispan-auth-sessions-limit=100"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_administration_guide/mitigating_security_threats |
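To make the temporary lockout arithmetic in section 15.3 concrete, the following worked sketch applies the formula with the default values listed in that section (Wait Increment of 1 minute, Max Login Failures of 30, Max Wait of 15 minutes); the failure counts are arbitrary illustrations:

# wait = Wait Increment * (count / Max Login Failures), integer division, capped at Max Wait
WAIT_INCREMENT=60; MAX_LOGIN_FAILURES=30; MAX_WAIT=900
for COUNT in 29 30 59 60 300 500; do
  WAIT=$(( WAIT_INCREMENT * (COUNT / MAX_LOGIN_FAILURES) ))
  (( WAIT > MAX_WAIT )) && WAIT=$MAX_WAIT
  echo "failure count=$COUNT -> wait=${WAIT}s"
done
# Output: 0s for 29 (Minimum Quick Login Wait applies instead if the attempts were quick),
# 60s for 30 and 59, 120s for 60, 600s for 300, and 900s (capped at Max Wait) for 500.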
Chapter 3. ClusterRole [rbac.authorization.k8s.io/v1] | Chapter 3. ClusterRole [rbac.authorization.k8s.io/v1] Description ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding. Type object 3.1. Specification Property Type Description aggregationRule object AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. rules array Rules holds all the PolicyRules for this ClusterRole rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. 3.1.1. .aggregationRule Description AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole Type object Property Type Description clusterRoleSelectors array (LabelSelector) ClusterRoleSelectors holds a list of selectors which will be used to find ClusterRoles and create the rules. If any of the selectors match, then the ClusterRole's permissions will be added 3.1.2. .rules Description Rules holds all the PolicyRules for this ClusterRole Type array 3.1.3. .rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "" represents the core API group and "*" represents all API groups. nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. '*' represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs. 3.2. API endpoints The following API endpoints are available: /apis/rbac.authorization.k8s.io/v1/clusterroles DELETE : delete collection of ClusterRole GET : list or watch objects of kind ClusterRole POST : create a ClusterRole /apis/rbac.authorization.k8s.io/v1/watch/clusterroles GET : watch individual changes to a list of ClusterRole. 
deprecated: use the 'watch' parameter with a list operation instead. /apis/rbac.authorization.k8s.io/v1/clusterroles/{name} DELETE : delete a ClusterRole GET : read the specified ClusterRole PATCH : partially update the specified ClusterRole PUT : replace the specified ClusterRole /apis/rbac.authorization.k8s.io/v1/watch/clusterroles/{name} GET : watch changes to an object of kind ClusterRole. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /apis/rbac.authorization.k8s.io/v1/clusterroles Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ClusterRole Table 3.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 3.3. Body parameters Parameter Type Description body DeleteOptions schema Table 3.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ClusterRole Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterRole Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.8. Body parameters Parameter Type Description body ClusterRole schema Table 3.9. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 202 - Accepted ClusterRole schema 401 - Unauthorized Empty 3.2.2. /apis/rbac.authorization.k8s.io/v1/watch/clusterroles Table 3.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ClusterRole. deprecated: use the 'watch' parameter with a list operation instead. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/rbac.authorization.k8s.io/v1/clusterroles/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the ClusterRole Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ClusterRole Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. 
The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterRole Table 3.17. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterRole Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. 
Table 3.19. Body parameters Parameter Type Description body Patch schema Table 3.20. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterRole Table 3.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.22. Body parameters Parameter Type Description body ClusterRole schema Table 3.23. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty 3.2.4. /apis/rbac.authorization.k8s.io/v1/watch/clusterroles/{name} Table 3.24. Global path parameters Parameter Type Description name string name of the ClusterRole Table 3.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ClusterRole. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/rbac_apis/clusterrole-rbac-authorization-k8s-io-v1 |
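To tie the rules[] fields documented above together, here is a minimal sketch that creates and then reads back a ClusterRole through the CLI; the pod-reader name is illustrative and not part of the reference:

# Create a simple ClusterRole using the documented rules[] fields (apiGroups, resources, verbs).
cat <<'EOF' | oc apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]            # "" selects the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
EOF
# Read it back through the endpoints documented above.
oc get clusterrole pod-reader -o yaml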
Chapter 1. Deployment overview | Chapter 1. Deployment overview AMQ Streams simplifies the process of running Apache Kafka in an OpenShift cluster. This guide provides instructions for deploying and managing AMQ Streams. Deployment options and steps are covered using the example installation files included with AMQ Streams. While the guide highlights important configuration considerations, it does not cover all available options. For a deeper understanding of the Kafka component configuration options, refer to the AMQ Streams Custom Resource API Reference . In addition to deployment instructions, the guide offers pre- and post-deployment guidance. It covers setting up and securing client access to your Kafka cluster. Furthermore, it explores additional deployment options such as metrics integration, distributed tracing, and cluster management tools like Cruise Control and the AMQ Streams Drain Cleaner. You'll also find recommendations on managing AMQ Streams and fine-tuning Kafka configuration for optimal performance. Upgrade instructions are provided for both AMQ Streams and Kafka, to help keep your deployment up to date. AMQ Streams is designed to be compatible with all types of OpenShift clusters, irrespective of their distribution. Whether your deployment involves public or private clouds, or if you are setting up a local development environment, the instructions in this guide are applicable in all cases. 1.1. AMQ Streams custom resources Deployment of Kafka components to an OpenShift cluster using AMQ Streams is highly configurable through the application of custom resources. These custom resources are created as instances of APIs added by Custom Resource Definitions (CRDs) to extend OpenShift resources. CRDs act as configuration instructions to describe the custom resources in an OpenShift cluster, and are provided with AMQ Streams for each Kafka component used in a deployment, as well as users and topics. CRDs and custom resources are defined as YAML files. Example YAML files are provided with the AMQ Streams distribution. CRDs also allow AMQ Streams resources to benefit from native OpenShift features like CLI accessibility and configuration validation. 1.1.1. AMQ Streams custom resource example CRDs require a one-time installation in a cluster to define the schemas used to instantiate and manage AMQ Streams-specific resources. After a new custom resource type is added to your cluster by installing a CRD, you can create instances of the resource based on its specification. Depending on the cluster setup, installation typically requires cluster admin privileges. Note Access to manage custom resources is limited to AMQ Streams administrators. For more information, see Section 4.5, "Designating AMQ Streams administrators" . A CRD defines a new kind of resource, such as kind:Kafka , within an OpenShift cluster. The Kubernetes API server allows custom resources to be created based on the kind and understands from the CRD how to validate and store the custom resource when it is added to the OpenShift cluster. Warning When a CustomResourceDefinition is deleted, custom resources of that type are also deleted. Additionally, OpenShift resources created by the custom resource are also deleted, such as Deployment , Pod , Service and ConfigMap resources. Each AMQ Streams-specific custom resource conforms to the schema defined by the CRD for the resource's kind . The custom resources for AMQ Streams components have common configuration properties, which are defined under spec . 
To understand the relationship between a CRD and a custom resource, let's look at a sample of the CRD for a Kafka topic. Kafka topic CRD apiVersion: kafka.strimzi.io/v1beta2 kind: CustomResourceDefinition metadata: 1 name: kafkatopics.kafka.strimzi.io labels: app: strimzi spec: 2 group: kafka.strimzi.io versions: v1beta2 scope: Namespaced names: # ... singular: kafkatopic plural: kafkatopics shortNames: - kt 3 additionalPrinterColumns: 4 # ... subresources: status: {} 5 validation: 6 openAPIV3Schema: properties: spec: type: object properties: partitions: type: integer minimum: 1 replicas: type: integer minimum: 1 maximum: 32767 # ... 1 The metadata for the topic CRD, its name and a label to identify the CRD. 2 The specification for this CRD, including the group (domain) name, the plural name and the supported schema version, which are used in the URL to access the API of the topic. The other names are used to identify instance resources in the CLI. For example, oc get kafkatopic my-topic or oc get kafkatopics . 3 The shortname can be used in CLI commands. For example, oc get kt can be used as an abbreviation instead of oc get kafkatopic . 4 The information presented when using a get command on the custom resource. 5 The current status of the CRD as described in the schema reference for the resource. 6 openAPIV3Schema validation provides validation for the creation of topic custom resources. For example, a topic requires at least one partition and one replica. Note You can identify the CRD YAML files supplied with the AMQ Streams installation files, because the file names contain an index number followed by 'Crd'. Here is a corresponding example of a KafkaTopic custom resource. Kafka topic custom resource apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic 1 metadata: name: my-topic labels: strimzi.io/cluster: my-cluster 2 spec: 3 partitions: 1 replicas: 1 config: retention.ms: 7200000 segment.bytes: 1073741824 status: conditions: 4 lastTransitionTime: "2019-08-20T11:37:00.706Z" status: "True" type: Ready observedGeneration: 1 / ... 1 The kind and apiVersion identify the CRD of which the custom resource is an instance. 2 A label, applicable only to KafkaTopic and KafkaUser resources, that defines the name of the Kafka cluster (which is same as the name of the Kafka resource) to which a topic or user belongs. 3 The spec shows the number of partitions and replicas for the topic as well as the configuration parameters for the topic itself. In this example, the retention period for a message to remain in the topic and the segment file size for the log are specified. 4 Status conditions for the KafkaTopic resource. The type condition changed to Ready at the lastTransitionTime . Custom resources can be applied to a cluster through the platform CLI. When the custom resource is created, it uses the same validation as the built-in resources of the Kubernetes API. After a KafkaTopic custom resource is created, the Topic Operator is notified and corresponding Kafka topics are created in AMQ Streams. Additional resources Extend the Kubernetes API with CustomResourceDefinitions Example configuration files provided with AMQ Streams 1.2. AMQ Streams operators AMQ Streams operators are purpose-built with specialist operational knowledge to effectively manage Kafka on OpenShift. Each operator performs a distinct function. Cluster Operator The Cluster Operator handles the deployment and management of Apache Kafka clusters on OpenShift. 
It automates the setup of Kafka brokers, and other Kafka components and resources. Topic Operator The Topic Operator manages the creation, configuration, and deletion of topics within Kafka clusters. User Operator The User Operator manages Kafka users that require access to Kafka brokers. When you deploy AMQ Streams, you first deploy the Cluster Operator. The Cluster Operator is then ready to handle the deployment of Kafka. You can also deploy the Topic Operator and User Operator using the Cluster Operator (recommended) or as standalone operators. You would use a standalone operator with a Kafka cluster that is not managed by the Cluster Operator. The Topic Operator and User Operator are part of the Entity Operator. The Cluster Operator can deploy one or both operators based on the Entity Operator configuration. Important To deploy the standalone operators, you need to set environment variables to connect to a Kafka cluster. These environment variables do not need to be set if you are deploying the operators using the Cluster Operator as they will be set by the Cluster Operator. 1.2.1. Watching AMQ Streams resources in OpenShift namespaces Operators watch and manage AMQ Streams resources in OpenShift namespaces. The Cluster Operator can watch a single namespace, multiple namespaces, or all namespaces in an OpenShift cluster. The Topic Operator and User Operator can watch a single namespace. The Cluster Operator watches for Kafka resources The Topic Operator watches for KafkaTopic resources The User Operator watches for KafkaUser resources The Topic Operator and the User Operator can only watch a single Kafka cluster in a namespace. And they can only be connected to a single Kafka cluster. If multiple Topic Operators watch the same namespace, name collisions and topic deletion can occur. This is because each Kafka cluster uses Kafka topics that have the same name (such as __consumer_offsets ). Make sure that only one Topic Operator watches a given namespace. When using multiple User Operators with a single namespace, a user with a given username can exist in more than one Kafka cluster. If you deploy the Topic Operator and User Operator using the Cluster Operator, they watch the Kafka cluster deployed by the Cluster Operator by default. You can also specify a namespace using watchedNamespace in the operator configuration. For a standalone deployment of each operator, you specify a namespace and connection to the Kafka cluster to watch in the configuration. 1.2.2. Managing RBAC resources The Cluster Operator creates and manages role-based access control (RBAC) resources for AMQ Streams components that need access to OpenShift resources. For the Cluster Operator to function, it needs permission within the OpenShift cluster to interact with Kafka resources, such as Kafka and KafkaConnect , as well as managed resources like ConfigMap , Pod , Deployment , and Service . Permission is specified through the following OpenShift RBAC resources: ServiceAccount Role and ClusterRole RoleBinding and ClusterRoleBinding 1.2.2.1. Delegating privileges to AMQ Streams components The Cluster Operator runs under a service account called strimzi-cluster-operator . It is assigned cluster roles that give it permission to create the RBAC resources for AMQ Streams components. Role bindings associate the cluster roles with the service account. OpenShift prevents components operating under one ServiceAccount from granting another ServiceAccount privileges that the granting ServiceAccount does not have. 
Because the Cluster Operator creates the RoleBinding and ClusterRoleBinding RBAC resources needed by the resources it manages, it requires a role that gives it the same privileges. The following tables describe the RBAC resources created by the Cluster Operator. Table 1.1. ServiceAccount resources Name Used by <cluster_name> -kafka Kafka broker pods <cluster_name> -zookeeper ZooKeeper pods <cluster_name> -cluster-connect Kafka Connect pods <cluster_name> -mirror-maker MirrorMaker pods <cluster_name> -mirrormaker2 MirrorMaker 2 pods <cluster_name> -bridge Kafka Bridge pods <cluster_name> -entity-operator Entity Operator Table 1.2. ClusterRole resources Name Used by strimzi-cluster-operator-namespaced Cluster Operator strimzi-cluster-operator-global Cluster Operator strimzi-cluster-operator-leader-election Cluster Operator strimzi-kafka-broker Cluster Operator, rack feature (when used) strimzi-entity-operator Cluster Operator, Topic Operator, User Operator strimzi-kafka-client Cluster Operator, Kafka clients for rack awareness Table 1.3. ClusterRoleBinding resources Name Used by strimzi-cluster-operator Cluster Operator strimzi-cluster-operator-kafka-broker-delegation Cluster Operator, Kafka brokers for rack awareness strimzi-cluster-operator-kafka-client-delegation Cluster Operator, Kafka clients for rack awareness Table 1.4. RoleBinding resources Name Used by strimzi-cluster-operator Cluster Operator strimzi-cluster-operator-kafka-broker-delegation Cluster Operator, Kafka brokers for rack awareness 1.2.2.2. Running the Cluster Operator using a ServiceAccount The Cluster Operator is best run using a ServiceAccount . Example ServiceAccount for the Cluster Operator apiVersion: v1 kind: ServiceAccount metadata: name: strimzi-cluster-operator labels: app: strimzi The Deployment of the operator then needs to specify this in its spec.template.spec.serviceAccountName . Partial example of Deployment for the Cluster Operator apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator labels: app: strimzi spec: replicas: 1 selector: matchLabels: name: strimzi-cluster-operator strimzi.io/kind: cluster-operator template: metadata: labels: name: strimzi-cluster-operator strimzi.io/kind: cluster-operator spec: serviceAccountName: strimzi-cluster-operator # ... 1.2.2.3. ClusterRole resources The Cluster Operator uses ClusterRole resources to provide the necessary access to resources. Depending on the OpenShift cluster setup, a cluster administrator might be needed to create the cluster roles. Note Cluster administrator rights are only needed for the creation of ClusterRole resources. The Cluster Operator will not run under a cluster admin account. ClusterRole resources follow the principle of least privilege and contain only those privileges needed by the Cluster Operator to operate the cluster of the Kafka component. The first set of assigned privileges allow the Cluster Operator to manage OpenShift resources such as Deployment , Pod , and ConfigMap . All cluster roles are required by the Cluster Operator in order to delegate privileges. The Cluster Operator uses the strimzi-cluster-operator-namespaced and strimzi-cluster-operator-global cluster roles to grant permission at the namespace-scoped resources level and cluster-scoped resources level. 
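If you want to verify that these cluster roles exist after applying the installation files, a quick check is possible; the label selector assumes the app: strimzi label used in the definitions that follow:

oc get clusterroles -l app=strimzi
oc describe clusterrole strimzi-cluster-operator-namespaced

The first command lists the cluster roles created for AMQ Streams, and the second shows the rules granted by the namespaced role.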
ClusterRole with namespaced resources for the Cluster Operator apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-namespaced labels: app: strimzi rules: # Resources in this role are used by the operator based on an operand being deployed in some namespace. When needed, you # can deploy the operator as a cluster-wide operator. But grant the rights listed in this role only on the namespaces # where the operands will be deployed. That way, you can limit the access the operator has to other namespaces where it # does not manage any clusters. - apiGroups: - "rbac.authorization.k8s.io" resources: # The cluster operator needs to access and manage rolebindings to grant Strimzi components cluster permissions - rolebindings verbs: - get - list - watch - create - delete - patch - update - apiGroups: - "rbac.authorization.k8s.io" resources: # The cluster operator needs to access and manage roles to grant the entity operator permissions - roles verbs: - get - list - watch - create - delete - patch - update - apiGroups: - "" resources: # The cluster operator needs to access and delete pods, this is to allow it to monitor pod health and coordinate rolling updates - pods # The cluster operator needs to access and manage service accounts to grant Strimzi components cluster permissions - serviceaccounts # The cluster operator needs to access and manage config maps for Strimzi components configuration - configmaps # The cluster operator needs to access and manage services and endpoints to expose Strimzi components to network traffic - services - endpoints # The cluster operator needs to access and manage secrets to handle credentials - secrets # The cluster operator needs to access and manage persistent volume claims to bind them to Strimzi components for persistent data - persistentvolumeclaims verbs: - get - list - watch - create - delete - patch - update - apiGroups: - "apps" resources: # The cluster operator needs to access and manage deployments to run deployment based Strimzi components - deployments - deployments/scale - deployments/status # The cluster operator needs to access and manage stateful sets to run stateful sets based Strimzi components - statefulsets # The cluster operator needs to access replica-sets to manage Strimzi components and to determine error states - replicasets verbs: - get - list - watch - create - delete - patch - update - apiGroups: - "" # legacy core events api, used by topic operator - "events.k8s.io" # new events api, used by cluster operator resources: # The cluster operator needs to be able to create events and delegate permissions to do so - events verbs: - create - apiGroups: # Kafka Connect Build on OpenShift requirement - build.openshift.io resources: - buildconfigs - buildconfigs/instantiate - builds verbs: - get - list - watch - create - delete - patch - update - apiGroups: - networking.k8s.io resources: # The cluster operator needs to access and manage network policies to lock down communication between Strimzi components - networkpolicies # The cluster operator needs to access and manage ingresses which allow external access to the services in a cluster - ingresses verbs: - get - list - watch - create - delete - patch - update - apiGroups: - route.openshift.io resources: # The cluster operator needs to access and manage routes to expose Strimzi components for external access - routes - routes/custom-host verbs: - get - list - watch - create - delete - patch - update - apiGroups: - image.openshift.io resources: # 
The cluster operator needs to verify the image stream when used for Kafka Connect image build - imagestreams verbs: - get - apiGroups: - policy resources: # The cluster operator needs to access and manage pod disruption budgets this limits the number of concurrent disruptions # that a Strimzi component experiences, allowing for higher availability - poddisruptionbudgets verbs: - get - list - watch - create - delete - patch - update ClusterRole with cluster-scoped resources for the Cluster Operator apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-global labels: app: strimzi rules: - apiGroups: - "rbac.authorization.k8s.io" resources: # The cluster operator needs to create and manage cluster role bindings in the case of an install where a user # has specified they want their cluster role bindings generated - clusterrolebindings verbs: - get - list - watch - create - delete - patch - update - apiGroups: - storage.k8s.io resources: # The cluster operator requires "get" permissions to view storage class details # This is because only a persistent volume of a supported storage class type can be resized - storageclasses verbs: - get - apiGroups: - "" resources: # The cluster operator requires "list" permissions to view all nodes in a cluster # The listing is used to determine the node addresses when NodePort access is configured # These addresses are then exposed in the custom resource states - nodes verbs: - list The strimzi-cluster-operator-leader-election cluster role represents the permissions needed for the leader election. ClusterRole with leader election permissions apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi rules: - apiGroups: - coordination.k8s.io resources: # The cluster operator needs to access and manage leases for leader election # The "create" verb cannot be used with "resourceNames" - leases verbs: - create - apiGroups: - coordination.k8s.io resources: # The cluster operator needs to access and manage leases for leader election - leases resourceNames: # The default RBAC files give the operator only access to the Lease resource names strimzi-cluster-operator # If you want to use another resource name or resource namespace, you have to configure the RBAC resources accordingly - strimzi-cluster-operator verbs: - get - list - watch - delete - patch - update The strimzi-kafka-broker cluster role represents the access needed by the init container in Kafka pods that use rack awareness. A role binding named strimzi- <cluster_name> -kafka-init grants the <cluster_name> -kafka service account access to nodes within a cluster using the strimzi-kafka-broker role. If the rack feature is not used and the cluster is not exposed through nodeport , no binding is created. ClusterRole for the Cluster Operator allowing it to delegate access to OpenShift nodes to the Kafka broker pods apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-kafka-broker labels: app: strimzi rules: - apiGroups: - "" resources: # The Kafka Brokers require "get" permissions to view the node they are on # This information is used to generate a Rack ID that is used for High Availability configurations - nodes verbs: - get The strimzi-entity-operator cluster role represents the access needed by the Topic Operator and User Operator. 
The Topic Operator produces OpenShift events with status information, so the <cluster_name> -entity-operator service account is bound to the strimzi-entity-operator role, which grants this access via the strimzi-entity-operator role binding. ClusterRole for the Cluster Operator allowing it to delegate access to events to the Topic and User Operators apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-entity-operator labels: app: strimzi rules: - apiGroups: - "kafka.strimzi.io" resources: # The entity operator runs the KafkaTopic assembly operator, which needs to access and manage KafkaTopic resources - kafkatopics - kafkatopics/status # The entity operator runs the KafkaUser assembly operator, which needs to access and manage KafkaUser resources - kafkausers - kafkausers/status verbs: - get - list - watch - create - patch - update - delete - apiGroups: - "" resources: - events verbs: # The entity operator needs to be able to create events - create - apiGroups: - "" resources: # The entity operator user-operator needs to access and manage secrets to store generated credentials - secrets verbs: - get - list - watch - create - delete - patch - update The strimzi-kafka-client cluster role represents the access needed by Kafka clients that use rack awareness. ClusterRole for the Cluster Operator allowing it to delegate access to OpenShift nodes to the Kafka client-based pods apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-kafka-client labels: app: strimzi rules: - apiGroups: - "" resources: # The Kafka clients (Connect, Mirror Maker, etc.) require "get" permissions to view the node they are on # This information is used to generate a Rack ID (client.rack option) that is used for consuming from the closest # replicas when enabled - nodes verbs: - get 1.2.2.4. ClusterRoleBinding resources The Cluster Operator uses ClusterRoleBinding and RoleBinding resources to associate its ClusterRole with its ServiceAccount : Cluster role bindings are required by cluster roles containing cluster-scoped resources. Example ClusterRoleBinding for the Cluster Operator apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator labels: app: strimzi subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-cluster-operator-global apiGroup: rbac.authorization.k8s.io Cluster role bindings are also needed for the cluster roles used in delegating privileges: Example ClusterRoleBinding for the Cluster Operator and Kafka broker rack awareness apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator-kafka-broker-delegation labels: app: strimzi # The Kafka broker cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Kafka brokers. # This must be done to avoid escalating privileges which would be blocked by Kubernetes. 
subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-kafka-broker apiGroup: rbac.authorization.k8s.io Example ClusterRoleBinding for the Cluster Operator and Kafka client rack awareness apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator-kafka-client-delegation labels: app: strimzi # The Kafka clients cluster role must be bound to the cluster operator service account so that it can delegate the # cluster role to the Kafka clients using it for consuming from closest replica. # This must be done to avoid escalating privileges which would be blocked by Kubernetes. subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-kafka-client apiGroup: rbac.authorization.k8s.io Cluster roles containing only namespaced resources are bound using role bindings only. Example RoleBinding for the Cluster Operator apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator labels: app: strimzi subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-cluster-operator-namespaced apiGroup: rbac.authorization.k8s.io Example RoleBinding for the Cluster Operator and Kafka broker rack awareness apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator-entity-operator-delegation labels: app: strimzi # The Entity Operator cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Entity Operator. # This must be done to avoid escalating privileges which would be blocked by Kubernetes. subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-entity-operator apiGroup: rbac.authorization.k8s.io 1.3. Using the Kafka Bridge to connect with a Kafka cluster You can use the AMQ Streams Kafka Bridge API to create and manage consumers and send and receive records over HTTP rather than the native Kafka protocol. When you set up the Kafka Bridge you configure HTTP access to the Kafka cluster. You can then use the Kafka Bridge to produce and consume messages from the cluster, as well as performing other operations through its REST interface. Additional resources For information on installing and using the Kafka Bridge, see Using the AMQ Streams Kafka Bridge . 1.4. Seamless FIPS support Federal Information Processing Standards (FIPS) are standards for computer security and interoperability. When running AMQ Streams on a FIPS-enabled OpenShift cluster, the OpenJDK used in AMQ Streams container images automatically switches to FIPS mode. From version 2.4, AMQ Streams can run on FIPS-enabled OpenShift clusters without any changes or special configuration. It uses only the FIPS-compliant security libraries from the OpenJDK. Minimum password length When running in the FIPS mode, SCRAM-SHA-512 passwords need to be at least 32 characters long. From AMQ Streams 2.4, the default password length in AMQ Streams User Operator is set to 32 characters as well. If you have a Kafka cluster with custom configuration that uses a password length that is less than 32 characters, you need to update your configuration. If you have any users with passwords shorter than 32 characters, you need to regenerate a password with the required length. 
You can do that, for example, by deleting the user secret and waiting for the User Operator to create a new password with the appropriate length. Important If you are using FIPS-enabled OpenShift clusters, you may experience higher memory consumption compared to regular OpenShift clusters. To avoid any issues, we suggest increasing the memory request to at least 512Mi. Additional resources Disabling FIPS mode using Cluster Operator configuration What are Federal Information Processing Standards (FIPS) 1.5. Document Conventions User-replaced values User-replaced values, also known as replaceables , are shown with angle brackets (< >). Underscores ( _ ) are used for multi-word values. If the value refers to code or commands, monospace is also used. For example, the following code shows that <my_namespace> must be replaced by the correct namespace name: sed -i 's/namespace: .*/namespace: <my_namespace>/' install/cluster-operator/*RoleBinding*.yaml 1.6. Additional resources AMQ Streams Overview AMQ Streams Custom Resource API Reference Using the AMQ Streams Kafka Bridge |
"apiVersion: kafka.strimzi.io/v1beta2 kind: CustomResourceDefinition metadata: 1 name: kafkatopics.kafka.strimzi.io labels: app: strimzi spec: 2 group: kafka.strimzi.io versions: v1beta2 scope: Namespaced names: # singular: kafkatopic plural: kafkatopics shortNames: - kt 3 additionalPrinterColumns: 4 # subresources: status: {} 5 validation: 6 openAPIV3Schema: properties: spec: type: object properties: partitions: type: integer minimum: 1 replicas: type: integer minimum: 1 maximum: 32767 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic 1 metadata: name: my-topic labels: strimzi.io/cluster: my-cluster 2 spec: 3 partitions: 1 replicas: 1 config: retention.ms: 7200000 segment.bytes: 1073741824 status: conditions: 4 lastTransitionTime: \"2019-08-20T11:37:00.706Z\" status: \"True\" type: Ready observedGeneration: 1 /",
"apiVersion: v1 kind: ServiceAccount metadata: name: strimzi-cluster-operator labels: app: strimzi",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator labels: app: strimzi spec: replicas: 1 selector: matchLabels: name: strimzi-cluster-operator strimzi.io/kind: cluster-operator template: metadata: labels: name: strimzi-cluster-operator strimzi.io/kind: cluster-operator spec: serviceAccountName: strimzi-cluster-operator #",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-namespaced labels: app: strimzi rules: # Resources in this role are used by the operator based on an operand being deployed in some namespace. When needed, you # can deploy the operator as a cluster-wide operator. But grant the rights listed in this role only on the namespaces # where the operands will be deployed. That way, you can limit the access the operator has to other namespaces where it # does not manage any clusters. - apiGroups: - \"rbac.authorization.k8s.io\" resources: # The cluster operator needs to access and manage rolebindings to grant Strimzi components cluster permissions - rolebindings verbs: - get - list - watch - create - delete - patch - update - apiGroups: - \"rbac.authorization.k8s.io\" resources: # The cluster operator needs to access and manage roles to grant the entity operator permissions - roles verbs: - get - list - watch - create - delete - patch - update - apiGroups: - \"\" resources: # The cluster operator needs to access and delete pods, this is to allow it to monitor pod health and coordinate rolling updates - pods # The cluster operator needs to access and manage service accounts to grant Strimzi components cluster permissions - serviceaccounts # The cluster operator needs to access and manage config maps for Strimzi components configuration - configmaps # The cluster operator needs to access and manage services and endpoints to expose Strimzi components to network traffic - services - endpoints # The cluster operator needs to access and manage secrets to handle credentials - secrets # The cluster operator needs to access and manage persistent volume claims to bind them to Strimzi components for persistent data - persistentvolumeclaims verbs: - get - list - watch - create - delete - patch - update - apiGroups: - \"apps\" resources: # The cluster operator needs to access and manage deployments to run deployment based Strimzi components - deployments - deployments/scale - deployments/status # The cluster operator needs to access and manage stateful sets to run stateful sets based Strimzi components - statefulsets # The cluster operator needs to access replica-sets to manage Strimzi components and to determine error states - replicasets verbs: - get - list - watch - create - delete - patch - update - apiGroups: - \"\" # legacy core events api, used by topic operator - \"events.k8s.io\" # new events api, used by cluster operator resources: # The cluster operator needs to be able to create events and delegate permissions to do so - events verbs: - create - apiGroups: # Kafka Connect Build on OpenShift requirement - build.openshift.io resources: - buildconfigs - buildconfigs/instantiate - builds verbs: - get - list - watch - create - delete - patch - update - apiGroups: - networking.k8s.io resources: # The cluster operator needs to access and manage network policies to lock down communication between Strimzi components - networkpolicies # The cluster operator needs to access and manage ingresses which allow external access to the services in a cluster - ingresses verbs: - get - list - watch - create - delete - patch - update - apiGroups: - route.openshift.io resources: # The cluster operator needs to access and manage routes to expose Strimzi components for external access - routes - routes/custom-host verbs: - get - list - watch - create - delete - patch - update - apiGroups: - image.openshift.io resources: # The cluster operator needs to verify the image 
stream when used for Kafka Connect image build - imagestreams verbs: - get - apiGroups: - policy resources: # The cluster operator needs to access and manage pod disruption budgets this limits the number of concurrent disruptions # that a Strimzi component experiences, allowing for higher availability - poddisruptionbudgets verbs: - get - list - watch - create - delete - patch - update",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-global labels: app: strimzi rules: - apiGroups: - \"rbac.authorization.k8s.io\" resources: # The cluster operator needs to create and manage cluster role bindings in the case of an install where a user # has specified they want their cluster role bindings generated - clusterrolebindings verbs: - get - list - watch - create - delete - patch - update - apiGroups: - storage.k8s.io resources: # The cluster operator requires \"get\" permissions to view storage class details # This is because only a persistent volume of a supported storage class type can be resized - storageclasses verbs: - get - apiGroups: - \"\" resources: # The cluster operator requires \"list\" permissions to view all nodes in a cluster # The listing is used to determine the node addresses when NodePort access is configured # These addresses are then exposed in the custom resource states - nodes verbs: - list",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi rules: - apiGroups: - coordination.k8s.io resources: # The cluster operator needs to access and manage leases for leader election # The \"create\" verb cannot be used with \"resourceNames\" - leases verbs: - create - apiGroups: - coordination.k8s.io resources: # The cluster operator needs to access and manage leases for leader election - leases resourceNames: # The default RBAC files give the operator only access to the Lease resource names strimzi-cluster-operator # If you want to use another resource name or resource namespace, you have to configure the RBAC resources accordingly - strimzi-cluster-operator verbs: - get - list - watch - delete - patch - update",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-kafka-broker labels: app: strimzi rules: - apiGroups: - \"\" resources: # The Kafka Brokers require \"get\" permissions to view the node they are on # This information is used to generate a Rack ID that is used for High Availability configurations - nodes verbs: - get",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-entity-operator labels: app: strimzi rules: - apiGroups: - \"kafka.strimzi.io\" resources: # The entity operator runs the KafkaTopic assembly operator, which needs to access and manage KafkaTopic resources - kafkatopics - kafkatopics/status # The entity operator runs the KafkaUser assembly operator, which needs to access and manage KafkaUser resources - kafkausers - kafkausers/status verbs: - get - list - watch - create - patch - update - delete - apiGroups: - \"\" resources: - events verbs: # The entity operator needs to be able to create events - create - apiGroups: - \"\" resources: # The entity operator user-operator needs to access and manage secrets to store generated credentials - secrets verbs: - get - list - watch - create - delete - patch - update",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-kafka-client labels: app: strimzi rules: - apiGroups: - \"\" resources: # The Kafka clients (Connect, Mirror Maker, etc.) require \"get\" permissions to view the node they are on # This information is used to generate a Rack ID (client.rack option) that is used for consuming from the closest # replicas when enabled - nodes verbs: - get",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator labels: app: strimzi subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-cluster-operator-global apiGroup: rbac.authorization.k8s.io",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator-kafka-broker-delegation labels: app: strimzi The Kafka broker cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Kafka brokers. This must be done to avoid escalating privileges which would be blocked by Kubernetes. subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-kafka-broker apiGroup: rbac.authorization.k8s.io",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator-kafka-client-delegation labels: app: strimzi The Kafka clients cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Kafka clients using it for consuming from closest replica. This must be done to avoid escalating privileges which would be blocked by Kubernetes. subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-kafka-client apiGroup: rbac.authorization.k8s.io",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator labels: app: strimzi subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-cluster-operator-namespaced apiGroup: rbac.authorization.k8s.io",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator-entity-operator-delegation labels: app: strimzi The Entity Operator cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Entity Operator. This must be done to avoid escalating privileges which would be blocked by Kubernetes. subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-entity-operator apiGroup: rbac.authorization.k8s.io",
"sed -i 's/namespace: .*/namespace: <my_namespace>' install/cluster-operator/*RoleBinding*.yaml"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/deploy-intro_str |
Chapter 3. ProjectRequest [project.openshift.io/v1] | Chapter 3. ProjectRequest [project.openshift.io/v1] Description ProjectRequest is the set of options necessary to fully qualify a project request Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources description string Description is the description to apply to a project displayName string DisplayName is the display name to apply to a project kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 3.2. API endpoints The following API endpoints are available: /apis/project.openshift.io/v1/projectrequests GET : list objects of kind ProjectRequest POST : create a ProjectRequest 3.2.1. /apis/project.openshift.io/v1/projectrequests HTTP method GET Description list objects of kind ProjectRequest Table 3.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method POST Description create a ProjectRequest Table 3.2. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.3. Body parameters Parameter Type Description body ProjectRequest schema Table 3.4. HTTP responses HTTP code Reponse body 200 - OK ProjectRequest schema 201 - Created ProjectRequest schema 202 - Accepted ProjectRequest schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/project_apis/projectrequest-project-openshift-io-v1 |
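As a quick illustration of the POST endpoint described above, a new project can be requested with the oc CLI, which submits a ProjectRequest on your behalf; the project name, display name, and description below are placeholder values:

oc new-project my-project --display-name="My Project" --description="Example project"

Alternatively, an equivalent ProjectRequest manifest can be created directly with oc create -f; note that displayName and description are top-level fields, as listed in the specification table above:

apiVersion: project.openshift.io/v1
kind: ProjectRequest
metadata:
  name: my-project
displayName: My Project
description: Example project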
Chapter 4. Installing Hosts for Red Hat Virtualization | Chapter 4. Installing Hosts for Red Hat Virtualization Red Hat Virtualization supports two types of hosts: Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts . Depending on your environment, you may want to use one type only, or both. At least two hosts are required for features such as migration and high availability. See Recommended practices for configuring host networks for networking information. Important SELinux is in enforcing mode upon installation. To verify, run getenforce . SELinux must be in enforcing mode on all hosts and Managers for your Red Hat Virtualization environment to be supported. Table 4.1. Host Types Host Type Other Names Description Red Hat Virtualization Host RHVH, thin host This is a minimal operating system based on Red Hat Enterprise Linux. It is distributed as an ISO file from the Customer Portal and contains only the packages required for the machine to act as a host. Red Hat Enterprise Linux host RHEL host, thick host Red Hat Enterprise Linux systems with the appropriate subscriptions attached can be used as hosts. Host Compatibility When you create a new data center, you can set the compatibility version. Select the compatibility version that suits all the hosts in the data center. Once set, version regression is not allowed. For a fresh Red Hat Virtualization installation, the latest compatibility version is set in the default data center and default cluster; to use an earlier compatibility version, you must create additional data centers and clusters. For more information about compatibility versions see Red Hat Virtualization Manager Compatibility in Red Hat Virtualization Life Cycle . 4.1. Red Hat Virtualization Hosts 4.1.1. Installing Red Hat Virtualization Hosts Red Hat Virtualization Host (RHVH) is a minimal operating system based on Red Hat Enterprise Linux that is designed to provide a simple method for setting up a physical machine to act as a hypervisor in a Red Hat Virtualization environment. The minimal operating system contains only the packages required for the machine to act as a hypervisor, and features a Cockpit web interface for monitoring the host and performing administrative tasks. See Running Cockpit for the minimum browser requirements. RHVH supports NIST 800-53 partitioning requirements to improve security. RHVH uses a NIST 800-53 partition layout by default. The host must meet the minimum host requirements . Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. Procedure Go to the Get Started with Red Hat Virtualization on the Red Hat Customer Portal and log in. Click Download Latest to access the product download page. Choose the appropriate Hypervisor Image for RHV from the list and click Download Now . Start the machine on which you are installing RHVH, booting from the prepared installation media. From the boot menu, select Install RHVH 4.4 and press Enter . Note You can also press the Tab key to edit the kernel parameters. Kernel parameters must be separated by a space, and you can boot the system using the specified kernel parameters by pressing the Enter key. Press the Esc key to clear any changes to the kernel parameters and return to the boot menu. Select a language, and click Continue . 
Select a keyboard layout from the Keyboard Layout screen and click Done . Select the device on which to install RHVH from the Installation Destination screen. Optionally, enable encryption. Click Done . Important Use the Automatically configure partitioning option. Select a time zone from the Time & Date screen and click Done . Select a network from the Network & Host Name screen and click Configure... to configure the connection details. Note To use the connection every time the system boots, select the Connect automatically with priority check box. For more information, see Configuring network and host name options in the Red Hat Enterprise Linux 8 Installation Guide . Enter a host name in the Host Name field, and click Done . Optional: Configure Security Policy and Kdump . See Customizing your RHEL installation using the GUI in Performing a standard RHEL installation for Red Hat Enterprise Linux 8 for more information on each of the sections in the Installation Summary screen. Click Begin Installation . Set a root password and, optionally, create an additional user while RHVH installs. Warning Do not create untrusted users on RHVH, as this can lead to exploitation of local security vulnerabilities. Click Reboot to complete the installation. Note When RHVH restarts, nodectl check performs a health check on the host and displays the result when you log in on the command line. The message node status: OK or node status: DEGRADED indicates the health status. Run nodectl check to get more information. Note If necessary, you can prevent kernel modules from loading automatically . 4.1.2. Enabling the Red Hat Virtualization Host Repository Register the system to receive updates. Red Hat Virtualization Host only requires one repository. This section provides instructions for registering RHVH with the Content Delivery Network , or with Red Hat Satellite 6 . Registering RHVH with the Content Delivery Network Enable the Red Hat Virtualization Host 8 repository to allow later updates to the Red Hat Virtualization Host: # subscription-manager repos --enable=rhvh-4-for-rhel-8-x86_64-rpms Registering RHVH with Red Hat Satellite 6 Log in to the Cockpit web interface at https:// HostFQDNorIP :9090 . Click Terminal . Register RHVH with Red Hat Satellite 6: Note You can also configure virtual machine subscriptions in Red Hat Satellite using virt-who. See Using virt-who to manage host-based subscriptions . 4.1.3. Advanced Installation 4.1.3.1. Custom Partitioning Custom partitioning on Red Hat Virtualization Host (RHVH) is not recommended. Use the Automatically configure partitioning option in the Installation Destination window. If your installation requires custom partitioning, select the I will configure partitioning option during the installation, and note that the following restrictions apply: Ensure the default LVM Thin Provisioning option is selected in the Manual Partitioning window. The following directories are required and must be on thin provisioned logical volumes: root ( / ) /home /tmp /var /var/crash /var/log /var/log/audit Important Do not create a separate partition for /usr . Doing so will cause the installation to fail. /usr must be on a logical volume that is able to change versions along with RHVH, and therefore should be left on root ( / ). For information about the required storage sizes for each partition, see Storage Requirements . The /boot directory should be defined as a standard partition. The /var directory must be on a separate volume or disk. 
Only XFS or Ext4 file systems are supported. Configuring Manual Partitioning in a Kickstart File The following example demonstrates how to configure manual partitioning in a Kickstart file. clearpart --all part /boot --fstype xfs --size=1000 --ondisk=sda part pv.01 --size=42000 --grow volgroup HostVG pv.01 --reserved-percent=20 logvol swap --vgname=HostVG --name=swap --fstype=swap --recommended logvol none --vgname=HostVG --name=HostPool --thinpool --size=40000 --grow logvol / --vgname=HostVG --name=root --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=6000 --grow logvol /var --vgname=HostVG --name=var --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=15000 logvol /var/crash --vgname=HostVG --name=var_crash --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=10000 logvol /var/log --vgname=HostVG --name=var_log --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=8000 logvol /var/log/audit --vgname=HostVG --name=var_audit --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=2000 logvol /home --vgname=HostVG --name=home --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=1000 logvol /tmp --vgname=HostVG --name=tmp --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=1000 Note If you use logvol --thinpool --grow , you must also include volgroup --reserved-space or volgroup --reserved-percent to reserve space in the volume group for the thin pool to grow. 4.1.3.2. Installing a DUD driver on a host without installer support There are times when installing Red Hat Virtualization Host (RHVH) requires a Driver Update Disk (DUD), such as when using a hardware RAID device that is not supported by the default configuration of RHVH. In contrast with Red Hat Enterprise Linux hosts, RHVH does not fully support using a DUD. Subsequently the host fails to boot normally after installation because it does not see RAID. Instead it boots into emergency mode. Example output: In such a case you can manually add the drivers before finishing the installation. Prerequisites A machine onto which you are installing RHVH. A DUD. If you are using a USB drive for the DUD and RHVH, you must have at least two available USB ports. Procedure Load the DUD on the host machine. Install RHVH. See Installing Red Hat Virtualization Hosts in Installing Red Hat Virtualization as a self-hosted engine using the command line . Important When installation completes, do not reboot the system. Tip If you want to access the DUD using SSH, do the following: Add the string inst.sshd to the kernel command line: Enable networking during the installation. Enter the console mode, by pressing Ctrl + Alt + F3 . Alternatively you can connect to it using SSH. Mount the DUD: Copy the RPM file inside the DUD to the target machine's disk: For example: Change the root directory to /mnt/sysroot : Back up the current initrd images. For example: Install the RPM file for the driver from the copy you made earlier. For example: Note This package is not visible on the system after you reboot into the installed environment, so if you need it, for example, to rebuild the initramfs , you need to install that package once again, after which the package remains. If you update the host using dnf , the driver update persists, so you do not need to repeat this process. 
Tip If you do not have an internet connection, use the rpm command instead of dnf : Create a new image, forcefully adding the driver: For example: Check the results. The new image should be larger, and include the driver. For example, compare the sizes of the original, backed-up image file and the new image file. In this example, the new image file is 88739013 bytes, larger than the original 88717417 bytes: The new drivers should be part of the image file. For example, the 3w-9xxx module should be included: Copy the image to the the directory under /boot that contains the kernel to be used in the layer being installed, for example: Exit chroot. Exit the shell. If you used Ctrl + Alt + F3 to access a virtual terminal, then move back to the installer by pressing Ctrl + Alt + F_<n>_ , usually F1 or F5 At the installer screen, reboot. Verification The machine should reboot successfully. 4.1.3.3. Automating Red Hat Virtualization Host deployment You can install Red Hat Virtualization Host (RHVH) without a physical media device by booting from a PXE server over the network with a Kickstart file that contains the answers to the installation questions. Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. General instructions for installing from a PXE server with a Kickstart file are available in the Red Hat Enterprise Linux Installation Guide , as RHVH is installed in much the same way as Red Hat Enterprise Linux. RHVH-specific instructions, with examples for deploying RHVH with Red Hat Satellite, are described below. The automated RHVH deployment has 3 stages: Preparing the Installation Environment Configuring the PXE Server and the Boot Loader Creating and Running a Kickstart File 4.1.3.3.1. Preparing the installation environment Go to the Get Started with Red Hat Virtualization on the Red Hat Customer Portal and log in. Click Download Latest to access the product download page. Choose the appropriate Hypervisor Image for RHV from the list and click Download Now . Make the RHVH ISO image available over the network. See Installation Source on a Network in the Red Hat Enterprise Linux Installation Guide . Extract the squashfs.img hypervisor image file from the RHVH ISO: # mount -o loop /path/to/RHVH-ISO /mnt/rhvh # cp /mnt/rhvh/Packages/redhat-virtualization-host-image-update* /tmp # cd /tmp # rpm2cpio redhat-virtualization-host-image-update* | cpio -idmv Note This squashfs.img file, located in the /tmp/usr/share/redhat-virtualization-host/image/ directory, is called redhat-virtualization-host- version_number _version.squashfs.img . It contains the hypervisor image for installation on the physical machine. It should not be confused with the /LiveOS/squashfs.img file, which is used by the Anaconda inst.stage2 option. 4.1.3.3.2. Configuring the PXE server and the boot loader Configure the PXE server. See Preparing for a Network Installation in the Red Hat Enterprise Linux Installation Guide . 
Copy the RHVH boot images to the /tftpboot directory: # cp mnt/rhvh/images/pxeboot/{vmlinuz,initrd.img} /var/lib/tftpboot/pxelinux/ Create a rhvh label specifying the RHVH boot images in the boot loader configuration: LABEL rhvh MENU LABEL Install Red Hat Virtualization Host KERNEL /var/lib/tftpboot/pxelinux/vmlinuz APPEND initrd=/var/lib/tftpboot/pxelinux/initrd.img inst.stage2= URL/to/RHVH-ISO RHVH Boot loader configuration example for Red Hat Satellite If you are using information from Red Hat Satellite to provision the host, you must create a global or host group level parameter called rhvh_image and populate it with the directory URL where the ISO is mounted or extracted: <%# kind: PXELinux name: RHVH PXELinux %> # Created for booting new hosts # DEFAULT rhvh LABEL rhvh KERNEL <%= @kernel %> APPEND initrd=<%= @initrd %> inst.ks=<%= foreman_url("provision") %> inst.stage2=<%= @host.params["rhvh_image"] %> intel_iommu=on console=tty0 console=ttyS1,115200n8 ssh_pwauth=1 local_boot_trigger=<%= foreman_url("built") %> IPAPPEND 2 Make the content of the RHVH ISO locally available and export it to the network, for example, using an HTTPD server: # cp -a /mnt/rhvh/ /var/www/html/rhvh-install # curl URL/to/RHVH-ISO /rhvh-install 4.1.3.3.3. Creating and running a Kickstart file Create a Kickstart file and make it available over the network. See Kickstart Installations in the Red Hat Enterprise Linux Installation Guide . Ensure that the Kickstart file meets the following RHV-specific requirements: The %packages section is not required for RHVH. Instead, use the liveimg option and specify the redhat-virtualization-host- version_number _version.squashfs.img file from the RHVH ISO image: liveimg --url= example.com /tmp/usr/share/redhat-virtualization-host/image/redhat-virtualization-host- version_number _version.squashfs.img Autopartitioning is highly recommended, but use caution: ensure that the local disk is detected first, include the ignoredisk command, and specify the local disk to ignore, such as sda . To ensure that a particular drive is used, Red Hat recommends using ignoredisk --only-use=/dev/disk/< path > or ignoredisk --only-use=/dev/disk/< ID > : autopart --type=thinp ignoredisk --only-use=sda ignoredisk --only-use=/dev/disk/< path > ignoredisk --only-use=/dev/disk/< ID > Note Autopartitioning requires thin provisioning. The --no-home option does not work in RHVH because /home is a required directory. If your installation requires manual partitioning, see Custom Partitioning for a list of limitations that apply to partitions and an example of manual partitioning in a Kickstart file. A %post section that calls the nodectl init command is required: %post nodectl init %end Note Ensure that the nodectl init command is at the very end of the %post section but before the reboot code, if any. Kickstart example for deploying RHVH on its own This Kickstart example shows you how to deploy RHVH. You can include additional commands and options as required. Warning This example assumes that all disks are empty and can be initialized. If you have attached disks with data, either remove them or add them to the ignoredisks property. 
liveimg --url=http:// FQDN /tmp/usr/share/redhat-virtualization-host/image/redhat-virtualization-host- version_number _version.squashfs.img clearpart --all autopart --type=thinp rootpw --plaintext ovirt timezone --utc America/Phoenix zerombr text reboot %post --erroronfail nodectl init %end Kickstart example for deploying RHVH with registration and network configuration from Satellite This Kickstart example uses information from Red Hat Satellite to configure the host network and register the host to the Satellite server. You must create a global or host group level parameter called rhvh_image and populate it with the directory URL to the squashfs.img file. ntp_server1 is also a global or host group level variable. Warning This example assumes that all disks are empty and can be initialized. If you have attached disks with data, either remove them or add them to the ignoredisks property. <%# kind: provision name: RHVH Kickstart default oses: - RHVH %> install liveimg --url=<%= @host.params['rhvh_image'] %>squashfs.img network --bootproto static --ip=<%= @host.ip %> --netmask=<%= @host.subnet.mask %> --gateway=<%= @host.subnet.gateway %> --nameserver=<%= @host.subnet.dns_primary %> --hostname <%= @host.name %> zerombr clearpart --all autopart --type=thinp rootpw --iscrypted <%= root_pass %> # installation answers lang en_US.UTF-8 timezone <%= @host.params['time-zone'] || 'UTC' %> keyboard us firewall --service=ssh services --enabled=sshd text reboot %post --log=/root/ks.post.log --erroronfail nodectl init <%= snippet 'subscription_manager_registration' %> <%= snippet 'kickstart_networking_setup' %> /usr/sbin/ntpdate -sub <%= @host.params['ntp_server1'] || '0.fedora.pool.ntp.org' %> /usr/sbin/hwclock --systohc /usr/bin/curl <%= foreman_url('built') %> sync systemctl reboot %end Add the Kickstart file location to the boot loader configuration file on the PXE server: APPEND initrd=/var/tftpboot/pxelinux/initrd.img inst.stage2= URL/to/RHVH-ISO inst.ks= URL/to/RHVH-ks .cfg Install RHVH following the instructions in Booting from the Network Using PXE in the Red Hat Enterprise Linux Installation Guide . 4.2. Red Hat Enterprise Linux hosts 4.2.1. Installing Red Hat Enterprise Linux hosts A Red Hat Enterprise Linux host is based on a standard basic installation of Red Hat Enterprise Linux 8 on a physical server, with the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions attached. For detailed installation instructions, see the Performing a standard RHEL installation . The host must meet the minimum host requirements . Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. Important Virtualization must be enabled in your host's BIOS settings. For information on changing your host's BIOS settings, refer to your host's hardware documentation. Important Do not install third-party watchdogs on Red Hat Enterprise Linux hosts. They can interfere with the watchdog daemon provided by VDSM. 4.2.2. Enabling the Red Hat Enterprise Linux host Repositories To use a Red Hat Enterprise Linux machine as a host, you must register the system with the Content Delivery Network, attach the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions, and enable the host repositories. 
Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Find the Red Hat Enterprise Linux Server and Red Hat Virtualization subscription pools and record the pool IDs: # subscription-manager list --available Use the pool IDs to attach the subscriptions to the system: # subscription-manager attach --pool= poolid Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=advanced-virt-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-tus-rpms \ --enable=rhel-8-for-x86_64-baseos-tus-rpms For Red Hat Enterprise Linux 8 hosts, little endian, on IBM POWER8 or IBM POWER9 hardware: # subscription-manager repos \ --disable='*' \ --enable=rhv-4-mgmt-agent-for-rhel-8-ppc64le-rpms \ --enable=rhv-4-tools-for-rhel-8-ppc64le-rpms \ --enable=advanced-virt-for-rhel-8-ppc64le-rpms \ --enable=rhel-8-for-ppc64le-appstream-rpms \ --enable=rhel-8-for-ppc64le-baseos-rpms \ --enable=fast-datapath-for-rhel-8-ppc64le-rpms \ Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Ensure that all packages currently installed are up to date: # dnf upgrade --nobest Reboot the machine. Note If necessary, you can prevent kernel modules from loading automatically . 4.2.3. Installing Cockpit on Red Hat Enterprise Linux hosts You can install Cockpit for monitoring the host's resources and performing administrative tasks. Procedure Install the dashboard packages: # dnf install cockpit-ovirt-dashboard Enable and start the cockpit.socket service: # systemctl enable cockpit.socket # systemctl start cockpit.socket Check if Cockpit is an active service in the firewall: # firewall-cmd --list-services You should see cockpit listed. If it is not, enter the following with root permissions to add cockpit as a service to your firewall: # firewall-cmd --permanent --add-service=cockpit The --permanent option keeps the cockpit service active after rebooting. You can log in to the Cockpit web interface at https:// HostFQDNorIP :9090 . 4.3. Recommended Practices for Configuring Host Networks Important Always use the RHV Manager to modify the network configuration of hosts in your clusters. Otherwise, you might create an unsupported configuration. For details, see Network Manager Stateful Configuration (nmstate) . If your network environment is complex, you may need to configure a host network manually before adding the host to the Red Hat Virtualization Manager. Consider the following practices for configuring a host network: Configure the network with Cockpit. Alternatively, you can use nmtui or nmcli . If a network is not required for a self-hosted engine deployment or for adding a host to the Manager, configure the network in the Administration Portal after adding the host to the Manager. See Creating a New Logical Network in a Data Center or Cluster . Use the following naming conventions: VLAN devices: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD VLAN interfaces: physical_device . 
VLAN_ID (for example, eth0.23 , eth1.128 , enp3s0.50 ) Bond interfaces: bond number (for example, bond0 , bond1 ) VLANs on bond interfaces: bond number . VLAN_ID (for example, bond0.50 , bond1.128 ) Use network bonding . Network teaming is not supported in Red Hat Virtualization and will cause errors if the host is used to deploy a self-hosted engine or added to the Manager. Use recommended bonding modes: If the ovirtmgmt network is not used by virtual machines, the network may use any supported bonding mode. If the ovirtmgmt network is used by virtual machines, see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to? . Red Hat Virtualization's default bonding mode is (Mode 4) Dynamic Link Aggregation . If your switch does not support Link Aggregation Control Protocol (LACP), use (Mode 1) Active-Backup . See Bonding Modes for details. Configure a VLAN on a physical NIC as in the following example (although nmcli is used, you can use any tool): # nmcli connection add type vlan con-name vlan50 ifname eth0.50 dev eth0 id 50 # nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123 .0.1/24 +ipv4.gateway 123.123 .0.254 Configure a VLAN on a bond as in the following example (although nmcli is used, you can use any tool): # nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100" ipv4.method disabled ipv6.method ignore # nmcli connection add type ethernet con-name eth0 ifname eth0 master bond0 slave-type bond # nmcli connection add type ethernet con-name eth1 ifname eth1 master bond0 slave-type bond # nmcli connection add type vlan con-name vlan50 ifname bond0.50 dev bond0 id 50 # nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123 .0.1/24 +ipv4.gateway 123.123 .0.254 Do not disable firewalld . Customize the firewall rules in the Administration Portal after adding the host to the Manager. See Configuring Host Firewall Rules . 4.4. Adding Standard Hosts to the Red Hat Virtualization Manager Important Always use the RHV Manager to modify the network configuration of hosts in your clusters. Otherwise, you might create an unsupported configuration. For details, see Network Manager Stateful Configuration (nmstate) . Adding a host to your Red Hat Virtualization environment can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, and creation of a bridge. Procedure From the Administration Portal, click Compute Hosts . Click New . Use the drop-down list to select the Data Center and Host Cluster for the new host. Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field. Select an authentication method to use for the Manager to access the host. Enter the root user's password to use password authentication. Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication. Optionally, click the Advanced Parameters button to change the following advanced host settings: Disable automatic firewall configuration. Add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically. Optionally configure power management, where the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide . Click OK . 
The new host displays in the list of hosts with a status of Installing , and you can view the progress of the installation in the Events section of the Notification Drawer. After a brief delay the host status changes to Up . | [
"subscription-manager repos --enable=rhvh-4-for-rhel-8-x86_64-rpms",
"rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm # subscription-manager register --org=\" org_id \" # subscription-manager list --available # subscription-manager attach --pool= pool_id # subscription-manager repos --disable='*' --enable=rhvh-4-for-rhel-8-x86_64-rpms",
"clearpart --all part /boot --fstype xfs --size=1000 --ondisk=sda part pv.01 --size=42000 --grow volgroup HostVG pv.01 --reserved-percent=20 logvol swap --vgname=HostVG --name=swap --fstype=swap --recommended logvol none --vgname=HostVG --name=HostPool --thinpool --size=40000 --grow logvol / --vgname=HostVG --name=root --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=6000 --grow logvol /var --vgname=HostVG --name=var --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=15000 logvol /var/crash --vgname=HostVG --name=var_crash --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=10000 logvol /var/log --vgname=HostVG --name=var_log --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=8000 logvol /var/log/audit --vgname=HostVG --name=var_audit --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=2000 logvol /home --vgname=HostVG --name=home --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=1000 logvol /tmp --vgname=HostVG --name=tmp --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=1000",
"Warning: /dev/test/rhvh-4.4-20210202.0+1 does not exist Warning: /dev/test/swap does not exist Entering emergency mode. Exit the shell to continue.",
"< kernel_command_line > inst.sshd",
"mkdir /mnt/dud mount -r /dev/ <dud_device> /mnt/dud",
"cp /mnt/dud/rpms/ <path> / <rpm_file> .rpm /mnt/sysroot/root/",
"cp /mnt/dud/rpms/x86_64/kmod-3w-9xxx-2.26.02.014-5.el8_3.elrepo.x86_64.rpm /mnt/sysroot/root/",
"chroot /mnt/sysroot",
"cp -p /boot/initramfs-4.18.0-240.15.1.el8_3.x86_64.img /boot/initramfs-4.18.0-240.15.1.el8_3.x86_64.img.bck1 cp -p /boot/rhvh-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img /boot/rhvh-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img.bck1",
"dnf install /root/kmod-3w-9xxx-2.26.02.014-5.el8_3.elrepo.x86_64.rpm",
"rpm -ivh /root/kmod-3w-9xxx-2.26.02.014-5.el8_3.elrepo.x86_64.rpm",
"dracut --force --add-drivers <module_name> --kver <kernel_version>",
"dracut --force --add-drivers 3w-9xxx --kver 4.18.0-240.15.1.el8_3.x86_64",
"ls -ltr /boot/initramfs-4.18.0-240.15.1.el8_3.x86_64.img* -rw-------. 1 root root 88717417 Jun 2 14:29 /boot/initramfs-4.18.0-240.15.1.el8_3.x86_64.img.bck1 -rw-------. 1 root root 88739013 Jun 2 17:47 /boot/initramfs-4.18.0-240.15.1.el8_3.x86_64.img",
"lsinitrd /boot/initramfs-4.18.0-240.15.1.el8_3.x86_64.img | grep 3w-9xxx drwxr-xr-x 2 root root 0 Feb 22 15:57 usr/lib/modules/4.18.0-240.15.1.el8_3.x86_64/weak-updates/3w-9xxx lrwxrwxrwx 1 root root 55 Feb 22 15:57 usr/lib/modules/4.18.0-240.15.1.el8_3.x86_64/weak-updates/3w-9xxx/3w-9xxx.ko-../../../4.18.0-240.el8.x86_64/extra/3w-9xxx/3w-9xxx.ko drwxr-xr-x 2 root root 0 Feb 22 15:57 usr/lib/modules/4.18.0-240.el8.x86_64/extra/3w-9xxx -rw-r--r-- 1 root root 80121 Nov 10 2020 usr/lib/modules/4.18.0-240.el8.x86_64/extra/3w-9xxx/3w-9xxx.ko",
"cp -p /boot/initramfs-4.18.0-240.15.1.el8_3.x86_64.img /boot/rhvh-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img",
"mount -o loop /path/to/RHVH-ISO /mnt/rhvh cp /mnt/rhvh/Packages/redhat-virtualization-host-image-update* /tmp cd /tmp rpm2cpio redhat-virtualization-host-image-update* | cpio -idmv",
"cp mnt/rhvh/images/pxeboot/{vmlinuz,initrd.img} /var/lib/tftpboot/pxelinux/",
"LABEL rhvh MENU LABEL Install Red Hat Virtualization Host KERNEL /var/lib/tftpboot/pxelinux/vmlinuz APPEND initrd=/var/lib/tftpboot/pxelinux/initrd.img inst.stage2= URL/to/RHVH-ISO",
"<%# kind: PXELinux name: RHVH PXELinux %> Created for booting new hosts # DEFAULT rhvh LABEL rhvh KERNEL <%= @kernel %> APPEND initrd=<%= @initrd %> inst.ks=<%= foreman_url(\"provision\") %> inst.stage2=<%= @host.params[\"rhvh_image\"] %> intel_iommu=on console=tty0 console=ttyS1,115200n8 ssh_pwauth=1 local_boot_trigger=<%= foreman_url(\"built\") %> IPAPPEND 2",
"cp -a /mnt/rhvh/ /var/www/html/rhvh-install curl URL/to/RHVH-ISO /rhvh-install",
"liveimg --url= example.com /tmp/usr/share/redhat-virtualization-host/image/redhat-virtualization-host- version_number _version.squashfs.img",
"autopart --type=thinp ignoredisk --only-use=sda ignoredisk --only-use=/dev/disk/< path > ignoredisk --only-use=/dev/disk/< ID >",
"%post nodectl init %end",
"liveimg --url=http:// FQDN /tmp/usr/share/redhat-virtualization-host/image/redhat-virtualization-host- version_number _version.squashfs.img clearpart --all autopart --type=thinp rootpw --plaintext ovirt timezone --utc America/Phoenix zerombr text reboot %post --erroronfail nodectl init %end",
"<%# kind: provision name: RHVH Kickstart default oses: - RHVH %> install liveimg --url=<%= @host.params['rhvh_image'] %>squashfs.img network --bootproto static --ip=<%= @host.ip %> --netmask=<%= @host.subnet.mask %> --gateway=<%= @host.subnet.gateway %> --nameserver=<%= @host.subnet.dns_primary %> --hostname <%= @host.name %> zerombr clearpart --all autopart --type=thinp rootpw --iscrypted <%= root_pass %> installation answers lang en_US.UTF-8 timezone <%= @host.params['time-zone'] || 'UTC' %> keyboard us firewall --service=ssh services --enabled=sshd text reboot %post --log=/root/ks.post.log --erroronfail nodectl init <%= snippet 'subscription_manager_registration' %> <%= snippet 'kickstart_networking_setup' %> /usr/sbin/ntpdate -sub <%= @host.params['ntp_server1'] || '0.fedora.pool.ntp.org' %> /usr/sbin/hwclock --systohc /usr/bin/curl <%= foreman_url('built') %> sync systemctl reboot %end",
"APPEND initrd=/var/tftpboot/pxelinux/initrd.img inst.stage2= URL/to/RHVH-ISO inst.ks= URL/to/RHVH-ks .cfg",
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= poolid",
"subscription-manager list --consumed",
"dnf repolist",
"subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=advanced-virt-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-baseos-tus-rpms",
"subscription-manager repos --disable='*' --enable=rhv-4-mgmt-agent-for-rhel-8-ppc64le-rpms --enable=rhv-4-tools-for-rhel-8-ppc64le-rpms --enable=advanced-virt-for-rhel-8-ppc64le-rpms --enable=rhel-8-for-ppc64le-appstream-rpms --enable=rhel-8-for-ppc64le-baseos-rpms --enable=fast-datapath-for-rhel-8-ppc64le-rpms \\",
"subscription-manager release --set=8.6",
"dnf upgrade --nobest",
"dnf install cockpit-ovirt-dashboard",
"systemctl enable cockpit.socket systemctl start cockpit.socket",
"firewall-cmd --list-services",
"firewall-cmd --permanent --add-service=cockpit",
"nmcli connection add type vlan con-name vlan50 ifname eth0.50 dev eth0 id 50 nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123 .0.1/24 +ipv4.gateway 123.123 .0.254",
"nmcli connection add type bond con-name bond0 ifname bond0 bond.options \"mode=active-backup,miimon=100\" ipv4.method disabled ipv6.method ignore nmcli connection add type ethernet con-name eth0 ifname eth0 master bond0 slave-type bond nmcli connection add type ethernet con-name eth1 ifname eth1 master bond0 slave-type bond nmcli connection add type vlan con-name vlan50 ifname bond0.50 dev bond0 id 50 nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123 .0.1/24 +ipv4.gateway 123.123 .0.254"
]
| https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/installing_hosts_for_rhv_sm_remotedb_deploy |
Serverless | Serverless OpenShift Container Platform 4.7 OpenShift Serverless installation, usage, and release notes Red Hat OpenShift Documentation Team | [
"oc delete mutatingwebhookconfiguration kafkabindings.webhook.kafka.sources.knative.dev",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: deployments: - name: activator resources: - container: activator requests: cpu: 300m memory: 60Mi limits: cpu: 1000m memory: 1000Mi",
"apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain-name> namespace: knative-eventing spec: ref: name: broker-ingress kind: Service apiVersion: v1",
"kn event send --to-url https://ce-api.foo.example.com/",
"kn event send --to Service:serving.knative.dev/v1:event-display",
"[analyzer] no stack metadata found at path '' [analyzer] ERROR: failed to : set API for buildpack 'paketo-buildpacks/[email protected]': buildpack API version '0.7' is incompatible with the lifecycle",
"Error: failed to get credentials: failed to verify credentials: status code: 404",
"buildEnvs: []",
"buildEnvs: - name: BP_NODE_RUN_SCRIPTS value: build",
"ERROR: failed to image: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info\": EOF",
"ExecStart=/usr/bin/podman USDLOGGING system service --time=0",
"systemctl --user daemon-reload",
"systemctl restart --user podman.socket",
"podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534",
"ERROR: failed to image: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.40/info\": EOF",
"ExecStart=/usr/bin/podman USDLOGGING system service --time=0",
"systemctl --user daemon-reload",
"systemctl restart --user podman.socket",
"podman system service --time=0 tcp:127.0.0.1:5534 & export DOCKER_HOST=tcp://127.0.0.1:5534",
"spec: config: network: defaultExternalScheme: \"http\"",
"spec: config: network: defaultExternalScheme: \"https\"",
"spec: ingress: kourier: service-type: LoadBalancer",
"spec: config: network: defaultExternalScheme: \"http\"",
"WARNING: found multiple channel heads: [amqstreams.v1.7.2 amqstreams.v1.6.2], please check the `replaces`/`skipRange` fields of the operator bundles.",
"2021-05-02T12:56:17.700398Z warning envoy config [external/envoy/source/common/config/grpc_subscription_impl.cc:101] gRPC config for type.googleapis.com/envoy.api.v2.Listener rejected: Error adding/updating listener(s) 0.0.0.0_8081: duplicate listener 0.0.0.0_8081 found",
"oc delete services -n istio-system knative-local-gateway",
"apiVersion: v1 kind: Service metadata: name: knative-local-gateway namespace: istio-system labels: experimental.istio.io/disable-gateway-port-translation: \"true\" spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8081",
"apiVersion: messaging.knative.dev/v1 kind: Channel metadata: name: example-channel namespace: default spec: channelTemplate: apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving",
"oc apply -f serving.yaml",
"oc get knativeserving.operator.knative.dev/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf \"%s=%s\\n\" .type .status}}{{end}}'",
"DependenciesInstalled=True DeploymentsAvailable=True InstallSucceeded=True Ready=True",
"oc get pods -n knative-serving",
"NAME READY STATUS RESTARTS AGE activator-67ddf8c9d7-p7rm5 2/2 Running 0 4m activator-67ddf8c9d7-q84fz 2/2 Running 0 4m autoscaler-5d87bc6dbf-6nqc6 2/2 Running 0 3m59s autoscaler-5d87bc6dbf-h64rl 2/2 Running 0 3m59s autoscaler-hpa-77f85f5cc4-lrts7 2/2 Running 0 3m57s autoscaler-hpa-77f85f5cc4-zx7hl 2/2 Running 0 3m56s controller-5cfc7cb8db-nlccl 2/2 Running 0 3m50s controller-5cfc7cb8db-rmv7r 2/2 Running 0 3m18s domain-mapping-86d84bb6b4-r746m 2/2 Running 0 3m58s domain-mapping-86d84bb6b4-v7nh8 2/2 Running 0 3m58s domainmapping-webhook-769d679d45-bkcnj 2/2 Running 0 3m58s domainmapping-webhook-769d679d45-fff68 2/2 Running 0 3m58s storage-version-migration-serving-serving-0.26.0--1-6qlkb 0/1 Completed 0 3m56s webhook-5fb774f8d8-6bqrt 2/2 Running 0 3m57s webhook-5fb774f8d8-b8lt5 2/2 Running 0 3m57s",
"oc get pods -n knative-serving-ingress",
"NAME READY STATUS RESTARTS AGE net-kourier-controller-7d4b6c5d95-62mkf 1/1 Running 0 76s net-kourier-controller-7d4b6c5d95-qmgm2 1/1 Running 0 76s 3scale-kourier-gateway-6688b49568-987qz 1/1 Running 0 75s 3scale-kourier-gateway-6688b49568-b5tnp 1/1 Running 0 75s",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing",
"oc apply -f eventing.yaml",
"oc get knativeeventing.operator.knative.dev/knative-eventing -n knative-eventing --template='{{range .status.conditions}}{{printf \"%s=%s\\n\" .type .status}}{{end}}'",
"InstallSucceeded=True Ready=True",
"oc get pods -n knative-eventing",
"NAME READY STATUS RESTARTS AGE broker-controller-58765d9d49-g9zp6 1/1 Running 0 7m21s eventing-controller-65fdd66b54-jw7bh 1/1 Running 0 7m31s eventing-webhook-57fd74b5bd-kvhlz 1/1 Running 0 7m31s imc-controller-5b75d458fc-ptvm2 1/1 Running 0 7m19s imc-dispatcher-64f6d5fccb-kkc4c 1/1 Running 0 7m18s",
"oc delete knativeservings.operator.knative.dev knative-serving -n knative-serving",
"oc delete namespace knative-serving",
"oc delete knativeeventings.operator.knative.dev knative-eventing -n knative-eventing",
"oc delete namespace knative-eventing",
"oc get subscription jaeger -n openshift-operators -o yaml | grep currentCSV",
"currentCSV: jaeger-operator.v1.8.2",
"oc delete subscription jaeger -n openshift-operators",
"subscription.operators.coreos.com \"jaeger\" deleted",
"oc delete clusterserviceversion jaeger-operator.v1.8.2 -n openshift-operators",
"clusterserviceversion.operators.coreos.com \"jaeger-operator.v1.8.2\" deleted",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"oc get crd -oname | grep 'knative.dev' | xargs oc delete",
"kn: No such file or directory",
"tar -xf <file>",
"echo USDPATH",
"oc get ConsoleCLIDownload",
"NAME DISPLAY NAME AGE kn kn - OpenShift Serverless Command Line Interface (CLI) 2022-09-20T08:41:18Z oc-cli-downloads oc - OpenShift Command Line Interface (CLI) 2022-09-20T08:00:20Z",
"oc get route -n openshift-serverless",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD kn kn-openshift-serverless.apps.example.com knative-openshift-metrics-3 http-cli edge/Redirect None",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager attach --pool=<pool_id> 1",
"subscription-manager repos --enable=\"openshift-serverless-1-for-rhel-8-x86_64-rpms\"",
"subscription-manager repos --enable=\"openshift-serverless-1-for-rhel-8-s390x-rpms\"",
"subscription-manager repos --enable=\"openshift-serverless-1-for-rhel-8-ppc64le-rpms\"",
"yum install openshift-serverless-clients",
"kn: No such file or directory",
"tar -xf <filename>",
"echo USDPATH",
"echo USDPATH",
"C:\\> path",
"plugins: path-lookup: true 1 directory: ~/.config/kn/plugins 2 eventing: sink-mappings: 3 - prefix: svc 4 group: core 5 version: v1 6 resource: services 7",
"kn event build --field <field-name>=<value> --type <type-name> --id <id> --output <format>",
"kn event build -o yaml",
"data: {} datacontenttype: application/json id: 81a402a2-9c29-4c27-b8ed-246a253c9e58 source: kn-event/v0.4.0 specversion: \"1.0\" time: \"2021-10-15T10:42:57.713226203Z\" type: dev.knative.cli.plugin.event.generic",
"kn event build --field operation.type=local-wire-transfer --field operation.amount=2345.40 --field operation.from=87656231 --field operation.to=2344121 --field automated=true --field signature='FGzCPLvYWdEgsdpb3qXkaVp7Da0=' --type org.example.bank.bar --id USD(head -c 10 < /dev/urandom | base64 -w 0) --output json",
"{ \"specversion\": \"1.0\", \"id\": \"RjtL8UH66X+UJg==\", \"source\": \"kn-event/v0.4.0\", \"type\": \"org.example.bank.bar\", \"datacontenttype\": \"application/json\", \"time\": \"2021-10-15T10:43:23.113187943Z\", \"data\": { \"automated\": true, \"operation\": { \"amount\": \"2345.40\", \"from\": 87656231, \"to\": 2344121, \"type\": \"local-wire-transfer\" }, \"signature\": \"FGzCPLvYWdEgsdpb3qXkaVp7Da0=\" } }",
"kn event send --field <field-name>=<value> --type <type-name> --id <id> --to-url <url> --to <cluster-resource> --namespace <namespace>",
"kn event send --field player.id=6354aa60-ddb1-452e-8c13-24893667de20 --field player.game=2345 --field points=456 --type org.example.gaming.foo --to-url http://ce-api.foo.example.com/",
"kn event send --type org.example.kn.ping --id USD(uuidgen) --field event.type=test --field event.data=98765 --to Service:serving.knative.dev/v1:event-display",
"kn service create <service-name> --image <image> --tag <tag-value>",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"Creating service 'event-display' in namespace 'default': 0.271s The Route is still working to reflect the latest desired specification. 0.580s Configuration \"event-display\" is waiting for a Revision to become ready. 3.857s 3.861s Ingress has not yet been reconciled. 4.270s Ready to serve. Service 'event-display' created with latest revision 'event-display-bxshg-1' and URL: http://event-display-default.apps-crc.testing",
"kn service update <service_name> --env <key>=<value>",
"kn service update <service_name> --port 80",
"kn service update <service_name> --request cpu=500m --limit memory=1024Mi --limit cpu=1000m",
"kn service update <service_name> --tag <revision_name>=latest",
"kn service update <service_name> --untag testing --tag @latest=staging",
"kn service update <service_name> --tag <revision_name>=test --traffic test=10,@latest=90",
"kn service apply <service_name> --image <image>",
"kn service apply <service_name> --image <image> --env <key>=<value>",
"kn service apply <service_name> -f <filename>",
"kn service describe --verbose <service_name>",
"Name: hello Namespace: default Age: 2m URL: http://hello-default.apps.ocp.example.com Revisions: 100% @latest (hello-00001) [1] (2m) Image: docker.io/openshift/hello-openshift (pinned to aaea76) Conditions: OK TYPE AGE REASON ++ Ready 1m ++ ConfigurationsReady 1m ++ RoutesReady 1m",
"Name: hello Namespace: default Annotations: serving.knative.dev/creator=system:admin serving.knative.dev/lastModifier=system:admin Age: 3m URL: http://hello-default.apps.ocp.example.com Cluster: http://hello.default.svc.cluster.local Revisions: 100% @latest (hello-00001) [1] (3m) Image: docker.io/openshift/hello-openshift (pinned to aaea76) Env: RESPONSE=Hello Serverless! Conditions: OK TYPE AGE REASON ++ Ready 3m ++ ConfigurationsReady 3m ++ RoutesReady 3m",
"kn service describe <service_name> -o yaml",
"kn service describe <service_name> -o json",
"kn service describe <service_name> -o url",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --target ./ --namespace test",
"Service 'event-display' created in namespace 'test'.",
"tree ./",
"./ └── test └── ksvc └── event-display.yaml 2 directories, 1 file",
"cat test/ksvc/event-display.yaml",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: creationTimestamp: null name: event-display namespace: test spec: template: metadata: annotations: client.knative.dev/user-image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest creationTimestamp: null spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest name: \"\" resources: {} status: {}",
"kn service describe event-display --target ./ --namespace test",
"Name: event-display Namespace: test Age: URL: Revisions: Conditions: OK TYPE AGE REASON",
"kn service create -f test/ksvc/event-display.yaml",
"Creating service 'event-display' in namespace 'test': 0.058s The Route is still working to reflect the latest desired specification. 0.098s 0.168s Configuration \"event-display\" is waiting for a Revision to become ready. 23.377s 23.419s Ingress has not yet been reconciled. 23.534s Waiting for load balancer to be ready 23.723s Ready to serve. Service 'event-display' created to latest revision 'event-display-00001' is available at URL: http://event-display-test.apps.example.com",
"kn container add <container_name> --image <image_uri>",
"kn container add sidecar --image docker.io/example/sidecar",
"containers: - image: docker.io/example/sidecar name: sidecar resources: {}",
"kn container add <first_container_name> --image <image_uri> | kn container add <second_container_name> --image <image_uri> | kn service create <service_name> --image <image_uri> --extra-containers -",
"kn container add sidecar --image docker.io/example/sidecar:first | kn container add second --image docker.io/example/sidecar:second | kn service create my-service --image docker.io/example/my-app:latest --extra-containers -",
"kn service create <service_name> --image <image_uri> --extra-containers <filename>",
"kn service create my-service --image docker.io/example/my-app:latest --extra-containers my-extra-containers.yaml",
"kn domain create <domain_mapping_name> --ref <target_name>",
"kn domain create example.com --ref example-service",
"kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace>",
"kn domain create example.com --ref ksvc:example-service:example-namespace",
"kn domain create <domain_mapping_name> --ref <kroute:route_name>",
"kn domain create example.com --ref kroute:example-route",
"kn domain list -n <domain_mapping_namespace>",
"kn domain describe <domain_mapping_name>",
"kn domain update --ref <target>",
"kn domain delete <domain_mapping_name>",
"kn source list-types",
"TYPE NAME DESCRIPTION ApiServerSource apiserversources.sources.knative.dev Watch and send Kubernetes API events to a sink PingSource pingsources.sources.knative.dev Periodically send ping events to a sink SinkBinding sinkbindings.sources.knative.dev Binding for connecting a PodSpecable to a sink",
"kn source list-types -o yaml",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"kn source container create <container_source_name> --image <image_uri> --sink <sink>",
"kn source container delete <container_source_name>",
"kn source container describe <container_source_name>",
"kn source container list",
"kn source container list -o yaml",
"kn source container update <container_source_name> --image <image_uri>",
"apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - \"\" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4",
"oc apply -f <filename>",
"kn source apiserver create <event_source_name> --sink broker:<broker_name> --resource \"event:v1\" --service-account <service_account_name> --mode Resource",
"kn service create <service_name> --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn trigger create <trigger_name> --sink ksvc:<service_name>",
"oc create deployment hello-node --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn source apiserver describe <source_name>",
"Name: mysource Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 3m ServiceAccountName: events-sa Mode: Resource Sink: Name: default Namespace: default Kind: Broker (eventing.knative.dev/v1) Resources: Kind: event (v1) Controller: false Conditions: OK TYPE AGE REASON ++ Ready 3m ++ Deployed 3m ++ SinkProvided 3m ++ SufficientPermissions 3m ++ EventTypesProvided 3m",
"oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json Data, { \"apiVersion\": \"v1\", \"involvedObject\": { \"apiVersion\": \"v1\", \"fieldPath\": \"spec.containers{hello-node}\", \"kind\": \"Pod\", \"name\": \"hello-node\", \"namespace\": \"default\", .. }, \"kind\": \"Event\", \"message\": \"Started container\", \"metadata\": { \"name\": \"hello-node.159d7608e3a3572c\", \"namespace\": \"default\", . }, \"reason\": \"Started\", }",
"kn trigger delete <trigger_name>",
"kn source apiserver delete <source_name>",
"oc delete -f authentication.yaml",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn source ping create test-ping-source --schedule \"*/2 * * * *\" --data '{\"message\": \"Hello world!\"}' --sink ksvc:event-display",
"kn source ping describe test-ping-source",
"Name: test-ping-source Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 15s Schedule: */2 * * * * Data: {\"message\": \"Hello world!\"} Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 8s ++ Deployed 8s ++ SinkProvided 15s ++ ValidSchedule 15s ++ EventTypeProvided 15s ++ ResourcesCorrect 15s",
"watch oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 99e4f4f6-08ff-4bff-acf1-47f61ded68c9 time: 2020-04-07T16:16:00.000601161Z datacontenttype: application/json Data, { \"message\": \"Hello world!\" }",
"kn delete pingsources.sources.knative.dev <ping_source_name>",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display",
"kn source kafka create <kafka_source_name> --servers <cluster_kafka_bootstrap>.kafka.svc:9092 --topics <topic_name> --consumergroup my-consumer-group --sink event-display",
"kn source kafka describe <kafka_source_name>",
"Name: example-kafka-source Namespace: kafka Age: 1h BootstrapServers: example-cluster-kafka-bootstrap.kafka.svc:9092 Topics: example-topic ConsumerGroup: example-consumer-group Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 1h ++ Deployed 1h ++ SinkProvided 1h",
"oc -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:latest-kafka-2.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list <cluster_kafka_bootstrap>:9092 --topic my-topic",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.kafka.event source: /apis/v1/namespaces/default/kafkasources/example-kafka-source#example-topic subject: partition:46#0 id: partition:46/offset:0 time: 2021-03-10T11:21:49.4Z Extensions, traceparent: 00-161ff3815727d8755848ec01c866d1cd-7ff3916c44334678-00 Data, Hello!",
"kn func create -r <repository> -l <runtime> -t <template> <path>",
"kn func create -l typescript -t events examplefunc",
"Project path: /home/user/demo/examplefunc Function name: examplefunc Runtime: typescript Template: events Writing events to /home/user/demo/examplefunc",
"kn func create -r https://github.com/boson-project/templates/ -l node -t hello-world examplefunc",
"Project path: /home/user/demo/examplefunc Function name: examplefunc Runtime: node Template: hello-world Writing events to /home/user/demo/examplefunc",
"kn func run",
"kn func run --path=<directory_path>",
"kn func run --build",
"kn func run --build=false",
"kn func help run",
"kn func build",
"kn func build --builder pack",
"kn func build",
"Building function image Function image has been built, image: registry.redhat.io/example/example-function:latest",
"kn func build --registry quay.io/username",
"Building function image Function image has been built, image: quay.io/username/example-function:latest",
"kn func build --push",
"kn func help build",
"kn func deploy [-n <namespace> -p <path> -i <image>]",
"Function deployed at: http://func.example.com",
"kn func list [-n <namespace> -p <path>]",
"NAME NAMESPACE RUNTIME URL READY example-function default node http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com True",
"kn service list -n <namespace>",
"NAME URL LATEST AGE CONDITIONS READY REASON example-function http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com example-function-gzl4c 16m 3 OK / 3 True",
"kn func info [-f <format> -n <namespace> -p <path>]",
"kn func info -p function/example-function",
"Function name: example-function Function is built in image: docker.io/user/example-function:latest Function is deployed as Knative Service: example-function Function is deployed in namespace: default Routes: http://example-function.default.apps.ci-ln-g9f36hb-d5d6b.origin-ci-int-aws.dev.rhcloud.com",
"kn func invoke",
"kn func invoke --type <event_type> --source <event_source> --data <event_data> --content-type <content_type> --id <event_ID> --format <format> --namespace <namespace>",
"kn func invoke --type ping --source example-ping --data \"Hello world!\" --content-type \"text/plain\" --id example-ID --format http --namespace my-ns",
"kn func invoke --file <path> --content-type <content-type>",
"kn func invoke --file ./test.json --content-type application/json",
"kn func invoke --path <path_to_function>",
"kn func invoke --path ./example/example-function",
"kn func invoke",
"kn func invoke --target <target>",
"kn func invoke --target remote",
"kn func invoke --target \"https://my-event-broker.example.com\"",
"kn func invoke --target local",
"kn func delete [<function_name> -n <namespace> -p <path>]",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: hello 1 namespace: default 2 spec: template: spec: containers: - image: docker.io/openshift/hello-openshift 3 env: - name: RESPONSE 4 value: \"Hello Serverless!\"",
"kn service create <service-name> --image <image> --tag <tag-value>",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"Creating service 'event-display' in namespace 'default': 0.271s The Route is still working to reflect the latest desired specification. 0.580s Configuration \"event-display\" is waiting for a Revision to become ready. 3.857s 3.861s Ingress has not yet been reconciled. 4.270s Ready to serve. Service 'event-display' created with latest revision 'event-display-bxshg-1' and URL: http://event-display-default.apps-crc.testing",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --target ./ --namespace test",
"Service 'event-display' created in namespace 'test'.",
"tree ./",
"./ └── test └── ksvc └── event-display.yaml 2 directories, 1 file",
"cat test/ksvc/event-display.yaml",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: creationTimestamp: null name: event-display namespace: test spec: template: metadata: annotations: client.knative.dev/user-image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest creationTimestamp: null spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest name: \"\" resources: {} status: {}",
"kn service describe event-display --target ./ --namespace test",
"Name: event-display Namespace: test Age: URL: Revisions: Conditions: OK TYPE AGE REASON",
"kn service create -f test/ksvc/event-display.yaml",
"Creating service 'event-display' in namespace 'test': 0.058s The Route is still working to reflect the latest desired specification. 0.098s 0.168s Configuration \"event-display\" is waiting for a Revision to become ready. 23.377s 23.419s Ingress has not yet been reconciled. 23.534s Waiting for load balancer to be ready 23.723s Ready to serve. Service 'event-display' created to latest revision 'event-display-00001' is available at URL: http://event-display-test.apps.example.com",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-delivery namespace: default spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest env: - name: RESPONSE value: \"Hello Serverless!\"",
"oc apply -f <filename>",
"oc get ksvc <service_name>",
"NAME URL LATESTCREATED LATESTREADY READY REASON event-delivery http://event-delivery-default.example.com event-delivery-4wsd2 event-delivery-4wsd2 True",
"curl http://event-delivery-default.example.com",
"curl https://event-delivery-default.example.com",
"Hello Serverless!",
"curl https://event-delivery-default.example.com --insecure",
"Hello Serverless!",
"curl https://event-delivery-default.example.com --cacert <file>",
"Hello Serverless!",
"spec: ingress: kourier: service-type: LoadBalancer",
"oc -n knative-serving-ingress get svc kourier",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kourier LoadBalancer 172.30.51.103 a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com 80:31380/TCP,443:31390/TCP 67m",
"curl -H \"Host: hello-default.example.com\" a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com",
"Hello Serverless!",
"grpc.Dial( \"a83e86291bcdd11e993af02b7a65e514-33544245.us-east-1.elb.amazonaws.com:80\", grpc.WithAuthority(\"hello-default.example.com:80\"), grpc.WithInsecure(), )",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default namespace: example-namespace spec: podSelector: ingress: []",
"oc label namespace knative-serving knative.openshift.io/system-namespace=true",
"oc label namespace knative-serving-ingress knative.openshift.io/system-namespace=true",
"oc label namespace knative-eventing knative.openshift.io/system-namespace=true",
"oc label namespace knative-kafka knative.openshift.io/system-namespace=true",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: <network_policy_name> 1 namespace: <namespace> 2 spec: ingress: - from: - namespaceSelector: matchLabels: knative.openshift.io/system-namespace: \"true\" podSelector: {} policyTypes: - Ingress",
"apiVersion: serving.knative.dev/v1 kind: Service spec: template: spec: initContainers: - imagePullPolicy: IfNotPresent 1 image: <image_uri> 2 volumeMounts: 3 - name: data mountPath: /data",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example namespace: default annotations: networking.knative.dev/http-option: \"redirected\" spec:",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/min-scale: \"0\"",
"kn service create <service_name> --image <image_uri> --scale-min <integer>",
"kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --scale-min 2",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/max-scale: \"10\"",
"kn service create <service_name> --image <image_uri> --scale-max <integer>",
"kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --scale-max 10",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target: \"200\"",
"kn service create <service_name> --image <image_uri> --concurrency-target <integer>",
"kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --concurrency-target 50",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: spec: containerConcurrency: 50",
"kn service create <service_name> --image <image_uri> --concurrency-limit <integer>",
"kn service create example-service --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest --concurrency-limit 50",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: template: metadata: annotations: autoscaling.knative.dev/target-utilization-percentage: \"70\"",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - latestRevision: true percent: 100 status: traffic: - percent: 100 revisionName: example-service",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - tag: current revisionName: example-service percent: 100 - tag: latest latestRevision: true percent: 0",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - tag: current revisionName: example-service-1 percent: 50 - tag: candidate revisionName: example-service-2 percent: 50 - tag: latest latestRevision: true percent: 0",
"kn service update <service_name> --tag @latest=example-tag",
"kn service update <service_name> --untag example-tag",
"kn service update <service_name> --traffic <revision>=<percentage>",
"kn service update example-service --traffic @latest=20,stable=80",
"kn service update example-service --traffic @latest=10,stable=60",
"oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}'",
"oc get ksvc example-service -o=jsonpath='{.status.latestCreatedRevisionName}'",
"example-service-00001",
"spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic goes to this revision",
"oc get ksvc <service_name>",
"oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}'",
"spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic is still being routed to the first revision - revisionName: <second_revision_name> percent: 0 # No traffic is routed to the second revision tag: v2 # A named route",
"oc get ksvc <service_name> --output jsonpath=\"{.status.traffic[*].url}\"",
"spec: traffic: - revisionName: <first_revision_name> percent: 50 - revisionName: <second_revision_name> percent: 50 tag: v2",
"spec: traffic: - revisionName: <first_revision_name> percent: 0 - revisionName: <second_revision_name> percent: 100 tag: v2",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> labels: <label_name>: <label_value> annotations: <annotation_name>: <annotation_value>",
"kn service create <service_name> --image=<image> --annotation <annotation_name>=<annotation_value> --label <label_value>=<label_value>",
"oc get routes.route.openshift.io -l serving.knative.openshift.io/ingressName=<service_name> \\ 1 -l serving.knative.openshift.io/ingressNamespace=<service_namespace> \\ 2 -n knative-serving-ingress -o yaml | grep -e \"<label_name>: \\\"<label_value>\\\"\" -e \"<annotation_name>: <annotation_value>\" 3",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> annotations: serving.knative.openshift.io/disableRoute: \"true\" spec: template: spec: containers: - image: <image>",
"oc apply -f <filename>",
"kn service create <service_name> --image=gcr.io/knative-samples/helloworld-go --annotation serving.knative.openshift.io/disableRoute=true",
"USD oc get routes.route.openshift.io -l serving.knative.openshift.io/ingressName=USDKSERVICE_NAME -l serving.knative.openshift.io/ingressNamespace=USDKSERVICE_NAMESPACE -n knative-serving-ingress",
"No resources found in knative-serving-ingress namespace.",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 600s 1 name: <route_name> 2 namespace: knative-serving-ingress 3 spec: host: <service_host> 4 port: targetPort: http2 to: kind: Service name: kourier weight: 100 tls: insecureEdgeTerminationPolicy: Allow termination: edge 5 key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE---- wildcardPolicy: None",
"oc apply -f <filename>",
"oc label ksvc <service_name> networking.knative.dev/visibility=cluster-local",
"oc get ksvc",
"NAME URL LATESTCREATED LATESTREADY READY REASON hello http://hello.default.svc.cluster.local hello-tx2g7 hello-tx2g7 True",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> 1 spec: subscriber: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <kafka_sink_name> 2",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: spec: delivery: deadLetterSink: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer>",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: spec: broker: <broker_name> delivery: deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer>",
"apiVersion: messaging.knative.dev/v1 kind: Channel metadata: spec: delivery: deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer>",
"apiVersion: messaging.knative.dev/v1 kind: Subscription metadata: spec: channel: apiVersion: messaging.knative.dev/v1 kind: Channel name: <channel_name> delivery: deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: <sink_name> backoffDelay: <duration> backoffPolicy: <policy_type> retry: <integer>",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> annotations: kafka.eventing.knative.dev/delivery.order: ordered",
"oc apply -f <filename>",
"kn source list-types",
"TYPE NAME DESCRIPTION ApiServerSource apiserversources.sources.knative.dev Watch and send Kubernetes API events to a sink PingSource pingsources.sources.knative.dev Periodically send ping events to a sink SinkBinding sinkbindings.sources.knative.dev Binding for connecting a PodSpecable to a sink",
"kn source list-types -o yaml",
"kn source list",
"NAME TYPE RESOURCE SINK READY a1 ApiServerSource apiserversources.sources.knative.dev ksvc:eshow2 True b1 SinkBinding sinkbindings.sources.knative.dev ksvc:eshow3 False p1 PingSource pingsources.sources.knative.dev ksvc:eshow1 True",
"kn source list --type <event_source_type>",
"kn source list --type PingSource",
"NAME TYPE RESOURCE SINK READY p1 PingSource pingsources.sources.knative.dev ksvc:eshow1 True",
"apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - \"\" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4",
"oc apply -f <filename>",
"apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - \"\" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4",
"oc apply -f <filename>",
"kn source apiserver create <event_source_name> --sink broker:<broker_name> --resource \"event:v1\" --service-account <service_account_name> --mode Resource",
"kn service create <service_name> --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn trigger create <trigger_name> --sink ksvc:<service_name>",
"oc create deployment hello-node --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn source apiserver describe <source_name>",
"Name: mysource Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 3m ServiceAccountName: events-sa Mode: Resource Sink: Name: default Namespace: default Kind: Broker (eventing.knative.dev/v1) Resources: Kind: event (v1) Controller: false Conditions: OK TYPE AGE REASON ++ Ready 3m ++ Deployed 3m ++ SinkProvided 3m ++ SufficientPermissions 3m ++ EventTypesProvided 3m",
"oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json Data, { \"apiVersion\": \"v1\", \"involvedObject\": { \"apiVersion\": \"v1\", \"fieldPath\": \"spec.containers{hello-node}\", \"kind\": \"Pod\", \"name\": \"hello-node\", \"namespace\": \"default\", .. }, \"kind\": \"Event\", \"message\": \"Started container\", \"metadata\": { \"name\": \"hello-node.159d7608e3a3572c\", \"namespace\": \"default\", . }, \"reason\": \"Started\", }",
"kn trigger delete <trigger_name>",
"kn source apiserver delete <source_name>",
"oc delete -f authentication.yaml",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: v1 kind: ServiceAccount metadata: name: events-sa namespace: default 1 --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: event-watcher namespace: default 2 rules: - apiGroups: - \"\" resources: - events verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: k8s-ra-event-watcher namespace: default 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: event-watcher subjects: - kind: ServiceAccount name: events-sa namespace: default 4",
"oc apply -f <filename>",
"apiVersion: sources.knative.dev/v1alpha1 kind: ApiServerSource metadata: name: testevents spec: serviceAccountName: events-sa mode: Resource resources: - apiVersion: v1 kind: Event sink: ref: apiVersion: eventing.knative.dev/v1 kind: Broker name: default",
"oc apply -f <filename>",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display namespace: default spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"oc apply -f <filename>",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: event-display-trigger namespace: default spec: broker: default subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"oc create deployment hello-node --image=quay.io/openshift-knative/knative-eventing-sources-event-display",
"oc get apiserversource.sources.knative.dev testevents -o yaml",
"apiVersion: sources.knative.dev/v1alpha1 kind: ApiServerSource metadata: annotations: creationTimestamp: \"2020-04-07T17:24:54Z\" generation: 1 name: testevents namespace: default resourceVersion: \"62868\" selfLink: /apis/sources.knative.dev/v1alpha1/namespaces/default/apiserversources/testevents2 uid: 1603d863-bb06-4d1c-b371-f580b4db99fa spec: mode: Resource resources: - apiVersion: v1 controller: false controllerSelector: apiVersion: \"\" kind: \"\" name: \"\" uid: \"\" kind: Event labelSelector: {} serviceAccountName: events-sa sink: ref: apiVersion: eventing.knative.dev/v1 kind: Broker name: default",
"oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.apiserver.resource.update datacontenttype: application/json Data, { \"apiVersion\": \"v1\", \"involvedObject\": { \"apiVersion\": \"v1\", \"fieldPath\": \"spec.containers{hello-node}\", \"kind\": \"Pod\", \"name\": \"hello-node\", \"namespace\": \"default\", .. }, \"kind\": \"Event\", \"message\": \"Started container\", \"metadata\": { \"name\": \"hello-node.159d7608e3a3572c\", \"namespace\": \"default\", . }, \"reason\": \"Started\", }",
"oc delete -f trigger.yaml",
"oc delete -f k8s-events.yaml",
"oc delete -f authentication.yaml",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn source ping create test-ping-source --schedule \"*/2 * * * *\" --data '{\"message\": \"Hello world!\"}' --sink ksvc:event-display",
"kn source ping describe test-ping-source",
"Name: test-ping-source Namespace: default Annotations: sources.knative.dev/creator=developer, sources.knative.dev/lastModifier=developer Age: 15s Schedule: */2 * * * * Data: {\"message\": \"Hello world!\"} Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 8s ++ Deployed 8s ++ SinkProvided 15s ++ ValidSchedule 15s ++ EventTypeProvided 15s ++ ResourcesCorrect 15s",
"watch oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 99e4f4f6-08ff-4bff-acf1-47f61ded68c9 time: 2020-04-07T16:16:00.000601161Z datacontenttype: application/json Data, { \"message\": \"Hello world!\" }",
"kn delete pingsources.sources.knative.dev <ping_source_name>",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: sources.knative.dev/v1 kind: PingSource metadata: name: test-ping-source spec: schedule: \"*/2 * * * *\" 1 data: '{\"message\": \"Hello world!\"}' 2 sink: 3 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"oc apply -f <filename>",
"apiVersion: sources.knative.dev/v1 kind: PingSource metadata: name: test-ping-source spec: schedule: \"*/2 * * * *\" data: '{\"message\": \"Hello world!\"}' sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"oc get pingsource.sources.knative.dev <ping_source_name> -oyaml",
"apiVersion: sources.knative.dev/v1 kind: PingSource metadata: annotations: sources.knative.dev/creator: developer sources.knative.dev/lastModifier: developer creationTimestamp: \"2020-04-07T16:11:14Z\" generation: 1 name: test-ping-source namespace: default resourceVersion: \"55257\" selfLink: /apis/sources.knative.dev/v1/namespaces/default/pingsources/test-ping-source uid: 3d80d50b-f8c7-4c1b-99f7-3ec00e0a8164 spec: data: '{ value: \"hello\" }' schedule: '*/2 * * * *' sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display namespace: default",
"watch oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.sources.ping source: /apis/v1/namespaces/default/pingsources/test-ping-source id: 042ff529-240e-45ee-b40c-3a908129853e time: 2020-04-07T16:22:00.000791674Z datacontenttype: application/json Data, { \"message\": \"Hello world!\" }",
"oc delete -f <filename>",
"oc delete -f ping-source.yaml",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"oc apply -f <filename>",
"apiVersion: sources.knative.dev/v1alpha1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: batch/v1 kind: Job 1 selector: matchLabels: app: heartbeat-cron sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"apiVersion: batch/v1beta1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: \"* * * * *\" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\" spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: \"true\" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\"",
"oc apply -f <filename>",
"oc get sinkbindings.sources.knative.dev bind-heartbeat -oyaml",
"spec: sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display namespace: default subject: apiVersion: batch/v1 kind: Job namespace: default selector: matchLabels: app: heartbeat-cron",
"oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { \"id\": 1, \"label\": \"\" }",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"kn source binding create bind-heartbeat --subject Job:batch/v1:app=heartbeat-cron --sink ksvc:event-display",
"apiVersion: batch/v1beta1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: \"* * * * *\" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\" spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats:latest args: - --period=1 env: - name: ONE_SHOT value: \"true\" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: \"true\"",
"oc apply -f <filename>",
"kn source binding describe bind-heartbeat",
"Name: bind-heartbeat Namespace: demo-2 Annotations: sources.knative.dev/creator=minikube-user, sources.knative.dev/lastModifier=minikub Age: 2m Subject: Resource: job (batch/v1) Selector: app: heartbeat-cron Sink: Name: event-display Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 2m",
"oc get pods",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.eventing.samples.heartbeat source: https://knative.dev/eventing-contrib/cmd/heartbeats/#event-test/mypod id: 2b72d7bf-c38f-4a98-a433-608fbcdd2596 time: 2019-10-18T15:23:20.809775386Z contenttype: application/json Extensions, beats: true heart: yes the: 42 Data, { \"id\": 1, \"label\": \"\" }",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: event-display spec: template: spec: containers: - image: quay.io/openshift-knative/knative-eventing-sources-event-display:latest",
"apiVersion: batch/v1 kind: CronJob metadata: name: heartbeat-cron spec: # Run every minute schedule: \"*/1 * * * *\" jobTemplate: metadata: labels: app: heartbeat-cron bindings.knative.dev/include: true 1 spec: template: spec: restartPolicy: Never containers: - name: single-heartbeat image: quay.io/openshift-knative/heartbeats args: - --period=1 env: - name: ONE_SHOT value: \"true\" - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: apps/v1 kind: Deployment namespace: default name: mysubject",
"apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: batch/v1 kind: Job namespace: default selector: matchLabels: working: example",
"apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: subject: apiVersion: v1 kind: Pod namespace: default selector: - matchExpression: key: working operator: In values: - example - sample",
"apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: name: bind-heartbeat spec: ceOverrides: extensions: extra: this is an extra attribute additional: 42",
"{ \"extensions\": { \"extra\": \"this is an extra attribute\", \"additional\": \"42\" } }",
"oc label namespace <namespace> bindings.knative.dev/include=true",
"package main import ( \"context\" \"encoding/json\" \"flag\" \"fmt\" \"log\" \"os\" \"strconv\" \"time\" duckv1 \"knative.dev/pkg/apis/duck/v1\" cloudevents \"github.com/cloudevents/sdk-go/v2\" \"github.com/kelseyhightower/envconfig\" ) type Heartbeat struct { Sequence int `json:\"id\"` Label string `json:\"label\"` } var ( eventSource string eventType string sink string label string periodStr string ) func init() { flag.StringVar(&eventSource, \"eventSource\", \"\", \"the event-source (CloudEvents)\") flag.StringVar(&eventType, \"eventType\", \"dev.knative.eventing.samples.heartbeat\", \"the event-type (CloudEvents)\") flag.StringVar(&sink, \"sink\", \"\", \"the host url to heartbeat to\") flag.StringVar(&label, \"label\", \"\", \"a special label\") flag.StringVar(&periodStr, \"period\", \"5\", \"the number of seconds between heartbeats\") } type envConfig struct { // Sink URL where to send heartbeat cloud events Sink string `envconfig:\"K_SINK\"` // CEOverrides are the CloudEvents overrides to be applied to the outbound event. CEOverrides string `envconfig:\"K_CE_OVERRIDES\"` // Name of this pod. Name string `envconfig:\"POD_NAME\" required:\"true\"` // Namespace this pod exists in. Namespace string `envconfig:\"POD_NAMESPACE\" required:\"true\"` // Whether to run continuously or exit. OneShot bool `envconfig:\"ONE_SHOT\" default:\"false\"` } func main() { flag.Parse() var env envConfig if err := envconfig.Process(\"\", &env); err != nil { log.Printf(\"[ERROR] Failed to process env var: %s\", err) os.Exit(1) } if env.Sink != \"\" { sink = env.Sink } var ceOverrides *duckv1.CloudEventOverrides if len(env.CEOverrides) > 0 { overrides := duckv1.CloudEventOverrides{} err := json.Unmarshal([]byte(env.CEOverrides), &overrides) if err != nil { log.Printf(\"[ERROR] Unparseable CloudEvents overrides %s: %v\", env.CEOverrides, err) os.Exit(1) } ceOverrides = &overrides } p, err := cloudevents.NewHTTP(cloudevents.WithTarget(sink)) if err != nil { log.Fatalf(\"failed to create http protocol: %s\", err.Error()) } c, err := cloudevents.NewClient(p, cloudevents.WithUUIDs(), cloudevents.WithTimeNow()) if err != nil { log.Fatalf(\"failed to create client: %s\", err.Error()) } var period time.Duration if p, err := strconv.Atoi(periodStr); err != nil { period = time.Duration(5) * time.Second } else { period = time.Duration(p) * time.Second } if eventSource == \"\" { eventSource = fmt.Sprintf(\"https://knative.dev/eventing-contrib/cmd/heartbeats/#%s/%s\", env.Namespace, env.Name) log.Printf(\"Heartbeats Source: %s\", eventSource) } if len(label) > 0 && label[0] == '\"' { label, _ = strconv.Unquote(label) } hb := &Heartbeat{ Sequence: 0, Label: label, } ticker := time.NewTicker(period) for { hb.Sequence++ event := cloudevents.NewEvent(\"1.0\") event.SetType(eventType) event.SetSource(eventSource) event.SetExtension(\"the\", 42) event.SetExtension(\"heart\", \"yes\") event.SetExtension(\"beats\", true) if ceOverrides != nil && ceOverrides.Extensions != nil { for n, v := range ceOverrides.Extensions { event.SetExtension(n, v) } } if err := event.SetData(cloudevents.ApplicationJSON, hb); err != nil { log.Printf(\"failed to set cloudevents data: %s\", err.Error()) } log.Printf(\"sending cloudevent to %s\", sink) if res := c.Send(context.Background(), event); !cloudevents.IsACK(res) { log.Printf(\"failed to send cloudevent: %v\", res) } if env.OneShot { return } // Wait for next tick <-ticker.C } }",
"apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: template: spec: containers: # This corresponds to a heartbeats image URI that you have built and published - image: gcr.io/knative-releases/knative.dev/eventing/cmd/heartbeats name: heartbeats args: - --period=1 env: - name: POD_NAME value: \"example-pod\" - name: POD_NAMESPACE value: \"event-test\" sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: example-service",
"kn source container create <container_source_name> --image <image_uri> --sink <sink>",
"kn source container delete <container_source_name>",
"kn source container describe <container_source_name>",
"kn source container list",
"kn source container list -o yaml",
"kn source container update <container_source_name> --image <image_uri>",
"apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: template: spec: containers: - image: quay.io/openshift-knative/heartbeats:latest name: heartbeats args: - --period=1 env: - name: POD_NAME value: \"mypod\" - name: POD_NAMESPACE value: \"event-test\"",
"apiVersion: sources.knative.dev/v1 kind: ContainerSource metadata: name: test-heartbeats spec: ceOverrides: extensions: extra: this is an extra attribute additional: 42",
"{ \"extensions\": { \"extra\": \"this is an extra attribute\", \"additional\": \"42\" } }",
"kn channel create <channel_name> --type <channel_type>",
"kn channel create mychannel --type messaging.knative.dev:v1:InMemoryChannel",
"Channel 'mychannel' created in namespace 'default'.",
"kn channel list",
"kn channel list NAME TYPE URL AGE READY REASON mychannel InMemoryChannel http://mychannel-kn-channel.default.svc.cluster.local 93s True",
"kn channel delete <channel_name>",
"apiVersion: messaging.knative.dev/v1 kind: Channel metadata: name: example-channel namespace: default",
"oc apply -f <filename>",
"apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel metadata: name: example-channel namespace: default spec: numPartitions: 3 replicationFactor: 1",
"oc apply -f <filename>",
"apiVersion: messaging.knative.dev/v1beta1 kind: Subscription metadata: name: my-subscription 1 namespace: default spec: channel: 2 apiVersion: messaging.knative.dev/v1beta1 kind: Channel name: example-channel delivery: 3 deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: error-handler subscriber: 4 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"kn subscription create <subscription_name> --channel <group:version:kind>:<channel_name> \\ 1 --sink <sink_prefix>:<sink_name> \\ 2 --sink-dead-letter <sink_prefix>:<sink_name> 3",
"kn subscription create mysubscription --channel mychannel --sink ksvc:event-display",
"Subscription 'mysubscription' created in namespace 'default'.",
"kn subscription list",
"NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True",
"kn subscription delete <subscription_name>",
"kn subscription describe <subscription_name>",
"Name: my-subscription Namespace: default Annotations: messaging.knative.dev/creator=openshift-user, messaging.knative.dev/lastModifier=min Age: 43s Channel: Channel:my-channel (messaging.knative.dev/v1) Subscriber: URI: http://edisplay.default.example.com Reply: Name: default Resource: Broker (eventing.knative.dev/v1) DeadLetterSink: Name: my-sink Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 43s ++ AddedToChannel 43s ++ ChannelReady 43s ++ ReferencesResolved 43s",
"kn subscription list",
"NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True",
"kn subscription update <subscription_name> --sink <sink_prefix>:<sink_name> \\ 1 --sink-dead-letter <sink_prefix>:<sink_name> 2",
"kn subscription update mysubscription --sink ksvc:event-display",
"kn broker create <broker_name>",
"kn broker list",
"NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: annotations: eventing.knative.dev/injection: enabled name: <trigger_name> spec: broker: default subscriber: 1 ref: apiVersion: serving.knative.dev/v1 kind: Service name: <service_name>",
"oc apply -f <filename>",
"oc -n <namespace> get broker default",
"NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s",
"oc label namespace <namespace> eventing.knative.dev/injection=enabled",
"oc -n <namespace> get broker <broker_name>",
"oc -n default get broker default",
"NAME READY REASON URL AGE default True http://broker-ingress.knative-eventing.svc.cluster.local/test/default 3m56s",
"oc label namespace <namespace> eventing.knative.dev/injection-",
"oc -n <namespace> delete broker <broker_name>",
"oc -n <namespace> get broker <broker_name>",
"oc -n default get broker default",
"No resources found. Error from server (NotFound): brokers.eventing.knative.dev \"default\" not found",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 name: example-kafka-broker spec: config: apiVersion: v1 kind: ConfigMap name: kafka-broker-config 2 namespace: knative-eventing",
"oc apply -f <filename>",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: annotations: eventing.knative.dev/broker.class: Kafka 1 kafka.eventing.knative.dev/external.topic: <topic_name> 2",
"oc apply -f <filename>",
"kn broker list",
"NAME URL AGE CONDITIONS READY REASON default http://broker-ingress.knative-eventing.svc.cluster.local/test/default 45s 5 OK / 5 True",
"kn broker describe <broker_name>",
"kn broker describe default",
"Name: default Namespace: default Annotations: eventing.knative.dev/broker.class=MTChannelBasedBroker, eventing.knative.dev/creato Age: 22s Address: URL: http://broker-ingress.knative-eventing.svc.cluster.local/default/default Conditions: OK TYPE AGE REASON ++ Ready 22s ++ Addressable 22s ++ FilterReady 22s ++ IngressReady 22s ++ TriggerChannelReady 22s",
"kn trigger create <trigger_name> --broker <broker_name> --filter <key=value> --sink <sink_name>",
"kn trigger create <trigger_name> --inject-broker --filter <key=value> --sink <sink_name>",
"kn trigger list",
"NAME BROKER SINK AGE CONDITIONS READY REASON email default ksvc:edisplay 4s 5 OK / 5 True ping default ksvc:edisplay 32s 5 OK / 5 True",
"kn trigger list -o json",
"kn trigger describe <trigger_name>",
"Name: ping Namespace: default Labels: eventing.knative.dev/broker=default Annotations: eventing.knative.dev/creator=kube:admin, eventing.knative.dev/lastModifier=kube:admin Age: 2m Broker: default Filter: type: dev.knative.event Sink: Name: edisplay Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 2m ++ BrokerReady 2m ++ DependencyReady 2m ++ Subscribed 2m ++ SubscriberResolved 2m",
"kn trigger create <trigger_name> --broker <broker_name> --filter type=dev.knative.samples.helloworld --sink ksvc:<service_name>",
"kn trigger create <trigger_name> --broker <broker_name> --sink ksvc:<service_name> --filter type=dev.knative.samples.helloworld --filter source=dev.knative.samples/helloworldsource --filter myextension=my-extension-value",
"kn trigger update <trigger_name> --filter <key=value> --sink <sink_name> [flags]",
"kn trigger update <trigger_name> --filter type=knative.dev.event",
"kn trigger update <trigger_name> --filter type-",
"kn trigger update <trigger_name> --sink ksvc:my-event-sink",
"kn trigger delete <trigger_name>",
"kn trigger list",
"No triggers found.",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> annotations: kafka.eventing.knative.dev/delivery.order: ordered",
"oc apply -f <filename>",
"kn service create event-display --image quay.io/openshift-knative/knative-eventing-sources-event-display",
"kn source kafka create <kafka_source_name> --servers <cluster_kafka_bootstrap>.kafka.svc:9092 --topics <topic_name> --consumergroup my-consumer-group --sink event-display",
"kn source kafka describe <kafka_source_name>",
"Name: example-kafka-source Namespace: kafka Age: 1h BootstrapServers: example-cluster-kafka-bootstrap.kafka.svc:9092 Topics: example-topic ConsumerGroup: example-consumer-group Sink: Name: event-display Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 1h ++ Deployed 1h ++ SinkProvided 1h",
"oc -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:latest-kafka-2.7.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list <cluster_kafka_bootstrap>:9092 --topic my-topic",
"oc logs USD(oc get pod -o name | grep event-display) -c user-container",
"☁\\ufe0f cloudevents.Event Validation: valid Context Attributes, specversion: 1.0 type: dev.knative.kafka.event source: /apis/v1/namespaces/default/kafkasources/example-kafka-source#example-topic subject: partition:46#0 id: partition:46/offset:0 time: 2021-03-10T11:21:49.4Z Extensions, traceparent: 00-161ff3815727d8755848ec01c866d1cd-7ff3916c44334678-00 Data, Hello!",
"kn source binding create bind-heartbeat --namespace sinkbinding-example --subject \"Job:batch/v1:app=heartbeat-cron\" --sink http://event-display.svc.cluster.local \\ 1 --ce-override \"sink=bound\"",
"apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: <source_name> spec: consumerGroup: <group_name> 1 bootstrapServers: - <list_of_bootstrap_servers> topics: - <list_of_topics> 2 sink: - <list_of_sinks> 3",
"apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: kafka-source spec: consumerGroup: knative-group bootstrapServers: - my-cluster-kafka-bootstrap.kafka:9092 topics: - knative-demo-topic sink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE kafkasource-kafka-source-5ca0248f-... 1/1 Running 0 13m",
"apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel metadata: name: example-channel namespace: default spec: numPartitions: 3 replicationFactor: 1",
"oc apply -f <filename>",
"apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink metadata: name: <sink-name> namespace: <namespace> spec: topic: <topic-name> bootstrapServers: - <bootstrap-server>",
"oc apply -f <filename>",
"apiVersion: sources.knative.dev/v1alpha2 kind: ApiServerSource metadata: name: <source-name> 1 namespace: <namespace> 2 spec: serviceAccountName: <service-account-name> 3 mode: Resource resources: - apiVersion: v1 kind: Event sink: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <sink-name> 4",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: 1 default-ch-webhook: 2 default-ch-config: | clusterDefault: 3 apiVersion: messaging.knative.dev/v1 kind: InMemoryChannel spec: delivery: backoffDelay: PT0.5S backoffPolicy: exponential retry: 5 namespaceDefaults: 4 my-namespace: apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel spec: numPartitions: 1 replicationFactor: 1",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: 1 config-br-default-channel: channel-template-spec: | apiVersion: messaging.knative.dev/v1beta1 kind: KafkaChannel 2 spec: numPartitions: 6 3 replicationFactor: 3 4",
"oc apply -f <filename>",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: defaultBrokerClass: Kafka 1 config: 2 config-br-defaults: 3 default-br-config: | clusterDefault: 4 brokerClass: Kafka apiVersion: v1 kind: ConfigMap name: kafka-broker-config 5 namespace: knative-eventing 6 namespaceDefaults: 7 my-namespace: brokerClass: MTChannelBasedBroker apiVersion: v1 kind: ConfigMap name: config-br-default-channel 8 namespace: knative-eventing 9",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: enable-scale-to-zero: \"false\" 1",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving spec: config: autoscaler: scale-to-zero-grace-period: \"30s\" 1",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: ks namespace: knative-serving spec: high-availability: replicas: 2 deployments: - name: webhook resources: - container: webhook requests: cpu: 300m memory: 60Mi limits: cpu: 1000m memory: 1000Mi replicas: 3 labels: example-label: label annotations: example-annotation: annotation nodeSelector: disktype: hdd",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: deployments: - name: eventing-controller resources: - container: eventing-controller requests: cpu: 300m memory: 100Mi limits: cpu: 1000m memory: 250Mi replicas: 3 labels: example-label: label annotations: example-annotation: annotation nodeSelector: disktype: hdd",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving spec: config: features: kubernetes.podspec-volumes-emptydir: enabled",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving spec: config: network: httpProtocol: \"redirected\"",
"spec: config: network: default-external-scheme: \"https\"",
"spec: config: network: default-external-scheme: \"http\"",
"spec: ingress: kourier: service-type: ClusterIP",
"spec: ingress: kourier: service-type: LoadBalancer",
"spec: config: features: \"kubernetes.podspec-persistent-volume-claim\": enabled \"kubernetes.podspec-persistent-volume-write\": enabled",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: example-pv-claim namespace: my-ns spec: accessModes: - ReadWriteMany storageClassName: ocs-storagecluster-cephfs resources: requests: storage: 1Gi",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: namespace: my-ns spec: template: spec: containers: volumeMounts: 1 - mountPath: /data name: mydata readOnly: false volumes: - name: mydata persistentVolumeClaim: 2 claimName: example-pv-claim readOnly: false 3",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving spec: config: features: kubernetes.podspec-init-containers: enabled",
"oc -n knative-serving create secret generic custom-secret --from-file=<secret_name>.crt=<path_to_certificate>",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: controller-custom-certs: name: custom-secret type: Secret",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: channel: enabled: true 1 bootstrapServers: <bootstrap_servers> 2 source: enabled: true 3 broker: enabled: true 4 defaultConfig: bootstrapServers: <bootstrap_servers> 5 numPartitions: <num_partitions> 6 replicationFactor: <replication_factor> 7 sink: enabled: true 8",
"oc get pods -n knative-eventing",
"NAME READY STATUS RESTARTS AGE kafka-broker-dispatcher-7769fbbcbb-xgffn 2/2 Running 0 44s kafka-broker-receiver-5fb56f7656-fhq8d 2/2 Running 0 44s kafka-channel-dispatcher-84fd6cb7f9-k2tjv 2/2 Running 0 44s kafka-channel-receiver-9b7f795d5-c76xr 2/2 Running 0 44s kafka-controller-6f95659bf6-trd6r 2/2 Running 0 44s kafka-source-dispatcher-6bf98bdfff-8bcsn 2/2 Running 0 44s kafka-webhook-eventing-68dc95d54b-825xs 2/2 Running 0 44s",
"oc create secret -n knative-eventing generic <secret_name> --from-literal=protocol=SSL --from-file=ca.crt=caroot.pem --from-file=user.crt=certificate.pem --from-file=user.key=key.pem",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name>",
"oc create secret -n knative-eventing generic <secret_name> --from-literal=protocol=SASL_SSL --from-literal=sasl.mechanism=<sasl_mechanism> --from-file=ca.crt=caroot.pem --from-literal=password=\"SecretPassword\" --from-literal=user=\"my-sasl-user\"",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-literal=tls.enabled=true --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" --from-literal=user=\"my-sasl-user\"",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: broker: enabled: true defaultConfig: authSecretName: <secret_name>",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-file=ca.crt=caroot.pem --from-file=user.crt=certificate.pem --from-file=user.key=key.pem",
"oc edit knativekafka",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: <kafka_auth_secret> authSecretNamespace: <kafka_auth_secret_namespace> bootstrapServers: <bootstrap_servers> enabled: true source: enabled: true",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: tls-user authSecretNamespace: kafka bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9094 enabled: true source: enabled: true",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-file=ca.crt=caroot.pem --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" --from-literal=user=\"my-sasl-user\"",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-literal=tls.enabled=true --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" --from-literal=user=\"my-sasl-user\"",
"oc edit knativekafka",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: <kafka_auth_secret> authSecretNamespace: <kafka_auth_secret_namespace> bootstrapServers: <bootstrap_servers> enabled: true source: enabled: true",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: namespace: knative-eventing name: knative-kafka spec: channel: authSecretName: scram-user authSecretNamespace: kafka bootstrapServers: eventing-kafka-bootstrap.kafka.svc:9093 enabled: true source: enabled: true",
"oc create secret -n <namespace> generic <kafka_auth_secret> --from-file=ca.crt=caroot.pem --from-literal=password=\"SecretPassword\" --from-literal=saslType=\"SCRAM-SHA-512\" \\ 1 --from-literal=user=\"my-sasl-user\"",
"apiVersion: sources.knative.dev/v1beta1 kind: KafkaSource metadata: name: example-source spec: net: sasl: enable: true user: secretKeyRef: name: <kafka_auth_secret> key: user password: secretKeyRef: name: <kafka_auth_secret> key: password saslType: secretKeyRef: name: <kafka_auth_secret> key: saslType tls: enable: true caCert: 1 secretKeyRef: name: <kafka_auth_secret> key: ca.crt",
"apiVersion: v1 kind: ConfigMap metadata: name: <config_map_name> 1 namespace: <namespace> 2 data: default.topic.partitions: <integer> 3 default.topic.replication.factor: <integer> 4 bootstrap.servers: <list_of_servers> 5",
"apiVersion: v1 kind: ConfigMap metadata: name: kafka-broker-config namespace: knative-eventing data: default.topic.partitions: \"10\" default.topic.replication.factor: \"3\" bootstrap.servers: \"my-cluster-kafka-bootstrap.kafka:9092\"",
"oc apply -f <config_map_filename>",
"apiVersion: eventing.knative.dev/v1 kind: Broker metadata: name: <broker_name> 1 namespace: <namespace> 2 annotations: eventing.knative.dev/broker.class: Kafka 3 spec: config: apiVersion: v1 kind: ConfigMap name: <config_map_name> 4 namespace: <namespace> 5",
"oc apply -f <broker_filename>",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: hello 1 namespace: default 2 spec: template: spec: containers: - image: docker.io/openshift/hello-openshift 3 env: - name: RESPONSE 4 value: \"Hello Serverless!\"",
"openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=Example Inc./CN=example.com' -keyout root.key -out root.crt",
"openssl req -nodes -newkey rsa:2048 -subj \"/CN=*.apps.openshift.example.com/O=Example Inc.\" -keyout wildcard.key -out wildcard.csr",
"openssl x509 -req -days 365 -set_serial 0 -CA root.crt -CAkey root.key -in wildcard.csr -out wildcard.crt",
"oc create -n istio-system secret tls wildcard-certs --key=wildcard.key --cert=wildcard.crt",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: 1 - knative-serving - <namespace>",
"oc apply -f <filename>",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-ingress-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - \"*\" tls: mode: SIMPLE credentialName: <wildcard_certs> 1 --- apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 8081 name: http protocol: HTTP 2 hosts: - \"*\" --- apiVersion: v1 kind: Service metadata: name: knative-local-gateway namespace: istio-system labels: experimental.istio.io/disable-gateway-port-translation: \"true\" spec: type: ClusterIP selector: istio: ingressgateway ports: - name: http2 port: 80 targetPort: 8081",
"apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: knative-local-gateway namespace: knative-serving spec: selector: istio: ingressgateway servers: - port: number: 443 name: https protocol: HTTPS hosts: - \"*\" tls: mode: SIMPLE credentialName: <wildcard_certs>",
"oc apply -f <filename>",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: ingress: istio: enabled: true 1 deployments: 2 - name: activator annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\" - name: autoscaler annotations: \"sidecar.istio.io/inject\": \"true\" \"sidecar.istio.io/rewriteAppHTTPProbers\": \"true\"",
"oc apply -f <filename>",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> namespace: <namespace> 1 annotations: serving.knative.openshift.io/enablePassthrough: \"true\" 2 spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 3 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" spec: containers: - image: <image_url>",
"oc apply -f <filename>",
"curl --cacert root.crt <service_url>",
"curl --cacert root.crt https://hello-default.apps.openshift.example.com",
"Hello Openshift!",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving spec: config: observability: metrics.backend-destination: \"prometheus\"",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring-ns namespace: knative-serving spec: ingress: - from: - namespaceSelector: matchLabels: name: \"openshift-monitoring\" podSelector: {}",
"spec: proxy: networking: trafficControl: inbound: excludedPorts: - 8444",
"apiVersion: maistra.io/v1 kind: ServiceMeshMemberRoll metadata: name: default namespace: istio-system spec: members: - <namespace> 1",
"oc apply -f <filename>",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-serving-system-namespace namespace: <namespace> 1 spec: ingress: - from: - namespaceSelector: matchLabels: knative.openshift.io/part-of: \"openshift-serverless\" podSelector: {} policyTypes: - Ingress",
"oc label namespace knative-serving knative.openshift.io/part-of=openshift-serverless",
"oc label namespace knative-serving-ingress knative.openshift.io/part-of=openshift-serverless",
"oc apply -f <filename>",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving annotations: serverless.openshift.io/enable-secret-informer-filtering: \"true\" 1 spec: ingress: istio: enabled: true deployments: - annotations: sidecar.istio.io/inject: \"true\" sidecar.istio.io/rewriteAppHTTPProbers: \"true\" name: activator - annotations: sidecar.istio.io/inject: \"true\" sidecar.istio.io/rewriteAppHTTPProbers: \"true\" name: autoscaler",
"apiVersion: metering.openshift.io/v1 kind: ReportDataSource metadata: name: knative-service-cpu-usage spec: prometheusMetricsImporter: query: > sum by(namespace, label_serving_knative_dev_service, label_serving_knative_dev_revision) ( label_replace(rate(container_cpu_usage_seconds_total{container!=\"POD\",container!=\"\",pod!=\"\"}[1m]), \"pod\", \"USD1\", \"pod\", \"(.*)\") * on(pod, namespace) group_left(label_serving_knative_dev_service, label_serving_knative_dev_revision) kube_pod_labels{label_serving_knative_dev_service!=\"\"} )",
"apiVersion: metering.openshift.io/v1 kind: ReportDataSource metadata: name: knative-service-memory-usage spec: prometheusMetricsImporter: query: > sum by(namespace, label_serving_knative_dev_service, label_serving_knative_dev_revision) ( label_replace(container_memory_usage_bytes{container!=\"POD\", container!=\"\",pod!=\"\"}, \"pod\", \"USD1\", \"pod\", \"(.*)\") * on(pod, namespace) group_left(label_serving_knative_dev_service, label_serving_knative_dev_revision) kube_pod_labels{label_serving_knative_dev_service!=\"\"} )",
"oc apply -f <datasource_name>.yaml",
"oc apply -f knative-service-memory-usage.yaml",
"apiVersion: metering.openshift.io/v1 kind: ReportQuery metadata: name: knative-service-cpu-usage spec: inputs: - name: ReportingStart type: time - name: ReportingEnd type: time - default: knative-service-cpu-usage name: KnativeServiceCpuUsageDataSource type: ReportDataSource columns: - name: period_start type: timestamp unit: date - name: period_end type: timestamp unit: date - name: namespace type: varchar unit: kubernetes_namespace - name: service type: varchar - name: data_start type: timestamp unit: date - name: data_end type: timestamp unit: date - name: service_cpu_seconds type: double unit: cpu_core_seconds query: | SELECT timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart| prestoTimestamp |}' AS period_start, timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' AS period_end, labels['namespace'] as project, labels['label_serving_knative_dev_service'] as service, min(\"timestamp\") as data_start, max(\"timestamp\") as data_end, sum(amount * \"timeprecision\") AS service_cpu_seconds FROM {| dataSourceTableName .Report.Inputs.KnativeServiceCpuUsageDataSource |} WHERE \"timestamp\" >= timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart | prestoTimestamp |}' AND \"timestamp\" < timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' GROUP BY labels['namespace'],labels['label_serving_knative_dev_service']",
"apiVersion: metering.openshift.io/v1 kind: ReportQuery metadata: name: knative-service-memory-usage spec: inputs: - name: ReportingStart type: time - name: ReportingEnd type: time - default: knative-service-memory-usage name: KnativeServiceMemoryUsageDataSource type: ReportDataSource columns: - name: period_start type: timestamp unit: date - name: period_end type: timestamp unit: date - name: namespace type: varchar unit: kubernetes_namespace - name: service type: varchar - name: data_start type: timestamp unit: date - name: data_end type: timestamp unit: date - name: service_usage_memory_byte_seconds type: double unit: byte_seconds query: | SELECT timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart| prestoTimestamp |}' AS period_start, timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' AS period_end, labels['namespace'] as project, labels['label_serving_knative_dev_service'] as service, min(\"timestamp\") as data_start, max(\"timestamp\") as data_end, sum(amount * \"timeprecision\") AS service_usage_memory_byte_seconds FROM {| dataSourceTableName .Report.Inputs.KnativeServiceMemoryUsageDataSource |} WHERE \"timestamp\" >= timestamp '{| default .Report.ReportingStart .Report.Inputs.ReportingStart | prestoTimestamp |}' AND \"timestamp\" < timestamp '{| default .Report.ReportingEnd .Report.Inputs.ReportingEnd | prestoTimestamp |}' GROUP BY labels['namespace'],labels['label_serving_knative_dev_service']",
"oc apply -f <query-name>.yaml",
"oc apply -f knative-service-memory-usage.yaml",
"apiVersion: metering.openshift.io/v1 kind: Report metadata: name: knative-service-cpu-usage spec: reportingStart: '2019-06-01T00:00:00Z' 1 reportingEnd: '2019-06-30T23:59:59Z' 2 query: knative-service-cpu-usage 3 runImmediately: true",
"oc apply -f <report-name>.yml",
"oc get report",
"NAME QUERY SCHEDULE RUNNING FAILED LAST REPORT TIME AGE knative-service-cpu-usage knative-service-cpu-usage Finished 2019-06-30T23:59:59Z 10h",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: high-availability: replicas: 3",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: high-availability: replicas: 3",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: high-availability: replicas: 3",
"spec: logStore: elasticsearch: resources: limits: cpu: memory: 16Gi requests: cpu: 500m memory: 16Gi type: \"elasticsearch\" collection: logs: fluentd: resources: limits: cpu: memory: requests: cpu: memory: type: \"fluentd\" visualization: kibana: resources: limits: cpu: memory: requests: cpu: memory: type: kibana",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 resources: limits: memory: 32Gi requests: cpu: 3 memory: 32Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: limits: memory: 1Gi requests: cpu: 500m memory: 1Gi replicas: 1 collection: logs: type: \"fluentd\" fluentd: resources: limits: memory: 1Gi requests: cpu: 200m memory: 1Gi",
"oc -n openshift-logging get route kibana",
"oc -n openshift-logging get route kibana",
"kubernetes.namespace_name:default AND kubernetes.labels.serving_knative_dev\\/service:{service_name}",
"package main import ( \"fmt\" \"log\" \"net/http\" \"os\" \"github.com/prometheus/client_golang/prometheus\" 1 \"github.com/prometheus/client_golang/prometheus/promauto\" \"github.com/prometheus/client_golang/prometheus/promhttp\" ) var ( opsProcessed = promauto.NewCounter(prometheus.CounterOpts{ 2 Name: \"myapp_processed_ops_total\", Help: \"The total number of processed events\", }) ) func handler(w http.ResponseWriter, r *http.Request) { log.Print(\"helloworld: received a request\") target := os.Getenv(\"TARGET\") if target == \"\" { target = \"World\" } fmt.Fprintf(w, \"Hello %s!\\n\", target) opsProcessed.Inc() 3 } func main() { log.Print(\"helloworld: starting server...\") port := os.Getenv(\"PORT\") if port == \"\" { port = \"8080\" } http.HandleFunc(\"/\", handler) // Separate server for metrics requests go func() { 4 mux := http.NewServeMux() server := &http.Server{ Addr: fmt.Sprintf(\":%s\", \"9095\"), Handler: mux, } mux.Handle(\"/metrics\", promhttp.Handler()) log.Printf(\"prometheus: listening on port %s\", 9095) log.Fatal(server.ListenAndServe()) }() // Use same port as normal requests for metrics //http.Handle(\"/metrics\", promhttp.Handler()) 5 log.Printf(\"helloworld: listening on port %s\", port) log.Fatal(http.ListenAndServe(fmt.Sprintf(\":%s\", port), nil)) }",
"apiVersion: serving.knative.dev/v1 1 kind: Service metadata: name: helloworld-go spec: template: metadata: labels: app: helloworld-go annotations: spec: containers: - image: docker.io/skonto/helloworld-go:metrics resources: requests: cpu: \"200m\" env: - name: TARGET value: \"Go Sample v1\" --- apiVersion: monitoring.coreos.com/v1 2 kind: ServiceMonitor metadata: labels: name: helloworld-go-sm spec: endpoints: - port: queue-proxy-metrics scheme: http - port: app-metrics scheme: http namespaceSelector: {} selector: matchLabels: name: helloworld-go-sm --- apiVersion: v1 3 kind: Service metadata: labels: name: helloworld-go-sm name: helloworld-go-sm spec: ports: - name: queue-proxy-metrics port: 9091 protocol: TCP targetPort: 9091 - name: app-metrics port: 9095 protocol: TCP targetPort: 9095 selector: serving.knative.dev/service: helloworld-go type: ClusterIP",
"hello_route=USD(oc get ksvc helloworld-go -n ns1 -o jsonpath='{.status.url}') && curl USDhello_route",
"Hello Go Sample v1!",
"revision_app_request_count{namespace=\"ns1\", job=\"helloworld-go-sm\"}",
"myapp_processed_ops_total{namespace=\"ns1\", job=\"helloworld-go-sm\"}",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: <namespace> spec: mode: deployment config: | receivers: zipkin: processors: exporters: jaeger: endpoint: jaeger-all-in-one-inmemory-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" logging: service: pipelines: traces: receivers: [zipkin] processors: [] exporters: [jaeger, logging]",
"oc get pods -n <namespace>",
"NAME READY STATUS RESTARTS AGE cluster-collector-collector-85c766b5c-b5g99 1/1 Running 0 5m56s jaeger-all-in-one-inmemory-ccbc9df4b-ndkl5 2/2 Running 0 15m",
"oc get svc -n <namespace> | grep headless",
"cluster-collector-collector-headless ClusterIP None <none> 9411/TCP 7m28s jaeger-all-in-one-inmemory-collector-headless ClusterIP None <none> 9411/TCP,14250/TCP,14267/TCP,14268/TCP 16m",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: tracing: backend: \"zipkin\" zipkin-endpoint: \"http://cluster-collector-collector-headless.tracing-system.svc:9411/api/v2/spans\" debug: \"true\" sample-rate: \"0.1\" 1",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: helloworld-go spec: template: metadata: labels: app: helloworld-go annotations: autoscaling.knative.dev/minScale: \"1\" autoscaling.knative.dev/target: \"1\" spec: containers: - image: quay.io/openshift-knative/helloworld:v1.2 imagePullPolicy: Always resources: requests: cpu: \"200m\" env: - name: TARGET value: \"Go Sample v1\"",
"curl https://helloworld-go.example.com",
"oc get route jaeger-all-in-one-inmemory -o jsonpath='{.spec.host}' -n <namespace>",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger namespace: default",
"apiVersion: operator.knative.dev/v1alpha1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: tracing: sample-rate: \"0.1\" 1 backend: zipkin 2 zipkin-endpoint: \"http://jaeger-collector.default.svc.cluster.local:9411/api/v2/spans\" 3 debug: \"false\" 4",
"oc get route jaeger -n default",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD jaeger jaeger-default.apps.example.com jaeger-query <all> reencrypt None",
"oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s",
"oc adm must-gather --image=registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8:<image_version_tag>",
"oc adm must-gather --image=registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8:1.14.0",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 1 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" 2",
"oc apply -f <filename>",
"apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: jwt-example namespace: <namespace> spec: jwtRules: - issuer: [email protected] jwksUri: https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/jwks.json",
"oc apply -f <filename>",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: allowlist-by-paths namespace: <namespace> spec: action: ALLOW rules: - to: - operation: paths: - /metrics 1 - /healthz 2",
"oc apply -f <filename>",
"apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: require-jwt namespace: <namespace> spec: action: ALLOW rules: - from: - source: requestPrincipals: [\"[email protected]/[email protected]\"]",
"oc apply -f <filename>",
"curl http://hello-example-1-default.apps.mycluster.example.com/",
"RBAC: access denied",
"TOKEN=USD(curl https://raw.githubusercontent.com/istio/istio/release-1.8/security/tools/jwt/samples/demo.jwt -s) && echo \"USDTOKEN\" | cut -d '.' -f2 - | base64 --decode -",
"curl -H \"Authorization: Bearer USDTOKEN\" http://hello-example-1-default.apps.example.com",
"Hello OpenShift!",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: <service_name> spec: template: metadata: annotations: sidecar.istio.io/inject: \"true\" 1 sidecar.istio.io/rewriteAppHTTPProbers: \"true\" 2",
"oc apply -f <filename>",
"apiVersion: authentication.istio.io/v1alpha1 kind: Policy metadata: name: default namespace: <namespace> spec: origins: - jwt: issuer: [email protected] jwksUri: \"https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/jwks.json\" triggerRules: - excludedPaths: - prefix: /metrics 1 - prefix: /healthz 2 principalBinding: USE_ORIGIN",
"oc apply -f <filename>",
"curl http://hello-example-default.apps.mycluster.example.com/",
"Origin authentication failed.",
"TOKEN=USD(curl https://raw.githubusercontent.com/istio/istio/release-1.6/security/tools/jwt/samples/demo.jwt -s) && echo \"USDTOKEN\" | cut -d '.' -f2 - | base64 --decode -",
"curl http://hello-example-default.apps.mycluster.example.com/ -H \"Authorization: Bearer USDTOKEN\"",
"Hello OpenShift!",
"apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain_name> 1 namespace: <namespace> 2 spec: ref: name: <target_name> 3 kind: <target_type> 4 apiVersion: serving.knative.dev/v1",
"apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: example.com namespace: default spec: ref: name: example-service kind: Service apiVersion: serving.knative.dev/v1",
"apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: example.com namespace: default spec: ref: name: example-route kind: Route apiVersion: serving.knative.dev/v1",
"oc apply -f <filename>",
"kn domain create <domain_mapping_name> --ref <target_name>",
"kn domain create example.com --ref example-service",
"kn domain create <domain_mapping_name> --ref <ksvc:service_name:service_namespace>",
"kn domain create example.com --ref ksvc:example-service:example-namespace",
"kn domain create <domain_mapping_name> --ref <kroute:route_name>",
"kn domain create example.com --ref kroute:example-route",
"oc create secret tls <tls_secret_name> --cert=<path_to_certificate_file> --key=<path_to_key_file>",
"\"networking.internal.knative.dev/certificate-uid\": \"<value>\"",
"apiVersion: serving.knative.dev/v1alpha1 kind: DomainMapping metadata: name: <domain_name> namespace: <namespace> spec: ref: name: <service_name> kind: Service apiVersion: serving.knative.dev/v1 TLS block specifies the secret to be used tls: secretName: <tls_secret_name>",
"oc get domainmapping <domain_name>",
"NAME URL READY REASON example.com https://example.com True",
"curl https://<domain_name>",
"systemctl start --user podman.socket",
"export DOCKER_HOST=\"unix://USD{XDG_RUNTIME_DIR}/podman/podman.sock\"",
"kn func build -v",
"kn func create -r <repository> -l <runtime> -t <template> <path>",
"kn func create -l typescript -t events examplefunc",
"Project path: /home/user/demo/examplefunc Function name: examplefunc Runtime: typescript Template: events Writing events to /home/user/demo/examplefunc",
"kn func create -r https://github.com/boson-project/templates/ -l node -t hello-world examplefunc",
"Project path: /home/user/demo/examplefunc Function name: examplefunc Runtime: node Template: hello-world Writing events to /home/user/demo/examplefunc",
"kn func run",
"kn func run --path=<directory_path>",
"kn func run --build",
"kn func run --build=false",
"kn func help run",
"kn func build",
"kn func build --builder pack",
"kn func build",
"Building function image Function image has been built, image: registry.redhat.io/example/example-function:latest",
"kn func build --registry quay.io/username",
"Building function image Function image has been built, image: quay.io/username/example-function:latest",
"kn func build --push",
"kn func help build",
"oc apply -f https://raw.githubusercontent.com/openshift-knative/kn-plugin-func/serverless-1.22.0/pipelines/resources/tekton/task/func-buildpacks/0.1/func-buildpacks.yaml",
"oc apply -f https://raw.githubusercontent.com/openshift-knative/kn-plugin-func/serverless-1.22.0/pipelines/resources/tekton/task/func-deploy/0.1/func-deploy.yaml",
"kn func create <function_name> -l <runtime>",
"build: git 1 git: url: <git_repository_url> 2 revision: main 3 contextDir: <directory_path> 4",
"kn func deploy",
"🕕 Creating Pipeline resources Please provide credentials for image registry used by Pipeline. ? Server: https://index.docker.io/v1/ ? Username: my-repo ? Password: ******** Function deployed at URL: http://test-function.default.svc.cluster.local",
"kn func deploy [-n <namespace> -p <path> -i <image>]",
"Function deployed at: http://func.example.com",
"kn func invoke",
". ├── func.yaml 1 ├── index.js 2 ├── package.json 3 ├── README.md └── test 4 ├── integration.js └── unit.js",
"npm install --save opossum",
"function handle(context, data)",
"// Expects to receive a CloudEvent with customer data function handle(context, customer) { // process the customer const processed = handle(customer); return context.cloudEventResponse(customer) .source('/handle') .type('fn.process.customer') .response(); }",
"{ \"customerId\": \"0123456\", \"productId\": \"6543210\" }",
"function handle(context, data)",
"function handle(context, customer) { // process customer and return a new CloudEvent return new CloudEvent({ source: 'customer.processor', type: 'customer.processed' }) }",
"function handle(context, customer) { // process customer and return custom headers // the response will be '204 No content' return { headers: { customerid: customer.id } }; }",
"function handle(context, customer) { // process customer if (customer.restricted) { return { statusCode: 451 } } }",
"function handle(context, customer) { // process customer if (customer.restricted) { const err = new Error('Unavailable for legal reasons'); err.statusCode = 451; throw err; } }",
"npm test",
". ├── func.yaml 1 ├── package.json 2 ├── package-lock.json ├── README.md ├── src │ └── index.ts 3 ├── test 4 │ ├── integration.ts │ └── unit.ts └── tsconfig.json",
"npm install --save opossum",
"function handle(context:Context): string",
"// Expects to receive a CloudEvent with customer data export function handle(context: Context, cloudevent?: CloudEvent): CloudEvent { // process the customer const customer = cloudevent.data; const processed = processCustomer(customer); return context.cloudEventResponse(customer) .source('/customer/process') .type('customer.processed') .response(); }",
"// Invokable is the expeted Function signature for user functions export interface Invokable { (context: Context, cloudevent?: CloudEvent): any } // Logger can be used for structural logging to the console export interface Logger { debug: (msg: any) => void, info: (msg: any) => void, warn: (msg: any) => void, error: (msg: any) => void, fatal: (msg: any) => void, trace: (msg: any) => void, } // Context represents the function invocation context, and provides // access to the event itself as well as raw HTTP objects. export interface Context { log: Logger; req: IncomingMessage; query?: Record<string, any>; body?: Record<string, any>|string; method: string; headers: IncomingHttpHeaders; httpVersion: string; httpVersionMajor: number; httpVersionMinor: number; cloudevent: CloudEvent; cloudEventResponse(data: string|object): CloudEventResponse; } // CloudEventResponse is a convenience class used to create // CloudEvents on function returns export interface CloudEventResponse { id(id: string): CloudEventResponse; source(source: string): CloudEventResponse; type(type: string): CloudEventResponse; version(version: string): CloudEventResponse; response(): CloudEvent; }",
"{ \"customerId\": \"0123456\", \"productId\": \"6543210\" }",
"function handle(context: Context, cloudevent?: CloudEvent): CloudEvent",
"export const handle: Invokable = function ( context: Context, cloudevent?: CloudEvent ): Message { // process customer and return a new CloudEvent const customer = cloudevent.data; return HTTP.binary( new CloudEvent({ source: 'customer.processor', type: 'customer.processed' }) ); };",
"export function handle(context: Context, cloudevent?: CloudEvent): Record<string, any> { // process customer and return custom headers const customer = cloudevent.data as Record<string, any>; return { headers: { 'customer-id': customer.id } }; }",
"export function handle(context: Context, cloudevent?: CloudEvent): Record<string, any> { // process customer const customer = cloudevent.data as Record<string, any>; if (customer.restricted) { return { statusCode: 451 } } // business logic, then return { statusCode: 240 } }",
"export function handle(context: Context, cloudevent?: CloudEvent): Record<string, string> { // process customer const customer = cloudevent.data as Record<string, any>; if (customer.restricted) { const err = new Error('Unavailable for legal reasons'); err.statusCode = 451; throw err; } }",
"npm install",
"npm test",
"fn ├── README.md ├── func.yaml 1 ├── go.mod 2 ├── go.sum ├── handle.go └── handle_test.go",
"go get gopkg.in/[email protected]",
"func Handle(ctx context.Context, res http.ResponseWriter, req *http.Request) { // Read body body, err := ioutil.ReadAll(req.Body) defer req.Body.Close() if err != nil { http.Error(res, err.Error(), 500) return } // Process body and function logic // }",
"Handle() Handle() error Handle(context.Context) Handle(context.Context) error Handle(cloudevents.Event) Handle(cloudevents.Event) error Handle(context.Context, cloudevents.Event) Handle(context.Context, cloudevents.Event) error Handle(cloudevents.Event) *cloudevents.Event Handle(cloudevents.Event) (*cloudevents.Event, error) Handle(context.Context, cloudevents.Event) *cloudevents.Event Handle(context.Context, cloudevents.Event) (*cloudevents.Event, error)",
"{ \"customerId\": \"0123456\", \"productId\": \"6543210\" }",
"type Purchase struct { CustomerId string `json:\"customerId\"` ProductId string `json:\"productId\"` } func Handle(ctx context.Context, event cloudevents.Event) (err error) { purchase := &Purchase{} if err = event.DataAs(purchase); err != nil { fmt.Fprintf(os.Stderr, \"failed to parse incoming CloudEvent %s\\n\", err) return } // }",
"func Handle(ctx context.Context, event cloudevents.Event) { bytes, err := json.Marshal(event) // }",
"func Handle(ctx context.Context, res http.ResponseWriter, req *http.Request) { // Set response res.Header().Add(\"Content-Type\", \"text/plain\") res.Header().Add(\"Content-Length\", \"3\") res.WriteHeader(200) _, err := fmt.Fprintf(res, \"OK\\n\") if err != nil { fmt.Fprintf(os.Stderr, \"error or response write: %v\", err) } }",
"func Handle(ctx context.Context, event cloudevents.Event) (resp *cloudevents.Event, err error) { // response := cloudevents.NewEvent() response.SetID(\"example-uuid-32943bac6fea\") response.SetSource(\"purchase/getter\") response.SetType(\"purchase\") // Set the data from Purchase type response.SetData(cloudevents.ApplicationJSON, Purchase{ CustomerId: custId, ProductId: prodId, }) // OR set the data directly from map response.SetData(cloudevents.ApplicationJSON, map[string]string{\"customerId\": custId, \"productId\": prodId}) // Validate the response resp = &response if err = resp.Validate(); err != nil { fmt.Printf(\"invalid event created. %v\", err) } return }",
"go test",
"fn ├── func.py 1 ├── func.yaml 2 ├── requirements.txt 3 └── test_func.py 4",
"def main(context: Context): \"\"\" The context parameter contains the Flask request object and any CloudEvent received with the request. \"\"\" print(f\"Method: {context.request.method}\") print(f\"Event data {context.cloud_event.data}\") # ... business logic here",
"def main(context: Context): body = { \"message\": \"Howdy!\" } headers = { \"content-type\": \"application/json\" } return body, 200, headers",
"@event(\"event_source\"=\"/my/function\", \"event_type\"=\"my.type\") def main(context): # business logic here data = do_something() # more data processing return data",
"pip install -r requirements.txt",
"python3 test_func.py",
". ├── func.yaml 1 ├── mvnw ├── mvnw.cmd ├── pom.xml 2 ├── README.md └── src ├── main │ ├── java │ │ └── functions │ │ ├── Function.java 3 │ │ ├── Input.java │ │ └── Output.java │ └── resources │ └── application.properties └── test └── java └── functions 4 ├── FunctionTest.java └── NativeFunctionIT.java",
"<dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> <scope>test</scope> </dependency> <dependency> <groupId>org.assertj</groupId> <artifactId>assertj-core</artifactId> <version>3.8.0</version> <scope>test</scope> </dependency> </dependencies>",
"public class Functions { @Funq public void processPurchase(Purchase purchase) { // process the purchase } }",
"public class Purchase { private long customerId; private long productId; // getters and setters }",
"import io.quarkus.funqy.Funq; import io.quarkus.funqy.knative.events.CloudEvent; public class Input { private String message; // getters and setters } public class Output { private String message; // getters and setters } public class Functions { @Funq public Output withBeans(Input in) { // function body } @Funq public CloudEvent<Output> withCloudEvent(CloudEvent<Input> in) { // function body } @Funq public void withBinary(byte[] in) { // function body } }",
"curl \"http://localhost:8080/withBeans\" -X POST -H \"Content-Type: application/json\" -d '{\"message\": \"Hello there.\"}'",
"curl \"http://localhost:8080/withBeans?message=Hello%20there.\" -X GET",
"curl \"http://localhost:8080/\" -X POST -H \"Content-Type: application/json\" -H \"Ce-SpecVersion: 1.0\" -H \"Ce-Type: withBeans\" -H \"Ce-Source: cURL\" -H \"Ce-Id: 42\" -d '{\"message\": \"Hello there.\"}'",
"curl http://localhost:8080/ -H \"Content-Type: application/cloudevents+json\" -d '{ \"data\": {\"message\":\"Hello there.\"}, \"datacontenttype\": \"application/json\", \"id\": \"42\", \"source\": \"curl\", \"type\": \"withBeans\", \"specversion\": \"1.0\"}'",
"curl \"http://localhost:8080/\" -X POST -H \"Content-Type: application/octet-stream\" -H \"Ce-SpecVersion: 1.0\" -H \"Ce-Type: withBinary\" -H \"Ce-Source: cURL\" -H \"Ce-Id: 42\" --data-binary '@img.jpg'",
"curl http://localhost:8080/ -H \"Content-Type: application/cloudevents+json\" -d \"{ \\\"data_base64\\\": \\\"USD(base64 --wrap=0 img.jpg)\\\", \\\"datacontenttype\\\": \\\"application/octet-stream\\\", \\\"id\\\": \\\"42\\\", \\\"source\\\": \\\"curl\\\", \\\"type\\\": \\\"withBinary\\\", \\\"specversion\\\": \\\"1.0\\\"}\"",
"public class Functions { private boolean _processPurchase(Purchase purchase) { // do stuff } public CloudEvent<Void> processPurchase(CloudEvent<Purchase> purchaseEvent) { System.out.println(\"subject is: \" + purchaseEvent.subject()); if (!_processPurchase(purchaseEvent.data())) { return CloudEventBuilder.create() .type(\"purchase.error\") .build(); } return CloudEventBuilder.create() .type(\"purchase.success\") .build(); } }",
"public class Functions { @Funq public List<Purchase> getPurchasesByName(String name) { // logic to retrieve purchases } }",
"public class Functions { public List<Integer> getIds(); public Purchase[] getPurchasesByName(String name); public String getNameById(int id); public Map<String,Integer> getNameIdMapping(); public void processImage(byte[] img); }",
"./mvnw test",
"buildEnvs: - name: EXAMPLE1 value: one",
"buildEnvs: - name: EXAMPLE1 value: '{{ env:LOCAL_ENV_VAR }}'",
"name: test namespace: \"\" runtime: go envs: - name: EXAMPLE1 1 value: value - name: EXAMPLE2 2 value: '{{ env:LOCAL_ENV_VALUE }}' - name: EXAMPLE3 3 value: '{{ secret:mysecret:key }}' - name: EXAMPLE4 4 value: '{{ configMap:myconfigmap:key }}' - value: '{{ secret:mysecret2 }}' 5 - value: '{{ configMap:myconfigmap2 }}' 6",
"name: test namespace: \"\" runtime: go volumes: - secret: mysecret 1 path: /workspace/secret - configMap: myconfigmap 2 path: /workspace/configmap",
"name: test namespace: \"\" runtime: go options: scale: min: 0 max: 10 metric: concurrency target: 75 utilization: 75 resources: requests: cpu: 100m memory: 128Mi limits: cpu: 1000m memory: 256Mi concurrency: 100",
"labels: - key: role value: backend",
"labels: - key: author value: '{{ env:USER }}'",
"{{ env:ENV_VAR }}",
"name: test namespace: \"\" runtime: go envs: - name: MY_API_KEY value: '{{ env:API_KEY }}'",
"kn func config",
"kn func config ? What do you want to configure? Volumes ? What operation do you want to perform? List Configured Volumes mounts: - Secret \"mysecret\" mounted at path: \"/workspace/secret\" - Secret \"mysecret2\" mounted at path: \"/workspace/secret2\"",
"kn func config ├─> Environment variables │ ├─> Add │ │ ├─> ConfigMap: Add all key-value pairs from a config map │ │ ├─> ConfigMap: Add value from a key in a config map │ │ ├─> Secret: Add all key-value pairs from a secret │ │ └─> Secret: Add value from a key in a secret │ ├─> List: List all configured environment variables │ └─> Remove: Remove a configured environment variable └─> Volumes ├─> Add │ ├─> ConfigMap: Mount a config map as a volume │ └─> Secret: Mount a secret as a volume ├─> List: List all configured volumes └─> Remove: Remove a configured volume",
"kn func deploy -p test",
"kn func config envs [-p <function-project-path>]",
"kn func config envs add [-p <function-project-path>]",
"kn func config envs remove [-p <function-project-path>]",
"kn func config volumes [-p <function-project-path>]",
"kn func config volumes add [-p <function-project-path>]",
"kn func config volumes remove [-p <function-project-path>]",
"name: test namespace: \"\" runtime: go volumes: - secret: mysecret path: /workspace/secret",
"name: test namespace: \"\" runtime: go volumes: - configMap: addresses path: /workspace/secret-addresses",
"name: test namespace: \"\" runtime: go volumes: - configMap: myconfigmap path: /workspace/configmap",
"name: test namespace: \"\" runtime: go volumes: - configMap: addresses path: /workspace/configmap-addresses",
"name: test namespace: \"\" runtime: go envs: - name: EXAMPLE value: '{{ secret:mysecret:key }}'",
"name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailssecret:userid }}'",
"name: test namespace: \"\" runtime: go envs: - name: EXAMPLE value: '{{ configMap:myconfigmap:key }}'",
"name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailsmap:userid }}'",
"name: test namespace: \"\" runtime: go envs: - value: '{{ secret:mysecret }}' 1",
"name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailssecret }}'",
"name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:myconfigmap }}' 1",
"name: test namespace: \"\" runtime: go envs: - value: '{{ configMap:userdetailsmap }}'",
"name: test namespace: \"\" runtime: go annotations: <annotation_name>: \"<annotation_value>\" 1",
"name: test namespace: \"\" runtime: go annotations: author: \"[email protected]\"",
"function handle(context) { context.log.info(\"Processing customer\"); }",
"kn func invoke --target 'http://example.function.com'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"Processing customer\"}",
"function handle(context) { // Log the 'name' query parameter context.log.info(context.query.name); // Query parameters are also attached to the context context.log.info(context.name); }",
"kn func invoke --target 'http://example.com?name=tiger'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"tiger\"}",
"function handle(context) { // log the incoming request body's 'hello' parameter context.log.info(context.body.hello); }",
"kn func invoke -d '{\"Hello\": \"world\"}'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"world\"}",
"function handle(context) { context.log.info(context.headers[\"custom-header\"]); }",
"kn func invoke --target 'http://example.function.com'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"some-value\"}",
"export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.body as Record<string, string>).hello); } else { context.log.info('No data received'); } return 'OK'; }",
"kn func invoke --target 'http://example.function.com'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"Processing customer\"}",
"export function handle(context: Context): string { // log the 'name' query parameter if (context.query) { context.log.info((context.query as Record<string, string>).name); } else { context.log.info('No data received'); } return 'OK'; }",
"kn func invoke --target 'http://example.function.com' --data '{\"name\": \"tiger\"}'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"tiger\"} {\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"tiger\"}",
"export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.body as Record<string, string>).hello); } else { context.log.info('No data received'); } return 'OK'; }",
"kn func invoke --target 'http://example.function.com' --data '{\"hello\": \"world\"}'",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"world\"}",
"export function handle(context: Context): string { // log the incoming request body's 'hello' parameter if (context.body) { context.log.info((context.headers as Record<string, string>)['custom-header']); } else { context.log.info('No data received'); } return 'OK'; }",
"curl -H'x-custom-header: some-value'' http://example.function.com",
"{\"level\":30,\"time\":1604511655265,\"pid\":3430203,\"hostname\":\"localhost.localdomain\",\"reqId\":1,\"msg\":\"some-value\"}",
"apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service spec: labels: app: <revision_name>",
"kn service create hello --image <service-image> --limit nvidia.com/gpu=1",
"kn service update hello --limit nvidia.com/gpu=3"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/serverless/index |
4.19. bind-dyndb-ldap | 4.19. bind-dyndb-ldap 4.19.1. RHSA-2012:0683 - Important: bind-dyndb-ldap security update An updated bind-dyndb-ldap package that fixes one security issue is now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. The dynamic LDAP back end is a plug-in for BIND that provides back-end capabilities to LDAP databases. It features support for dynamic updates and internal caching that help to reduce the load on LDAP servers. Security Fix CVE-2012-2134 A flaw was found in the way bind-dyndb-ldap handled LDAP query errors. If a remote attacker were able to send DNS queries to a named server that is configured to use bind-dyndb-ldap, they could trigger such an error with a DNS query leveraging bind-dyndb-ldap's insufficient escaping of the LDAP base DN (distinguished name). This would result in an invalid LDAP query that named would retry in a loop, preventing it from responding to other DNS queries. With this update, bind-dyndb-ldap only attempts to retry one time when an LDAP search returns an unexpected error. Red Hat would like to thank Ronald van Zantvoort for reporting this issue. All bind-dyndb-ldap users should upgrade to this updated package, which contains a backported patch to correct this issue. For the update to take effect, the named service must be restarted. 4.19.2. RHBA-2011:1715 - bind-dyndb-ldap bug fix update An updated bind-dyndb-ldap package that fixes several bugs is now available for Red Hat Enterprise Linux 6. The dynamic LDAP (Lightweight Directory Access Protocol) back end is a plug-in for BIND that provides LDAP database back-end capabilities. It features support for dynamic updates and internal caching to lift the load off the LDAP server. Bug Fixes BZ# 742368 Previously, the bind-dyndb-ldap plug-in could fail to honor the selected authentication method because it did not call the ldap_bind() function on reconnection. Consequently, the plug-in connected to the LDAP server anonymously. With this update, the ldap_bind() function is executed on reconnection and the plug-in uses the correct authentication method in the described scenario. BZ# 707255 The bind-dyndb-ldap plug-in failed to load new zones from the LDAP server at runtime. This update adds the zone_refresh parameter to the plug-in, which controls how often the zone check is performed. BZ# 745045 The bind-dyndb-ldap plug-in could fail to connect to the LDAP server. This happened when the LDAP server was using localhost and the FreeIPA installation was using a name different from the machine hostname. This update adds to the plug-in the ldap_hostname option, which can be used to set the correct LDAP server hostname. BZ# 727856 The "named" process could have remained unresponsive due to a race condition in the bind-dyndb-ldap plug-in. With this update, the race condition has been resolved and the problem no longer occurs. All users of bind-dyndb-ldap are advised to upgrade to this updated package, which fixes these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/bind-dyndb-ldap
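The errata above mention the zone_refresh parameter and the ldap_hostname option without showing where they are set. The lines below are only an illustrative sketch of how such options are typically passed to the plug-in as arg statements inside a named.conf dynamic-db block; the database name, LDAP URI, and base DN are placeholder assumptions, and only zone_refresh and ldap_hostname come from the text above, so check the bind-dyndb-ldap README shipped with your package version for the exact option set and syntax.

dynamic-db "example" {
    library "ldap.so";                      # bind-dyndb-ldap back end (assumed library name)
    arg "uri ldap://ldap.example.com";      # placeholder LDAP server URI
    arg "base cn=dns,dc=example,dc=com";    # placeholder base DN holding the DNS zones
    arg "zone_refresh 30";                  # how often, in seconds, to check LDAP for new zones (BZ# 707255)
    arg "ldap_hostname ipa.example.com";    # explicitly set the LDAP server hostname (BZ# 745045)
};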
function::ntohl | function::ntohl Name function::ntohl - Convert 32-bit long from network to host order Synopsis Arguments x Value to convert | [
"ntohl:long(x:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ntohl |
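For quick reference, ntohl can be exercised from a SystemTap one-liner; the command below is an illustrative sketch and not part of the reference entry, and the sample value 0x0100007f is arbitrary. Because the function converts a 32-bit value from network (big-endian) byte order to host byte order, the result differs from the input only on little-endian hosts.

stap -e 'probe begin { printf("0x%x\n", ntohl(0x0100007f)); exit() }'

On a little-endian machine this prints 0x7f000001; on a big-endian machine the value is returned unchanged.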
Chapter 1. Quay.io overview | Chapter 1. Quay.io overview Quay.io is a registry for storing, building, and distributing container images and other OCI artifacts. Quay.io has gained widespread popularity among developers, organizations, and enterprises for establishing itself as one of the leading platforms in the containerization ecosystem. It offers both free and paid tiers to cater to various user needs. At its core, Quay.io serves as a centralized repository for storing, managing, and distributing container images. Quay.io is flexible, easy to use, and offers an intuitive web interface that allows users to quickly upload and manage their container images. Developers can create private repositories, ensuring sensitive or proprietary code remains secure within their organization. Additionally, users can set up access controls and manage team collaboration, enabling seamless sharing of container images among designated team members. Quay.io addresses container security concerns through its integrated image scanner, Clair . The service automatically scans container images for known vulnerabilities and security issues, providing developers with valuable insights into potential risks and suggesting remediation steps. Quay.io excels in automation and supports integration with popular Continuous Integration/Continuous Deployment (CI/CD) tools and platforms, enabling seamless automation of the container build and deployment processes. As a result, developers can streamline their workflows, significantly reducing manual intervention and improving overall development efficiency. Quay.io caters to the needs of both large and small-scale deployments. Its architecture and support for high availability ensures that organizations can rely on it for mission-critical applications. The platform can handle significant container image traffic and offers efficient replication and distribution mechanisms to deliver container images to various geographical locations. Quay.io has established itself as an active hub for container enthusiasts. Developers can discover a vast collection of pre-built, public container images shared by other users, making it easier to find useful tools, applications, and services for their projects. This open sharing ecosystem fosters collaboration and accelerates software development within the container community. As containerization continues to gain momentum in the software development landscape, Quay.io remains at the forefront, continually improving and expanding its services. The platform's commitment to security, ease of use, automation, and community engagement has solidified its position as a preferred container registry service for both individual developers and large organizations alike. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/about_quay_io/quayio-overview |
Network APIs | Network APIs OpenShift Container Platform 4.15 Reference guide for network APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_apis/index |
Serving | Serving Red Hat OpenShift Serverless 1.33 Getting started with Knative Serving and configuring services Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/serving/index |
Chapter 1. Managed cluster klusterlet advanced configuration | Chapter 1. Managed cluster klusterlet advanced configuration With Red Hat Advanced Cluster Management for Kubernetes klusterlet add-ons, you can further configure your managed clusters to improve performance and add functionality to your applications. See the following enablement options: Enabling klusterlet add-ons on clusters for cluster management Configuring nodeSelectors and tolerations for klusterlet add-ons Enabling cluster-wide proxy on existing cluster add-ons 1.1. Enabling klusterlet add-ons on clusters for cluster management After you install Red Hat Advanced Cluster Management for Kubernetes and then create or import clusters with multicluster engine operator you can enable the klusterlet add-ons for those managed clusters. The klusterlet add-ons are not enabled by default if you created or imported clusters unless you create or import with the Red Hat Advanced Cluster Management console. See the following available klusterlet add-ons: application-manager cert-policy-controller config-policy-controller iam-policy-controller governance-policy-framework search-collector Complete the following steps to enable the klusterlet add-ons for the managed clusters after Red Hat Advanced Cluster Management is installed: Create a YAML file that is similar to the following KlusterletAddonConfig , with the spec value that represents the add-ons: apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: <cluster_name> namespace: <cluster_name> spec: applicationManager: enabled: true certPolicyController: enabled: true policyController: 1 enabled: true searchCollector: enabled: true 1 The policy-controller add-on is divided into two add-ons: The governance-policy-framework and the config-policy-controller . As a result, the policyController controls the governance-policy-framework and the config-policy-controller managedClusterAddons . Save the file as klusterlet-addon-config.yaml . Apply the YAML by running the following command on the hub cluster: To verify whether the enabled managedClusterAddons are created after the KlusterletAddonConfig is created, run the following command: 1.2. Configuring nodeSelectors and tolerations for klusterlet add-ons In Red Hat Advanced Cluster Management, you can configure nodeSelector and tolerations for the following klusterlet add-ons: application-manager cert-policy-controller cluster-proxy config-policy-controller governance-policy-framework hypershift-addon iam-policy-controller managed-serviceaccount observability-controller search-collector submariner volsync work-manager Complete the following steps: Use the AddonDeploymentConfig API to create a configuration to specify the nodeSelector and tolerations on a certain namespace on the hub cluster. Create a file named addondeploymentconfig.yaml that is based on the following template: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: config-name 1 namespace: config-name-space 2 spec: nodePlacement: nodeSelector: node-selector 3 tolerations: tolerations 4 1 Replace config-name with the name of the AddonDeploymentConfig that you just created. 2 Replace config-namespace with the namespace of the AddonDeploymentConfig that you just created. 3 Replace node-selector with your node selector. 4 Replace tolerations with your tolerations. 
A completed AddOnDeploymentConfig file might resemble the following example: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: deploy-config namespace: open-cluster-management-hub spec: nodePlacement: nodeSelector: "node-dedicated": "acm-addon" tolerations: - effect: NoSchedule key: node-dedicated value: acm-addon operator: Equal Run the following command to apply the file that you created: Use the configuration that you created as the global default configuration for your add-on by running the following command: Replace addon-name with your add-on name. Replace config-name with the name of the AddonDeploymentConfig that you just created. Replace config-namespace with the namespace of the AddonDeploymentConfig that you just created. The nodeSelector and tolerations that you specified are applied to your add-on on each of the managed clusters. You can also override the global default AddonDeploymentConfig configuration for your add-on on a certain managed cluster by using the following steps: Use the AddonDeploymentConfig API to create another configuration to specify the nodeSelector and tolerations on the hub cluster. Link the new configuration that you created to your add-on ManagedClusterAddon on a managed cluster. Replace managed-cluster with your managed cluster name. Replace addon-name with your add-on name. Replace config-namespace with the namespace of the AddonDeploymentConfig that you just created. Replace config-name with the name of the AddonDeploymentConfig that you just created. The new configuration that you referenced in the add-on ManagedClusterAddon overrides the global default configuration that you previously defined in the ClusterManagementAddon add-on. 1.3. Enabling cluster-wide proxy on existing cluster add-ons You can configure the KlusterletAddonConfig in the cluster namespace to add the proxy environment variables to all the klusterlet add-on pods of the managed Red Hat OpenShift Container Platform clusters. Complete the following steps to configure the KlusterletAddonConfig to add the three environment variables to the pods of the klusterlet add-ons: Edit the KlusterletAddonConfig file that is in the namespace of the cluster that needs the proxy. You can use the console to find the resource, or you can edit from the terminal with the following command: Note: If you are working with only one cluster, you do not need <my-cluster-name> at the end of your command. See the following command: Edit the .spec.proxyConfig section of the file so it resembles the following example. The spec.proxyConfig is an optional section: spec proxyConfig: httpProxy: "<proxy_not_secure>" 1 httpsProxy: "<proxy_secure>" 2 noProxy: "<no_proxy>" 3 1 Replace proxy_not_secure with the address of the proxy server for http requests. For example, use http://192.168.123.145:3128 . 2 Replace proxy_secure with the address of the proxy server for https requests. For example, use https://192.168.123.145:3128 . 3 Replace no_proxy with a comma delimited list of IP addresses, hostnames, and domain names where traffic is not routed through the proxy. For example, use .cluster.local,.svc,10.128.0.0/14,example.com . 
If the OpenShift Container Platform cluster is created with cluster wide proxy configured on the hub cluster, the cluster wide proxy configuration values are added to the pods of the klusterlet add-ons as environment variables when the following conditions are met: The .spec.policyController.proxyPolicy in the addon section is enabled and set to OCPGlobalProxy . The .spec.applicationManager.proxyPolicy is enabled and set to CustomProxy . Note: The default value of proxyPolicy in the addon section is Disabled . See the following examples of proxyPolicy entries: apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: clusterName namespace: clusterName spec: proxyConfig: httpProxy: http://pxuser:[email protected]:3128 httpsProxy: http://pxuser:[email protected]:3128 noProxy: .cluster.local,.svc,10.128.0.0/14, example.com applicationManager: enabled: true proxyPolicy: CustomProxy policyController: enabled: true proxyPolicy: OCPGlobalProxy searchCollector: enabled: true proxyPolicy: Disabled certPolicyController: enabled: true proxyPolicy: Disabled Important: Global proxy settings do not impact alert forwarding. To set up alert forwarding for Red Hat Advanced Cluster Management hub clusters with a cluster-wide proxy, see Forwarding alerts for more details. | [
"apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: <cluster_name> namespace: <cluster_name> spec: applicationManager: enabled: true certPolicyController: enabled: true policyController: 1 enabled: true searchCollector: enabled: true",
"apply -f klusterlet-addon-config.yaml",
"get managedclusteraddons -n <cluster namespace>",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: config-name 1 namespace: config-name-space 2 spec: nodePlacement: nodeSelector: node-selector 3 tolerations: tolerations 4",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: AddOnDeploymentConfig metadata: name: deploy-config namespace: open-cluster-management-hub spec: nodePlacement: nodeSelector: \"node-dedicated\": \"acm-addon\" tolerations: - effect: NoSchedule key: node-dedicated value: acm-addon operator: Equal",
"apply -f addondeploymentconfig",
"patch clustermanagementaddons <addon-name> --type='json' -p='[{\"op\":\"add\", \"path\":\"/spec/supportedConfigs\", \"value\":[{\"group\":\"addon.open-cluster-management.io\",\"resource\":\"addondeploymentconfigs\", \"defaultConfig\":{\"name\":\"deploy-config\",\"namespace\":\"open-cluster-management-hub\"}}]}]'",
"-n <managed-cluster> patch managedclusteraddons <addon-name> --type='json' -p='[{\"op\":\"add\", \"path\":\"/spec/configs\", \"value\":[ {\"group\":\"addon.open-cluster-management.io\",\"resource\":\"addondeploymentconfigs\",\"namespace\":\"<config-namespace>\",\"name\":\"<config-name>\"} ]}]'",
"-n <my-cluster-name> edit klusterletaddonconfig <my-cluster-name>",
"-n <my-cluster-name> edit klusterletaddonconfig",
"spec proxyConfig: httpProxy: \"<proxy_not_secure>\" 1 httpsProxy: \"<proxy_secure>\" 2 noProxy: \"<no_proxy>\" 3",
"apiVersion: agent.open-cluster-management.io/v1 kind: KlusterletAddonConfig metadata: name: clusterName namespace: clusterName spec: proxyConfig: httpProxy: http://pxuser:[email protected]:3128 httpsProxy: http://pxuser:[email protected]:3128 noProxy: .cluster.local,.svc,10.128.0.0/14, example.com applicationManager: enabled: true proxyPolicy: CustomProxy policyController: enabled: true proxyPolicy: OCPGlobalProxy searchCollector: enabled: true proxyPolicy: Disabled certPolicyController: enabled: true proxyPolicy: Disabled"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/klusterlet_add-ons/acm-managed-adv-config |
Post-installation configuration | Post-installation configuration OpenShift Container Platform 4.7 Day 2 operations for OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc get dnses.config.openshift.io/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}",
"oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}' dns.config.openshift.io/cluster patched",
"oc get dnses.config.openshift.io/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}",
"oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF",
"ingresscontroller.operator.openshift.io \"default\" deleted ingresscontroller.operator.openshift.io/default replaced",
"oc get machine -n openshift-machine-api",
"NAME STATE TYPE REGION ZONE AGE lk4pj-master-0 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-master-1 running m4.xlarge us-east-1 us-east-1b 17m lk4pj-master-2 running m4.xlarge us-east-1 us-east-1a 17m lk4pj-worker-us-east-1a-5fzfj running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1a-vbghs running m4.xlarge us-east-1 us-east-1a 15m lk4pj-worker-us-east-1b-zgpzg running m4.xlarge us-east-1 us-east-1b 15m",
"oc edit machines -n openshift-machine-api <master_name> 1",
"spec: providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-f4b64... False True False 3 2 2 0 4h42m",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-06c9c4... True False False 3 3 3 0 4h42m worker rendered-worker-c1b41a... False True False 3 2 3 0 4h42m",
"oc describe mcp worker",
"Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3 Events: <none>",
"Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 2 Ready Machine Count: 2 Unavailable Machine Count: 1 Updated Machine Count: 3",
"oc get machineconfigs",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 00-worker 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-container-runtime 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m 01-master-kubelet 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-master-dde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m rendered-worker-fde... 2c9371fbb673b97a6fe8b1c52... 3.2.0 5h18m",
"oc describe machineconfigs 01-master-kubelet",
"Name: 01-master-kubelet Spec: Config: Ignition: Version: 3.2.0 Storage: Files: Contents: Source: data:, Mode: 420 Overwrite: true Path: /etc/kubernetes/cloud.conf Contents: Source: data:,kind%3A%20KubeletConfiguration%0AapiVersion%3A%20kubelet.config.k8s.io%2Fv1beta1%0Aauthentication%3A%0A%20%20x509%3A%0A%20%20%20%20clientCAFile%3A%20%2Fetc%2Fkubernetes%2Fkubelet-ca.crt%0A%20%20anonymous Mode: 420 Overwrite: true Path: /etc/kubernetes/kubelet.conf Systemd: Units: Contents: [Unit] Description=Kubernetes Kubelet Wants=rpc-statd.service network-online.target crio.service After=network-online.target crio.service ExecStart=/usr/bin/hyperkube kubelet --config=/etc/kubernetes/kubelet.conf \\",
"oc delete -f ./myconfig.yaml",
"cat << EOF | base64 pool 0.rhel.pool.ntp.org iburst 1 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony EOF",
"ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGli L2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAv dmFyL2xvZy9jaHJvbnkK",
"cat << EOF > ./99-masters-chrony-configuration.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-masters-chrony-configuration spec: config: ignition: config: {} security: tls: {} timeouts: {} version: 3.2.0 networkd: {} passwd: {} storage: files: - contents: source: data:text/plain;charset=utf-8;base64,ICAgIHNlcnZlciBjbG9jay5yZWRoYXQuY29tIGlidXJzdAogICAgZHJpZnRmaWxlIC92YXIvbGliL2Nocm9ueS9kcmlmdAogICAgbWFrZXN0ZXAgMS4wIDMKICAgIHJ0Y3N5bmMKICAgIGxvZ2RpciAvdmFyL2xvZy9jaHJvbnkK mode: 420 1 overwrite: true path: /etc/chrony.conf osImageURL: \"\" EOF",
"oc apply -f ./99-masters-chrony-configuration.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: config: ignition: version: 3.2.0 kernelArguments: - enforcing=0 3",
"oc create -f 05-worker-kernelarg-selinuxpermissive.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.20.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.20.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.20.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.20.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.20.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.20.0",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - rd.multipath=default - root=/dev/disk/by-label/dm-mpath-root",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - rd.multipath=default - root=/dev/disk/by-label/dm-mpath-root",
"oc create -f ./99-master-kargs-mpath.yaml",
"oc get MachineConfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-kargs-mpath 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.20.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.20.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.20.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.20.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.20.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.20.0",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"cat << EOF > 99-worker-realtime.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-realtime spec: kernelType: realtime EOF",
"oc create -f 99-worker-realtime.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-143-147.us-east-2.compute.internal Ready worker 103m v1.20.0 ip-10-0-146-92.us-east-2.compute.internal Ready worker 101m v1.20.0 ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.20.0",
"oc debug node/ip-10-0-143-147.us-east-2.compute.internal",
"Starting pod/ip-10-0-143-147us-east-2computeinternal-debug To use host binaries, run `chroot /host` sh-4.4# uname -a Linux <worker_node> 4.18.0-147.3.1.rt24.96.el8_1.x86_64 #1 SMP PREEMPT RT Wed Nov 27 18:29:55 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux",
"oc delete -f 99-worker-realtime.yaml",
"cat > /tmp/jrnl.conf <<EOF Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s EOF",
"export jrnl_cnf=USD( cat /tmp/jrnl.conf | base64 -w0 ) echo USDjrnl_cnf IyBEaXNhYmxlIHJhdGUgbGltaXRpbmcKUmF0ZUxpbWl0SW50ZXJ2YWw9MXMKUmF0ZUxpbWl0QnVyc3Q9MTAwMDAKU3RvcmFnZT12b2xhdGlsZQpDb21wcmVzcz1ubwpNYXhSZXRlbnRpb25TZWM9MzBzCg==",
"cat > /tmp/40-worker-custom-journald.yaml <<EOF apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 40-worker-custom-journald spec: config: ignition: config: {} security: tls: {} timeouts: {} version: 3.1.0 networkd: {} passwd: {} storage: files: - contents: source: data:text/plain;charset=utf-8;base64,USD{jrnl_cnf} verification: {} filesystem: root mode: 420 path: /etc/systemd/journald.conf systemd: {} osImageURL: \"\" EOF",
"oc apply -f /tmp/40-worker-custom-journald.yaml",
"oc get machineconfigpool NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m",
"oc get node | grep worker ip-10-0-0-1.us-east-2.compute.internal Ready worker 39m v0.0.0-master+USDFormat:%hUSD oc debug node/ip-10-0-0-1.us-east-2.compute.internal Starting pod/ip-10-0-141-142us-east-2computeinternal-debug sh-4.2# chroot /host sh-4.4# cat /etc/systemd/journald.conf Disable rate limiting RateLimitInterval=1s RateLimitBurst=10000 Storage=volatile Compress=no MaxRetentionSec=30s sh-4.4# exit",
"cat << EOF > 80-extensions.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 80-worker-extensions spec: config: ignition: version: 3.2.0 extensions: - usbguard EOF",
"oc create -f 80-extensions.yaml",
"oc get machineconfig 80-worker-extensions",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 80-worker-extensions 3.2.0 57s",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-35 True False False 3 3 3 0 34m worker rendered-worker-d8 False True False 3 1 1 0 34m",
"oc get node | grep worker",
"NAME STATUS ROLES AGE VERSION ip-10-0-169-2.us-east-2.compute.internal Ready worker 102m v1.18.3",
"oc debug node/ip-10-0-169-2.us-east-2.compute.internal",
"To use host binaries, run `chroot /host` sh-4.4# chroot /host sh-4.4# rpm -q usbguard usbguard-0.7.4-4.el8.x86_64.rpm",
"variant: openshift version: 4.9.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-worker-firmware-blob storage: files: - path: /var/lib/firmware/<package_name> 1 contents: local: <package_name> 2 mode: 0644 3 openshift: kernel_arguments: - 'firmware_class.path=/var/lib/firmware' 4",
"butane 98-worker-firmware-blob.bu -o 98-worker-firmware-blob.yaml --files-dir <directory_including_package_name>",
"oc apply -f 98-worker-firmware-blob.yaml",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc get mc | grep kubelet",
"99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1",
"oc label machineconfigpool worker custom-kubelet=set-max-pods",
"oc get machineconfig",
"oc describe node <node_name>",
"oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94",
"Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>",
"oc label machineconfigpool worker custom-kubelet=large-pods",
"oc create -f change-maxPods-cr.yaml",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc describe node <node_name>",
"Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1",
"oc get kubeletconfigs set-max-pods -o yaml",
"spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success",
"oc get ctrcfg",
"NAME AGE ctr-pid 24m ctr-overlay 15m ctr-level 5m45s",
"oc get mc | grep container",
"01-master-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m 01-worker-container-runtime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 57m 99-worker-generated-containerruntime b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m 99-worker-generated-containerruntime-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 17m 99-worker-generated-containerruntime-2 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 7m26s",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: pidsLimit: 2048 2 logLevel: debug 3 overlaySize: 8G 4 logSizeMax: \"-1\" 5",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' 1 containerRuntimeConfig: 2 pidsLimit: 2048 logLevel: debug overlaySize: 8G logSizeMax: \"-1\"",
"oc create -f <file_name>.yaml",
"oc get ContainerRuntimeConfig",
"NAME AGE overlay-size 3m19s",
"oc get machineconfigs | grep containerrun",
"99-worker-generated-containerruntime 2c9371fbb673b97a6fe8b1c52691999ed3a1bfc2 3.2.0 31s",
"oc get mcp worker",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-169 False True False 3 1 1 0 9h",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# crio config | egrep 'log_level|pids_limit|log_size_max'",
"pids_limit = 2048 log_size_max = -1 log_level = \"debug\"",
"sh-4.4# head -n 7 /etc/containers/storage.conf",
"[storage] driver = \"overlay\" runroot = \"/var/run/containers/storage\" graphroot = \"/var/lib/containers/storage\" [storage.options] additionalimagestores = [] size = \"8G\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: overlay-size spec: machineConfigPoolSelector: matchLabels: custom-crio: overlay-size containerRuntimeConfig: pidsLimit: 2048 logLevel: debug overlaySize: 8G",
"oc apply -f overlaysize.yml",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2020-07-09T15:46:34Z\" generation: 3 labels: custom-crio: overlay-size machineconfiguration.openshift.io/mco-built-in: \"\"",
"oc get machineconfigs",
"99-worker-generated-containerruntime 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m42s rendered-worker-xyz 4173030d89fbf4a7a0976d1665491a4d9a6e54f1 3.2.0 7m36s",
"oc get mcp worker",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz False True False 3 2 2 0 20h",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-xyz True False False 3 3 3 0 20h",
"head -n 7 /etc/containers/storage.conf [storage] driver = \"overlay\" runroot = \"/var/run/containers/storage\" graphroot = \"/var/lib/containers/storage\" [storage.options] additionalimagestores = [] size = \"8G\"",
"~ USD df -h Filesystem Size Used Available Use% Mounted on overlay 8.0G 8.0K 8.0G 0% /",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"oc get machinesets -n openshift-machine-api",
"oc get machine -n openshift-machine-api",
"oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"",
"oc adm cordon <node_name> oc adm drain <node_name>",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc get machines",
"spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false policy: name: \"\"",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1",
"oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.18.3+002a51f",
"oc label nodes <name> <key>=<value>",
"oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east",
"oc get nodes -l <key>=<value>,<key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.18.3+002a51f",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"template: metadata: labels: machine.openshift.io/cluster-api-cluster: agl030519-vplxk 1 machine.openshift.io/cluster-api-machine-role: worker 2 machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc label node <node-name> node-role.kubernetes.io/app=\"\"",
"oc label node <node-name> node-role.kubernetes.io/infra=\"\"",
"oc get nodes",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1",
"oc label node <node_name> <label>",
"oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=",
"cat infra.mcp.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2",
"oc create -f infra.mcp.yaml",
"oc get machineconfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d",
"cat infra.mc.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra",
"oc create -f infra.mc.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m",
"oc describe nodes <node_name>",
"describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule",
"oc adm taint nodes <node_name> <key>:<effect>",
"oc adm taint nodes node1 node-role.kubernetes.io/infra:NoSchedule",
"tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3",
"oc get ingresscontroller default -n openshift-ingress-operator -o yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default",
"oc edit ingresscontroller default -n openshift-ingress-operator",
"spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\"",
"oc get pod -n openshift-ingress -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>",
"oc get node <node_name> 1",
"NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.20.0",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: node-role.kubernetes.io/infra: \"\"",
"oc get pods -o wide -n openshift-image-registry",
"oc describe node <node_name>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: \"\" prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" grafana: nodeSelector: node-role.kubernetes.io/infra: \"\" k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"oc create -f cluster-monitoring-configmap.yaml",
"watch 'oc get pod -n openshift-monitoring -o wide'",
"oc delete pod -n openshift-monitoring <pod>",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pod kibana-5b8bdf44f9-ccpq9 -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.20.0 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.20.0 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.20.0",
"oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s",
"oc get pod kibana-7d85dcffc8-bfpfp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s",
"apiVersion: \"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15",
"oc create -f <filename>.yaml 1",
"apiVersion: \"autoscaling.openshift.io/v1beta1\" kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6",
"oc create -f <filename>.yaml 1",
"oc edit featuregate cluster",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: IPv6DualStackNoUpgrade 2",
"oc debug node/<node_name>",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/kubernetes/kubelet.conf",
"featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false",
"oc edit apiserver",
"spec: encryption: type: aescbc 1",
"oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: routes.route.openshift.io",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: secrets, configmaps",
"oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io",
"oc edit apiserver",
"spec: encryption: type: identity 1",
"oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"DecryptionCompleted Encryption mode set to identity and everything is decrypted",
"oc debug node/<node_name>",
"sh-4.2# chroot /host",
"sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup",
"found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup",
"oc get pods -n openshift-etcd -o wide | grep -v quorum-guard | grep etcd",
"etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>",
"oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table",
"Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com",
"sh-4.4# unset ETCDCTL_ENDPOINTS",
"sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag",
"Finished defragmenting etcd member[https://localhost:2379]",
"sh-4.4# etcdctl endpoint status -w table --cluster",
"+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"sh-4.4# etcdctl alarm list",
"memberID:12345678912345678912 alarm:NOSPACE",
"sh-4.4# etcdctl alarm disarm",
"sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp",
"sudo crictl ps | grep etcd | grep -v operator",
"sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp",
"sudo crictl ps | grep kube-apiserver | grep -v operator",
"sudo mv /var/lib/etcd/ /tmp",
"sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup",
"...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml",
"oc get nodes -w",
"NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.23.3+e419edf host-172-25-75-38 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-40 Ready master 3d20h v1.23.3+e419edf host-172-25-75-65 Ready master 3d20h v1.23.3+e419edf host-172-25-75-74 Ready infra,worker 3d20h v1.23.3+e419edf host-172-25-75-79 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-86 Ready worker 3d20h v1.23.3+e419edf host-172-25-75-98 Ready infra,worker 3d20h v1.23.3+e419edf",
"ssh -i <ssh-key-path> core@<master-hostname>",
"sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem",
"sudo systemctl restart kubelet.service",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc adm certificate approve <csr_name>",
"sudo crictl ps | grep etcd | grep -v operator",
"3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0",
"oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd",
"Unable to connect to the server: EOF",
"NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc get machine clustername-8qw5l-master-0 \\ 1 -n openshift-machine-api -o yaml > new-master-machine.yaml",
"status: addresses: - address: 10.0.131.183 type: InternalIP - address: ip-10-0-131-183.ec2.internal type: InternalDNS - address: ip-10-0-131-183.ec2.internal type: Hostname lastUpdated: \"2020-04-20T17:44:29Z\" nodeRef: kind: Node name: ip-10-0-131-183.ec2.internal uid: acca4411-af0d-4387-b73e-52b2484295ad phase: Running providerStatus: apiVersion: awsproviderconfig.openshift.io/v1beta1 conditions: - lastProbeTime: \"2020-04-20T16:53:50Z\" lastTransitionTime: \"2020-04-20T16:53:50Z\" message: machine successfully created reason: MachineCreationSucceeded status: \"True\" type: MachineCreation instanceId: i-0fdb85790d76d0c3f instanceState: stopped kind: AWSMachineProviderStatus",
"apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: clustername-8qw5l-master-3",
"providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f",
"annotations: machine.openshift.io/instance-state: running generation: 2",
"resourceVersion: \"13291\" uid: a282eb70-40a2-4e89-8009-d05dd420d31a",
"oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc apply -f new-master-machine.yaml",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc login -u <cluster_admin> 1",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd",
"etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h",
"oc get poddisruptionbudget --all-namespaces",
"NAMESPACE NAME MIN-AVAILABLE SELECTOR another-project another-pdb 4 bar=foo test-project my-pdb 2 foo=bar",
"apiVersion: policy/v1beta1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: foo: bar",
"apiVersion: policy/v1beta1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: foo: bar",
"oc create -f </path/to/file> -n <project_name>",
"oc -n openshift-cloud-credential-operator get CredentialsRequest -o json | jq -r '.items[] | select (.spec.providerSpec.kind==\"<provider_spec>\") | .spec.secretRef'",
"{ \"name\": \"ebs-cloud-credentials\", \"namespace\": \"openshift-cluster-csi-drivers\" } { \"name\": \"cloud-credential-operator-iam-ro-creds\", \"namespace\": \"openshift-cloud-credential-operator\" }",
"oc delete secret <secret_name> \\ 1 -n <secret_namespace> 2",
"oc delete secret ebs-cloud-credentials -n openshift-cluster-csi-drivers",
"oc get is <imagestream> -n openshift -o json | jq .spec.tags[].from.name | grep registry.redhat.io",
"oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest",
"oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge",
"oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator",
"oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge",
"oc import-image is/must-gather -n openshift",
"oc adm must-gather --image=USD(oc adm release info --image-for must-gather)",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-ansible-2.9-rpms\" --enable=\"rhel-7-server-ose-4.7-rpms\"",
"yum install openshift-ansible openshift-clients jq",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-fast-datapath-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-optional-rpms\" --enable=\"rhel-7-server-ose-4.7-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel7-0.example.com mycluster-rhel7-1.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes -o wide",
"oc adm cordon <node_name> 1",
"oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1",
"oc delete nodes <node_name> 1",
"oc get nodes -o wide",
"coreos.inst.install_dev=sda 1 coreos.inst.ignition_url=http://example.com/worker.ign 2",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 1 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 2",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8",
"oc apply -f healthcheck.yml",
"oc get machinesets -n openshift-machine-api",
"oc get machine -n openshift-machine-api",
"oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"",
"oc adm cordon <node_name> oc adm drain <node_name>",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc get machines",
"kubeletConfig: podsPerCore: 10",
"kubeletConfig: maxPods: 250",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc get mc | grep kubelet",
"99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1",
"oc label machineconfigpool worker custom-kubelet=set-max-pods",
"oc get machineconfig",
"oc describe node <node_name>",
"oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94",
"Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>",
"oc label machineconfigpool worker custom-kubelet=large-pods",
"oc create -f change-maxPods-cr.yaml",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc describe node <node_name>",
"Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1",
"oc get kubeletconfigs set-max-pods -o yaml",
"spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success",
"oc edit machineconfigpool worker",
"spec: maxUnavailable: <node_count>",
"oc label node perf-node.example.com cpumanager=true",
"oc edit machineconfigpool worker",
"metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"oc create -f cpumanager-kubeletconfig.yaml",
"oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7",
"\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]",
"oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager",
"cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"cat cpumanager-pod.yaml",
"apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"",
"oc create -f cpumanager-pod.yaml",
"oc describe pod cpumanager",
"Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true",
"├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause",
"cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"USDi \"; cat USDi ; done",
"cpuset.cpus 1 tasks 32706",
"grep ^Cpus_allowed_list /proc/32706/status",
"Cpus_allowed_list: 1",
"cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com",
"Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)",
"NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s",
"apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages",
"oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages",
"oc create -f hugepages-tuned-boottime.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"",
"oc create -f hugepages-mcp.yaml",
"oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi",
"service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as reseting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }",
"oc describe machineconfig <name>",
"oc describe machineconfig 00-worker",
"Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3",
"oc create -f devicemgr.yaml",
"kubeletconfig.machineconfiguration.openshift.io/devicemgr created",
"spec: . template: . spec: taints: - effect: NoExecute key: key1 value: value1 .",
"spec: . template: . spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 .",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master",
"spec: . template: . spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600",
"oc adm taint nodes node1 key1=value1:NoSchedule",
"oc adm taint nodes node1 key1=value1:NoExecute",
"oc adm taint nodes node1 key2=value2:NoSchedule",
"spec: . template: . spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\"",
"spec: . template: . spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300",
"spec: . template: . spec: tolerations: - operator: \"Exists\"",
"spec: . template: . spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2",
"spec: . template: . spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 key1=value1:NoExecute",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master",
"spec: . template: . spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2",
"spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600",
"oc edit machineset <machineset>",
"spec: . template: . spec: taints: - effect: NoExecute key: key1 value: value1 .",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc adm taint nodes node1 dedicated=groupName:NoSchedule",
"spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600",
"oc adm taint nodes <node-name> disktype=ssd:NoSchedule",
"oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule",
"oc adm taint nodes <node-name> <key>-",
"oc adm taint nodes ip-10-0-132-248.ec2.internal key1-",
"node/ip-10-0-132-248.ec2.internal untainted",
"spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600",
"oc edit KubeletConfig cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2",
"spec: containers: - name: nginx image: nginx",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: v1 kind: Namespace metadata: . labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" .",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: . mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .",
"apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"4.7\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f <file-name>.yaml",
"oc create -f cro-sub.yaml",
"oc project clusterresourceoverride-operator",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"oc create -f <file-name>.yaml",
"oc create -f cro-cr.yaml",
"oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: . mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1",
"sysctl -a |grep commit",
"vm.overcommit_memory = 1",
"sysctl -a |grep panic",
"vm.panic_on_oom = 0",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: small-pods 1",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: cpuCfsQuota: 3 - \"false\"",
"sysctl -w vm.overcommit_memory=0",
"quota.openshift.io/cluster-resource-override-enabled: \"false\"",
"oc create -f <file-name>.yaml",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"Name: worker Namespace: Labels: custom-kubelet=small-pods 1",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10",
"oc create -f <file-name>.yaml",
"oc create -f gc-container.yaml",
"kubeletconfig.machineconfiguration.openshift.io/gc-container created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator",
"profile: - name: tuned_profile_1 data: | # Tuned profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other Tuned daemon plugins supported by the containerized Tuned - name: tuned_profile_n data: | # Tuned profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - name: \"openshift\" data: | [main] summary=Optimize systems running OpenShift (parent profile) include=USD{f:virt_check:virtual-guest:throughput-performance} [selinux] avc_cache_threshold=8192 [net] nf_conntrack_hashsize=131072 [sysctl] net.ipv4.ip_forward=1 kernel.pid_max=>4194304 net.netfilter.nf_conntrack_max=1048576 net.ipv4.conf.all.arp_announce=2 net.ipv4.neigh.default.gc_thresh1=8192 net.ipv4.neigh.default.gc_thresh2=32768 net.ipv4.neigh.default.gc_thresh3=65536 net.ipv6.neigh.default.gc_thresh1=8192 net.ipv6.neigh.default.gc_thresh2=32768 net.ipv6.neigh.default.gc_thresh3=65536 vm.max_map_count=262144 [sysfs] /sys/module/nvme_core/parameters/io_timeout=4294967295 /sys/module/nvme_core/parameters/max_retries=10 - name: \"openshift-control-plane\" data: | [main] summary=Optimize systems running OpenShift control plane include=openshift [sysctl] # ktune sysctl settings, maximizing i/o throughput # # Minimal preemption granularity for CPU-bound tasks: # (default: 1 msec# (1 + ilog(ncpus)), units: nanoseconds) kernel.sched_min_granularity_ns=10000000 # The total time the scheduler will consider a migrated process # \"cache hot\" and thus less likely to be re-migrated # (system default is 500000, i.e. 0.5 ms) kernel.sched_migration_cost_ns=5000000 # SCHED_OTHER wake-up granularity. # # Preemption granularity when tasks wake up. Lower the value to # improve wake-up latency and throughput for latency critical tasks. kernel.sched_wakeup_granularity_ns=4000000 - name: \"openshift-node\" data: | [main] summary=Optimize systems running OpenShift nodes include=openshift [sysctl] net.ipv4.tcp_fastopen=3 fs.inotify.max_user_watches=65536 fs.inotify.max_user_instances=8192 recommend: - profile: \"openshift-control-plane\" priority: 30 match: - label: \"node-role.kubernetes.io/master\" - label: \"node-role.kubernetes.io/infra\" - profile: \"openshift-node\" priority: 40",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: small-pods 1",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:",
"apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4",
"oc create -f user-ca-bundle.yaml",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: http://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5",
"oc get dnses.config.openshift.io/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}",
"oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}' dns.config.openshift.io/cluster patched",
"oc get dnses.config.openshift.io/cluster -o yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}",
"oc patch network.config.openshift.io cluster --type=merge -p '{ \"spec\": { \"serviceNodePortRange\": \"30000-<port>\" } }'",
"network.config.openshift.io/cluster patched",
"oc get configmaps -n openshift-kube-apiserver config -o jsonpath=\"{.data['config\\.yaml']}\" | grep -Eo '\"service-node-port-range\":[\"[[:digit:]]+-[[:digit:]]+\"]'",
"\"service-node-port-range\":[\"30000-33000\"]",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017",
"touch <policy_name>.yaml",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: []",
"kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}",
"oc apply -f <policy_name>.yaml -n <namespace>",
"networkpolicy \"default-deny\" created",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF",
"cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF",
"cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF",
"oc describe networkpolicy",
"Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress",
"oc adm create-bootstrap-project-template -o yaml > template.yaml",
"oc create -f template.yaml -n openshift-config",
"oc edit project.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>",
"oc edit template <project_template> -n openshift-config",
"objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress",
"oc new-project <project> 1",
"oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s",
"openstack port show <cluster_name>-<cluster_ID>-ingress-port",
"openstack floating ip set --port <ingress_port_ID> <apps_FIP>",
"*.apps.<cluster_name>.<base_domain> IN A <apps_FIP>",
"<apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> grafana-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain>",
"oc edit networks.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4",
"kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: gp2 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp2",
"storageclass.kubernetes.io/is-default-class: \"true\"",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"",
"kubernetes.io/description: My Storage Class Description",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: gold provisioner: kubernetes.io/cinder parameters: type: fast 1 availability: nova 2 fsType: ext4 3",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: slow provisioner: kubernetes.io/aws-ebs parameters: type: io1 1 iopsPerGB: \"10\" 2 encrypted: \"true\" 3 kmsKeyId: keyvalue 4 fsType: ext4 5",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: managed-premium provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 1 allowVolumeExpansion: true parameters: kind: Managed 2 storageaccounttype: Premium_LRS 3 reclaimPolicy: Delete",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']",
"oc adm policy add-cluster-role-to-user <persistent-volume-binder-role>",
"system:serviceaccount:kube-system:persistent-volume-binder",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: standard provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 1 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: slow provisioner: kubernetes.io/vsphere-volume 1 parameters: diskformat: thin 2",
"oc get storageclass",
"NAME TYPE gp2 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs",
"oc patch storageclass gp2 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'",
"oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'",
"oc get storageclass",
"NAME TYPE gp2 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs",
"apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: identityProviders: - name: my_identity_provider 1 mappingMethod: claim 2 type: HTPasswd htpasswd: fileData: name: htpass-secret 3",
"oc describe clusterrole.rbac",
"Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch 
update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] 
catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list 
watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]",
"oc describe clusterrolebinding.rbac",
"Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api",
"oc describe rolebinding.rbac",
"oc describe rolebinding.rbac -n joe-project",
"Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project",
"oc adm policy add-role-to-user <role> <user> -n <project>",
"oc adm policy add-role-to-user admin alice -n joe",
"oc describe rolebinding.rbac -n <project>",
"oc describe rolebinding.rbac -n joe",
"Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe",
"oc create role <name> --verb=<verb> --resource=<resource> -n <project>",
"oc create role podview --verb=get --resource=pod -n blue",
"oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue",
"oc create clusterrole <name> --verb=<verb> --resource=<resource>",
"oc create clusterrole podviewonly --verb=get --resource=pod",
"oc adm policy add-cluster-role-to-user cluster-admin <user>",
"INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>",
"oc delete secrets kubeadmin -n kube-system",
"oc edit image.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-j5cd0qt-f76d1-vfj5x-master-0 Ready master 98m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-master-1 Ready,SchedulingDisabled master 99m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-master-2 Ready master 98m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-worker-b-nsnd4 Ready worker 90m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-worker-c-5z2gz NotReady,SchedulingDisabled worker 90m v1.19.0+7070803 ci-ln-j5cd0qt-f76d1-vfj5x-worker-d-stsjv Ready worker 90m v1.19.0+7070803",
"apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----",
"oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config",
"oc edit image.config.openshift.io cluster",
"spec: additionalTrustedCA: name: registry-config",
"skopeo copy docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 docker://example.io/example/ubi-minimal",
"apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 source: registry.access.redhat.com/ubi8/ubi-minimal 2 - mirrors: - example.com/example/ubi-minimal source: registry.access.redhat.com/ubi8/ubi-minimal - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 3",
"oc create -f registryrepomirror.yaml",
"oc get node",
"NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.20.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.20.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.20.0 ip-10-0-147-35.ec2.internal Ready,SchedulingDisabled worker 7m v1.20.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.20.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.20.0",
"oc debug node/ip-10-0-147-35.ec2.internal",
"Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`",
"sh-4.2# chroot /host",
"sh-4.2# cat /etc/containers/registries.conf",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] location = \"registry.access.redhat.com/ubi8/\" insecure = false blocked = false mirror-by-digest-only = true prefix = \"\" [[registry.mirror]] location = \"example.io/example/ubi8-minimal\" insecure = false [[registry.mirror]] location = \"example.com/example/ubi8-minimal\" insecure = false",
"sh-4.2# podman pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"oc apply -f sub.yaml"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/post-installation_configuration/index |
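The command strings above from the OpenShift post-installation guide show individual NetworkPolicy objects and the oc commands to apply and inspect them. The short shell sketch below simply strings a few of those published snippets together: it locks a project down with a deny-by-default policy, re-allows traffic from the OpenShift ingress controller, and then lists the resulting policies. It is an illustrative sketch added by the editor, not part of the original documentation, and the namespace name my-project is a placeholder for your own project.

#!/usr/bin/env bash
# Placeholder namespace; replace with your own project name.
NAMESPACE=my-project

# Deny all ingress traffic to pods in the namespace by default.
oc apply -n "${NAMESPACE}" -f - <<'EOF'
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-by-default
spec:
  podSelector: {}
  ingress: []
EOF

# Re-allow traffic that arrives through the OpenShift ingress controller.
oc apply -n "${NAMESPACE}" -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress
  podSelector: {}
  policyTypes:
  - Ingress
EOF

# Verify that both policies now exist in the namespace.
oc get networkpolicy -n "${NAMESPACE}"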
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_cloud-init_for_rhel_8/proc_providing-feedback-on-red-hat-documentation_cloud-content |
Part I. Designing a decision service using DMN models | Part I. Designing a decision service using DMN models As a business analyst or business rules developer, you can use Decision Model and Notation (DMN) to model a decision service graphically. The decision requirements of a DMN decision model are determined by a decision requirements graph (DRG) that is depicted in one or more decision requirements diagrams (DRDs). A DRD can represent part or all of the overall DRG for the DMN model. DRDs trace business decisions from start to finish, with each decision node using logic defined in DMN boxed expressions such as decision tables. Red Hat Process Automation Manager provides runtime support for DMN 1.1, 1.2, 1.3, and 1.4 models at conformance level 3, and design support for DMN 1.2 models at conformance level 3. You can design your DMN models directly in Business Central or with the Red Hat Process Automation Manager DMN modeler in VS Code, or import existing DMN models into your Red Hat Process Automation Manager projects for deployment and execution. Any DMN 1.1 and 1.3 models that do not contain DMN 1.3 features and that you import into Business Central, open in the DMN designer, and save are converted to DMN 1.2 models. For more information about DMN, see the Object Management Group (OMG) Decision Model and Notation specification . For a step-by-step tutorial with an example DMN decision service, see Getting started with decision services . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/assembly-dmn-models |
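The entry above describes DRGs, DRDs, and boxed expressions conceptually but contains no model source. As a hedged illustration only (not taken from the product documentation; the model namespace, ids, and node names are invented), the following is a minimal DMN 1.2 XML model with one input node and one decision whose logic is a FEEL literal expression. This is the kind of artifact the Business Central and VS Code DMN modelers produce.

<?xml version="1.0" encoding="UTF-8"?>
<dmn:definitions xmlns:dmn="http://www.omg.org/spec/DMN/20180521/MODEL/"
                 id="_greetingModel" name="Greeting"
                 namespace="https://example.org/dmn/greeting">
  <!-- Input data node: the value supplied by the caller. -->
  <dmn:inputData id="_nameInput" name="Name">
    <dmn:variable id="_nameVariable" name="Name" typeRef="string"/>
  </dmn:inputData>
  <!-- Decision node: its logic is a FEEL literal expression over the input. -->
  <dmn:decision id="_greetingDecision" name="Greeting Decision">
    <dmn:variable id="_greetingVariable" name="Greeting Decision" typeRef="string"/>
    <dmn:informationRequirement id="_requirement1">
      <dmn:requiredInput href="#_nameInput"/>
    </dmn:informationRequirement>
    <dmn:literalExpression id="_expression1">
      <dmn:text>"Hello, " + Name</dmn:text>
    </dmn:literalExpression>
  </dmn:decision>
</dmn:definitions>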
Chapter 6. Configuring the Red Hat Build of OptaPlanner solver | Chapter 6. Configuring the Red Hat Build of OptaPlanner solver You can use the following methods to configure your OptaPlanner solver: Use an XML file. Use the SolverConfig API. Add class annotations and JavaBean property annotations on the domain model. Control the method that OptaPlanner uses to access your domain. Define custom properties. 6.1. Using an XML file to configure the OptaPlanner solver Each example project has a solver configuration file that you can edit. The <EXAMPLE>SolverConfig.xml file is located in the org.optaplanner.optaplanner-8.38.0.Final-redhat-00004/optaplanner-examples/src/main/resources/org/optaplanner/examples/<EXAMPLE> directory, where <EXAMPLE> is the name of the OptaPlanner example project. Alternatively, you can create a SolverFactory from a file with SolverFactory.createFromXmlFile() . However, for portability reasons, a classpath resource is recommended. Both a Solver and a SolverFactory have a generic type called Solution_ , which is the class representing a planning problem and solution. OptaPlanner makes it relatively easy to switch optimization algorithms by changing the configuration. Procedure Build a Solver instance with the SolverFactory . Configure the solver configuration XML file: Define the model. Define the score function. Optional: Configure the optimization algorithm. The following example is a solver XML file for the NQueens problem: <?xml version="1.0" encoding="UTF-8"?> <solver xmlns="https://www.optaplanner.org/xsd/solver" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://www.optaplanner.org/xsd/solver https://www.optaplanner.org/xsd/solver/solver.xsd"> <!-- Define the model --> <solutionClass>org.optaplanner.examples.nqueens.domain.NQueens</solutionClass> <entityClass>org.optaplanner.examples.nqueens.domain.Queen</entityClass> <!-- Define the score function --> <scoreDirectorFactory> <scoreDrl>org/optaplanner/examples/nqueens/optional/nQueensConstraints.drl</scoreDrl> </scoreDirectorFactory> <!-- Configure the optimization algorithms (optional) --> <termination> ... </termination> <constructionHeuristic> ... </constructionHeuristic> <localSearch> ... </localSearch> </solver> Note On some environments, for example OSGi and JBoss modules, classpath resources (such as the solver config, score DRLs, and domain classes) in your JAR files might not be available to the default ClassLoader of the optaplanner-core JAR file. In those cases, provide the ClassLoader of your classes as a parameter: SolverFactory<NQueens> solverFactory = SolverFactory.createFromXmlResource( ".../nqueensSolverConfig.xml", getClass().getClassLoader()); Configure the SolverFactory with a solver configuration XML file, provided as a classpath resource as defined by ClassLoader.getResource() : SolverFactory<NQueens> solverFactory = SolverFactory.createFromXmlResource( "org/optaplanner/examples/nqueens/optional/nqueensSolverConfig.xml"); Solver<NQueens> solver = solverFactory.buildSolver(); 6.2. Using the Java API to configure the OptaPlanner solver You can configure a solver by using the SolverConfig API. This is especially useful to change values dynamically at runtime.
The following example changes the running time based on system properties before building the Solver in the NQueens project: SolverConfig solverConfig = SolverConfig.createFromXmlResource( "org/optaplanner/examples/nqueens/optional/nqueensSolverConfig.xml"); solverConfig.withTerminationConfig(new TerminationConfig() .withMinutesSpentLimit(userInput)); SolverFactory<NQueens> solverFactory = SolverFactory.create(solverConfig); Solver<NQueens> solver = solverFactory.buildSolver(); Every element in the solver configuration XML file is available as a Config class or a property on a Config class in the package namespace org.optaplanner.core.config . These Config classes are the Java representation of the XML format. They build the runtime components of the package namespace org.optaplanner.core.impl and assemble them into an efficient Solver . Note To configure a SolverFactory dynamically for each user request, build a template SolverConfig during initialization and copy it with the copy constructor for each user request. The following example shows how to do this with the NQueens problem: private SolverConfig template; public void init() { template = SolverConfig.createFromXmlResource( "org/optaplanner/examples/nqueens/optional/nqueensSolverConfig.xml"); template.setTerminationConfig(new TerminationConfig()); } // Called concurrently from different threads public void userRequest(..., long userInput) { SolverConfig solverConfig = new SolverConfig(template); // Copy it solverConfig.getTerminationConfig().setMinutesSpentLimit(userInput); SolverFactory<NQueens> solverFactory = SolverFactory.create(solverConfig); Solver<NQueens> solver = solverFactory.buildSolver(); ... } 6.3. OptaPlanner annotation You must specify which classes in your domain model are planning entities, which properties are planning variables, and so on. Use one of the following methods to add annotations to your OptaPlanner project: Add class annotations and JavaBean property annotations on the domain model. The property annotations must be on the getter method, not on the setter method. Annotated getter methods do not need to be public. This is the recommended method. Add class annotations and field annotations on the domain model. Annotated fields do not need to be public. 6.4. Specifying OptaPlanner domain access By default, OptaPlanner accesses your domain using reflection. Reflection is reliable but slow compared to direct access. Alternatively, you can configure OptaPlanner to access your domain using Gizmo, which will generate bytecode that directly accesses the fields and methods of your domain without reflection. However, this method has the following restrictions: The planning annotations can only be on public fields and public getters. io.quarkus.gizmo:gizmo must be on the classpath. Note These restrictions do not apply when you use OptaPlanner with Quarkus because Gizmo is the default domain access type. Procedure To use Gizmo outside of Quarkus, set the domainAccessType in the solver configuration: <solver> <domainAccessType>GIZMO</domainAccessType> </solver> 6.5. Configuring custom properties In your OptaPlanner projects, you can add custom properties to solver configuration elements that instantiate classes and have documents that explicitly mention custom properties. Prerequisites You have a solver. Procedure Add a custom property. 
For example, if your EasyScoreCalculator has heavy calculations that are cached and you want to increase the cache size in one benchmark, add the myCacheSize property: <scoreDirectorFactory> <easyScoreCalculatorClass>...MyEasyScoreCalculator</easyScoreCalculatorClass> <easyScoreCalculatorCustomProperties> <property name="myCacheSize" value="1000"/><!-- Override value --> </easyScoreCalculatorCustomProperties> </scoreDirectorFactory> Add a public setter for each custom property, which is called when a Solver is built. public class MyEasyScoreCalculator extends EasyScoreCalculator<MySolution, SimpleScore> { private int myCacheSize = 500; // Default value @SuppressWarnings("unused") public void setMyCacheSize(int myCacheSize) { this.myCacheSize = myCacheSize; } ... } Most value types are supported, including boolean , int , double , BigDecimal , String and enums . | [
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <solver xmlns=\"https://www.optaplanner.org/xsd/solver\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"https://www.optaplanner.org/xsd/solver https://www.optaplanner.org/xsd/solver/solver.xsd\"> <!-- Define the model --> <solutionClass>org.optaplanner.examples.nqueens.domain.NQueens</solutionClass> <entityClass>org.optaplanner.examples.nqueens.domain.Queen</entityClass> <!-- Define the score function --> <scoreDirectorFactory> <scoreDrl>org/optaplanner/examples/nqueens/optional/nQueensConstraints.drl</scoreDrl> </scoreDirectorFactory> <!-- Configure the optimization algorithms (optional) --> <termination> </termination> <constructionHeuristic> </constructionHeuristic> <localSearch> </localSearch> </solver>",
"SolverFactory<NQueens> solverFactory = SolverFactory.createFromXmlResource( \".../nqueensSolverConfig.xml\", getClass().getClassLoader());",
"SolverFasctory<NQueens> solverFactory = SolverFactory.createFromXmlResource( \"org/optaplanner/examples/nqueens/optional/nqueensSolverConfig.xml\"); Solver<NQueens> solver = solverFactory.buildSolver();",
"SolverConfig solverConfig = SolverConfig.createFromXmlResource( \"org/optaplanner/examples/nqueens/optional/nqueensSolverConfig.xml\"); solverConfig.withTerminationConfig(new TerminationConfig() .withMinutesSpentLimit(userInput)); SolverFactory<NQueens> solverFactory = SolverFactory.create(solverConfig); Solver<NQueens> solver = solverFactory.buildSolver();",
"private SolverConfig template; public void init() { template = SolverConfig.createFromXmlResource( \"org/optaplanner/examples/nqueens/optional/nqueensSolverConfig.xml\"); template.setTerminationConfig(new TerminationConfig()); } // Called concurrently from different threads public void userRequest(..., long userInput) { SolverConfig solverConfig = new SolverConfig(template); // Copy it solverConfig.getTerminationConfig().setMinutesSpentLimit(userInput); SolverFactory<NQueens> solverFactory = SolverFactory.create(solverConfig); Solver<NQueens> solver = solverFactory.buildSolver(); }",
"<solver> <domainAccessType>GIZMO</domainAccessType> </solver>",
"<scoreDirectorFactory> <easyScoreCalculatorClass>...MyEasyScoreCalculator</easyScoreCalculatorClass> <easyScoreCalculatorCustomProperties> <property name=\"myCacheSize\" value=\"1000\"/><!-- Override value --> </easyScoreCalculatorCustomProperties> </scoreDirectorFactory>",
"public class MyEasyScoreCalculator extends EasyScoreCalculator<MySolution, SimpleScore> { private int myCacheSize = 500; // Default value @SuppressWarnings(\"unused\") public void setMyCacheSize(int myCacheSize) { this.myCacheSize = myCacheSize; } }"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_optaplanner/8.38/html/developing_solvers_with_red_hat_build_of_optaplanner/configuring-planner-proc_optaplanner-solver |
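Section 6.3 of the entry above lists the annotation options (class annotations plus getter or field annotations) but does not show them on a class. The Java sketch below, written by the editor in the style of the NQueens domain used throughout the chapter, illustrates the getter-based style; it assumes a Row class and a value range provider named rowRange defined elsewhere (for example, on the solution class), so treat it as an illustration rather than the example project's actual code.

import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;

// Marks the class whose planning variable OptaPlanner changes during solving.
@PlanningEntity
public class Queen {

    private int columnIndex; // Fixed problem data; the solver never changes it.
    private Row row;         // Backing field of the planning variable.

    // Property annotation style: the annotation goes on the getter, not the setter.
    // "rowRange" is an assumed @ValueRangeProvider id declared on the solution class.
    @PlanningVariable(valueRangeProviderRefs = {"rowRange"})
    public Row getRow() {
        return row;
    }

    public void setRow(Row row) {
        this.row = row;
    }

    public int getColumnIndex() {
        return columnIndex;
    }
}

With the field-annotation style mentioned in the same section, the same @PlanningVariable annotation would sit directly on the row field instead of the getter, and neither the field nor the getter needs to be public when reflection-based domain access is used.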