title | content | commands | url
---|---|---|---|
2.5. Testing the Resource Configuration | 2.5. Testing the Resource Configuration In the cluster status display shown in Section 2.4, "Creating the Resources and Resource Groups with the pcs Command" , all of the resources are running on node z1.example.com . You can test whether the resource group fails over to node z2.example.com by using the following procedure to put the first node in standby mode, after which the node will no longer be able to host resources. The following command puts node z1.example.com in standby mode. After putting node z1 in standby mode, check the cluster status. Note that the resources should now all be running on z2 . The web site at the defined IP address should still display, without interruption. To remove z1 from standby mode, enter the following command. Note Removing a node from standby mode does not in itself cause the resources to fail back over to that node. This will depend on the resource-stickiness value for the resources. For information on the resource-stickiness meta attribute, see Configuring a Resource to Prefer its Current Node in the Red Hat High Availability Add-On Reference . | [
"root@z1 ~]# pcs node standby z1.example.com",
"pcs status Cluster name: my_cluster Last updated: Wed Jul 31 17:16:17 2013 Last change: Wed Jul 31 17:18:34 2013 via crm_attribute on z1.example.com Stack: corosync Current DC: z2.example.com (2) - partition with quorum Version: 1.1.10-5.el7-9abe687 2 Nodes configured 6 Resources configured Node z1.example.com (1): standby Online: [ z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: apachegroup my_lvm (ocf::heartbeat:LVM): Started z2.example.com my_fs (ocf::heartbeat:Filesystem): Started z2.example.com VirtualIP (ocf::heartbeat:IPaddr2): Started z2.example.com Website (ocf::heartbeat:apache): Started z2.example.com",
"root@z1 ~]# pcs node unstandby z1.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-unittest-haaa |
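For the resource-stickiness behavior mentioned in the note above, the following is a minimal, hedged sketch of how the stickiness default can be inspected and adjusted with pcs before repeating the standby test; the exact `pcs resource defaults` syntax varies slightly between pcs versions, so treat the commands as illustrative rather than authoritative.

```bash
# Show the currently configured resource defaults (resource-stickiness, if any).
[root@z1 ~]# pcs resource defaults

# Set a cluster-wide stickiness so resources prefer to stay where they are running;
# a value of 0 would instead allow them to fail back to z1 after it leaves standby.
[root@z1 ~]# pcs resource defaults resource-stickiness=100

# After taking z1 out of standby, confirm where the apachegroup resources run.
[root@z1 ~]# pcs status resources
```

With stickiness set, rerunning the standby/unstandby cycle should leave the group on z2.example.com until you move it explicitly.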
Chapter 2. Enable the RHEL for SAP Applications or RHEL for SAP Solutions subscriptions | Chapter 2. Enable the RHEL for SAP Applications or RHEL for SAP Solutions subscriptions For running SAP NetWeaver application servers, the RHEL for SAP Applications subscription can be used if the RHEL systems do not need to be locked to a specific RHEL 9 minor release. For running SAP HANA, or SAP NetWeaver or S/4HANA application servers that should be tied to the same RHEL 9 minor release as SAP HANA, one of the following subscriptions is required to access Update Services for SAP Solutions (E4S) : for the x86_64 platform: Red Hat Enterprise Linux for SAP Solutions for the PowerPC Little Endian ( ppc64le ) platform: Red Hat Enterprise Linux for SAP Solutions for Power, LE 2.1. Detach existing subscriptions (already registered systems only) Perform the following steps if the SAP system was previously registered using another RHEL subscription. Find the serial number of the subscription that the system is currently subscribed to: # subscription-manager list --consumed | \ awk '/Subscription Name:/|| /Serial:/|| /Pool ID:/|| /Service Type:/{print} /Service Level:/{printf ("%s\n\n", $0)}' Remove the subscription from the system, using the following command. Replace the string <SERIAL> with the serial number shown in the output of the command. # subscription-manager remove --serial=<SERIAL> 2.2. Attach the RHEL for SAP Applications or RHEL for SAP Solutions subscription To attach the RHEL for SAP Applications or RHEL for SAP Solutions subscriptions, perform the following steps: Find the pool ID of the subscription: # subscription-manager list --available --matches='RHEL for SAP*' | \ awk '/Subscription Name:/|| /Pool ID:/|| /Service Type:/{print} /Service Level:/{printf ("%s\n\n", $0)}' Attach the subscription to the system, using the following command. Replace the string <POOL_ID> with the actual pool ID (or one of the pool IDs) shown in the output of the command. # subscription-manager attach --pool=<POOL_ID> | [
"subscription-manager list --consumed | awk '/Subscription Name:/|| /Serial:/|| /Pool ID:/|| /Service Type:/{print} /Service Level:/{printf (\"%s\\n\\n\", USD0)}'",
"subscription-manager remove --serial=<SERIAL>",
"subscription-manager list --available --matches='RHEL for SAP*' | awk '/Subscription Name:/|| /Pool ID:/|| /Service Type:/{print} /Service Level:/{printf (\"%s\\n\\n\", USD0)}'",
"subscription-manager attach --pool=<POOL_ID>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/rhel_for_sap_subscriptions_and_repositories/asmb_enable_rhel-for-sap-subscriptions-and-repositories-9 |
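After the RHEL for SAP Solutions subscription is attached, an E4S-based system is normally locked to a minor release and pointed at the E4S repositories. The sketch below illustrates that follow-up step; the release value and repository names are examples for x86_64 and may differ in your environment, so verify them with `subscription-manager repos --list` first.

```bash
# Lock the system to a specific RHEL 9 minor release (example value).
subscription-manager release --set=9.4

# Enable the Update Services for SAP Solutions (E4S) repositories (x86_64 examples).
subscription-manager repos \
  --enable=rhel-9-for-x86_64-baseos-e4s-rpms \
  --enable=rhel-9-for-x86_64-appstream-e4s-rpms \
  --enable=rhel-9-for-x86_64-sap-solutions-e4s-rpms \
  --enable=rhel-9-for-x86_64-sap-netweaver-e4s-rpms
```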
10.10. Example - Setting up Cascading Geo-replication | 10.10. Example - Setting up Cascading Geo-replication This section provides step-by-step instructions to set up a cascading geo-replication session. This example uses three volumes: master-vol, interimmaster-vol, and slave-vol. Verify that your environment matches the minimum system requirements listed in Section 10.3.3, "Prerequisites" . Determine the appropriate deployment scenario. For more information on deployment scenarios, see Section 10.3.1, "Exploring Geo-replication Deployment Scenarios" . Configure the environment and create a geo-replication session between master-vol and interimmaster-vol. To create a common pem pub file, run the following command on the master node where the key-based SSH authentication connection is configured: Create the geo-replication session using the following command. The push-pem option is needed to perform the necessary pem-file setup on the interimmaster nodes. Verify the status of the created session by running the following command: Configure the meta-volume for geo-replication: For more information on configuring the meta-volume, see Section 10.3.5, "Configuring a Meta-Volume" . Start a geo-replication session between the hosts: This command will start distributed geo-replication on all the nodes that are part of the master volume. If a node that is part of the master volume is down, the command will still be successful. In a replica pair, the geo-replication session will be active on any of the replica nodes, but remain passive on the others. After executing the command, it may take a few minutes for the session to initialize and become stable. Verify the status of the geo-replication session by running the following command: Create a geo-replication session between interimmaster-vol and slave-vol. Create a common pem pub file by running the following command on the interimmaster node where the key-based SSH authentication connection is configured: On the interimmaster node, create the geo-replication session using the following command. The push-pem option is needed to perform the necessary pem-file setup on the slave nodes. Verify the status of the created session by running the following command: Configure the meta-volume for geo-replication: For more information on configuring the meta-volume, see Section 10.3.5, "Configuring a Meta-Volume" . Start a geo-replication session between interimmaster-vol and slave-vol by running the following command: Verify the status of the geo-replication session by running the following command: | [
"gluster system:: execute gsec_create",
"gluster volume geo-replication master-vol interimhost.com::interimmaster-vol create push-pem",
"gluster volume geo-replication master-vol interimhost::interimmaster-vol status",
"gluster volume geo-replication master-vol interimhost.com::interimmaster-vol config use_meta_volume true",
"gluster volume geo-replication master-vol interimhost.com::interimmaster-vol start",
"gluster volume geo-replication master-vol interimhost.com::interimmaster-vol status",
"gluster system:: execute gsec_create",
"gluster volume geo-replication interimmaster-vol slave_host.com::slave-vol create push-pem",
"gluster volume geo-replication interrimmaster-vol slave_host::slave-vol status",
"gluster volume geo-replication interrimmaster-vol slave_host::slave-vol config use_meta_volume true",
"gluster volume geo-replication interrimmaster-vol slave_host.com::slave-vol start",
"gluster volume geo-replication interrimmaster-vol slave_host.com::slave-vol status"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/example_-_setting_up_cascading_geo-replication |
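Because a cascading setup has two independent sessions, it can be convenient to check both legs in one pass. The snippet below is a small convenience wrapper around the status commands shown above, using the example hostnames from this section; adjust the volume and host names to your environment.

```bash
#!/bin/bash
# Report the status of both legs of the cascade: master -> interimmaster -> slave.
for session in "master-vol interimhost.com::interimmaster-vol" \
               "interimmaster-vol slave_host.com::slave-vol"; do
    echo "== geo-replication status for: ${session} =="
    gluster volume geo-replication ${session} status
done
```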
Chapter 9. Monitoring clusters that run on RHOSO | Chapter 9. Monitoring clusters that run on RHOSO You can correlate observability metrics for clusters that run on Red Hat OpenStack Services on OpenShift (RHOSO). By collecting metrics from both environments, you can monitor and troubleshoot issues across the infrastructure and application layers. There are two supported methods for metric correlation for clusters that run on RHOSO: Remote writing to an external Prometheus instance. Collecting data from the OpenShift Container Platform federation endpoint to the RHOSO observability stack. 9.1. Remote writing to an external Prometheus instance Use remote write with both Red Hat OpenStack Services on OpenShift (RHOSO) and OpenShift Container Platform to push their metrics to an external Prometheus instance. Prerequisites You have access to an external Prometheus instance. You have administrative access to RHOSO and your cluster. You have certificates for secure communication with mTLS. Your Prometheus instance is configured for client TLS certificates and has been set up as a remote write receiver. The Cluster Observability Operator is installed on your RHOSO cluster. The monitoring stack for your RHOSO cluster is configured to collect the metrics that you are interested in. Telemetry is enabled in the RHOSO environment. Note To verify that the telemetry service is operating normally, enter the following command: $ oc -n openstack get monitoringstacks metric-storage -o yaml The monitoringstacks CRD indicates whether telemetry is enabled correctly. Procedure Configure your RHOSO management cluster to send metrics to Prometheus: Create a secret that is named mtls-bundle in the openstack namespace that contains HTTPS client certificates for authentication to Prometheus by entering the following command: $ oc --namespace openstack \ create secret generic mtls-bundle \ --from-file=./ca.crt \ --from-file=osp-client.crt \ --from-file=osp-client.key Open the controlplane configuration for editing by running the following command: $ oc -n openstack edit openstackcontrolplane/controlplane With the configuration open, replace the .spec.telemetry.template.metricStorage section so that RHOSO sends metrics to Prometheus. As an example: metricStorage: customMonitoringStack: alertmanagerConfig: disabled: false logLevel: info prometheusConfig: scrapeInterval: 30s remoteWrite: - url: https://external-prometheus.example.com/api/v1/write 1 tlsConfig: ca: secret: name: mtls-bundle key: ca.crt cert: secret: name: mtls-bundle key: ocp-client.crt keySecret: name: mtls-bundle key: ocp-client.key replicas: 2 resourceSelector: matchLabels: service: metricStorage resources: limits: cpu: 500m memory: 512Mi requests: cpu: 100m memory: 256Mi retention: 1d 2 dashboardsEnabled: false dataplaneNetwork: ctlplane enabled: true prometheusTls: {} 1 Replace this URL with the URL of your Prometheus instance. 2 Set a retention period. Optionally, you can reduce retention for local metrics because of external collection. Configure the tenant cluster on which your workloads run to send metrics to Prometheus: Create a cluster monitoring config map as a YAML file. The map must include a remote write configuration and cluster identifiers.
As an example: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 1d 1 remoteWrite: - url: "https://external-prometheus.example.com/api/v1/write" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ targetLabel: cluster_id action: replace tlsConfig: ca: secret: name: mtls-bundle key: ca.crt cert: secret: name: mtls-bundle key: ocp-client.crt keySecret: name: mtls-bundle key: ocp-client.key 1 Set a retention period. Optionally, you can reduce retention for local metrics because of external collection. Save the config map as a file called cluster-monitoring-config.yaml . Create a secret that is named mtls-bundle in the openshift-monitoring namespace that contains HTTPS client certificates for authentication to Prometheus by entering the following command: $ oc --namespace openshift-monitoring \ create secret generic mtls-bundle \ --from-file=./ca.crt \ --from-file=ocp-client.crt \ --from-file=ocp-client.key Apply the cluster monitoring configuration by running the following command: $ oc apply -f cluster-monitoring-config.yaml After the changes propagate, you can see aggregated metrics in your external Prometheus instance. Additional resources Configuring remote write storage Adding cluster ID labels to metrics 9.2. Collecting cluster metrics from the federation endpoint You can employ the federation endpoint of your OpenShift Container Platform cluster to make metrics available to a Red Hat OpenStack Services on OpenShift (RHOSO) cluster for pull-based monitoring. Prerequisites You have administrative access to RHOSO and the tenant cluster that is running on it. Telemetry is enabled in the RHOSO environment. The Cluster Observability Operator is installed on your cluster. The monitoring stack for your cluster is configured. Your cluster has its federation endpoint exposed. Procedure Connect to your cluster by using a username and password; do not log in by using a kubeconfig file that was generated by the installation program. To retrieve a token from the OpenShift Container Platform cluster, run the following command on it: $ oc whoami -t Make the token available as a secret in the openstack namespace in the RHOSO management cluster by running the following command: $ oc -n openstack create secret generic ocp-federated --from-literal=token=<the_token_fetched_previously> To get the Prometheus federation route URL from your OpenShift Container Platform cluster, run the following command: $ oc -n openshift-monitoring get route prometheus-k8s-federate -ojsonpath={'.status.ingress[].host'} Write a manifest for a scrape configuration and save it as a file called cluster-scrape-config.yaml . As an example: apiVersion: monitoring.rhobs/v1alpha1 kind: ScrapeConfig metadata: labels: service: metricStorage name: sos1-federated namespace: openstack spec: params: 'match[]': - '{__name__=~"kube_node_info|kube_persistentvolume_info|cluster:master_nodes"}' 1 metricsPath: '/federate' authorization: type: Bearer credentials: name: ocp-federated 2 key: token scheme: HTTPS # or HTTP scrapeInterval: 30s 3 staticConfigs: - targets: - prometheus-k8s-federate-openshift-monitoring.apps.openshift.example 4 1 Add metrics here. In this example, only the metrics kube_node_info , kube_persistentvolume_info , and cluster:master_nodes are requested. 2 Insert the previously generated secret name here.
3 Limit scraping to fewer than 1000 samples for each request with a maximum frequency of once every 30 seconds. 4 Insert the URL you fetched previously here. If the endpoint is HTTPS and uses a custom certificate authority, add a tlsConfig section after it. While connected to the RHOSO management cluster, apply the manifest by running the following command: $ oc apply -f cluster-scrape-config.yaml After the config propagates, the cluster metrics are accessible for querying in the OpenShift Container Platform UI in RHOSO. Additional resources Querying metrics by using the federation endpoint for Prometheus 9.3. Available metrics for clusters that run on RHOSO To query metrics and identify resources across the stack, there are helper metrics that establish a correlation between Red Hat OpenStack Services on OpenShift (RHOSO) infrastructure resources and their representations in the tenant OpenShift Container Platform cluster. To map nodes with RHOSO compute instances, in the metric kube_node_info : node is the Kubernetes node name. provider_id contains the identifier of the corresponding compute service instance. To map persistent volumes with RHOSO block storage or shared filesystems shares, in the metric kube_persistentvolume_info : persistentvolume is the volume name. csi_volume_handle is the block storage volume or share identifier. By default, the compute machines that back the cluster control plane nodes are created in a server group with a soft anti-affinity policy. As a result, the compute service creates them on separate hypervisors on a best-effort basis. However, if the state of the RHOSO cluster is not appropriate for this distribution, the machines are created anyway. In combination with the default soft anti-affinity policy, you can configure an alert that activates when a hypervisor hosts more than one control plane node of a given cluster to highlight the degraded level of high availability. As an example, this PromQL query returns the number of OpenShift Container Platform master nodes per RHOSP host: sum by (vm_instance) ( group by (vm_instance, resource) (ceilometer_cpu) / on (resource) group_right(vm_instance) ( group by (node, resource) ( label_replace(kube_node_info, "resource", "$1", "system_uuid", "(.+)") ) / on (node) group_left group by (node) ( cluster:master_nodes ) ) ) | [
"oc -n openstack get monitoringstacks metric-storage -o yaml",
"oc --namespace openstack create secret generic mtls-bundle --from-file=./ca.crt --from-file=osp-client.crt --from-file=osp-client.key",
"oc -n openstack edit openstackcontrolplane/controlplane",
"metricStorage: customMonitoringStack: alertmanagerConfig: disabled: false logLevel: info prometheusConfig: scrapeInterval: 30s remoteWrite: - url: https://external-prometheus.example.com/api/v1/write 1 tlsConfig: ca: secret: name: mtls-bundle key: ca.crt cert: secret: name: mtls-bundle key: ocp-client.crt keySecret: name: mtls-bundle key: ocp-client.key replicas: 2 resourceSelector: matchLabels: service: metricStorage resources: limits: cpu: 500m memory: 512Mi requests: cpu: 100m memory: 256Mi retention: 1d 2 dashboardsEnabled: false dataplaneNetwork: ctlplane enabled: true prometheusTls: {}",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 1d 1 remoteWrite: - url: \"https://external-prometheus.example.com/api/v1/write\" writeRelabelConfigs: - sourceLabels: - __tmp_openshift_cluster_id__ targetLabel: cluster_id action: replace tlsConfig: ca: secret: name: mtls-bundle key: ca.crt cert: secret: name: mtls-bundle key: ocp-client.crt keySecret: name: mtls-bundle key: ocp-client.key",
"oc --namespace openshift-monitoring create secret generic mtls-bundle --from-file=./ca.crt --from-file=ocp-client.crt --from-file=ocp-client.key",
"oc apply -f cluster-monitoring-config.yaml",
"oc whoami -t",
"oc -n openstack create secret generic ocp-federated --from-literal=token=<the_token_fetched_previously>",
"oc -n openshift-monitoring get route prometheus-k8s-federate -ojsonpath={'.status.ingress[].host'}",
"apiVersion: monitoring.rhobs/v1alpha1 kind: ScrapeConfig metadata: labels: service: metricStorage name: sos1-federated namespace: openstack spec: params: 'match[]': - '{__name__=~\"kube_node_info|kube_persistentvolume_info|cluster:master_nodes\"}' 1 metricsPath: '/federate' authorization: type: Bearer credentials: name: ocp-federated 2 key: token scheme: HTTPS # or HTTP scrapeInterval: 30s 3 staticConfigs: - targets: - prometheus-k8s-federate-openshift-monitoring.apps.openshift.example 4",
"oc apply -f cluster-scrape-config.yaml",
"sum by (vm_instance) ( group by (vm_instance, resource) (ceilometer_cpu) / on (resource) group_right(vm_instance) ( group by (node, resource) ( label_replace(kube_node_info, \"resource\", \"USD1\", \"system_uuid\", \"(.+)\") ) / on (node) group_left group by (node) ( cluster:master_nodes ) ) )"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/monitoring/shiftstack-prometheus-configuration |
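To confirm that metrics actually arrive at the external Prometheus instance (whether pushed by remote write or pulled through federation), you can query its HTTP API directly. The sketch below assumes the example URL and mTLS file names used above and that the jq tool is installed; it is illustrative only.

```bash
# Query the external Prometheus instance for one of the correlated metrics and
# count how many series it returns; a non-zero count indicates data is flowing.
curl -sG 'https://external-prometheus.example.com/api/v1/query' \
  --cacert ca.crt \
  --cert ocp-client.crt \
  --key ocp-client.key \
  --data-urlencode 'query=kube_node_info' | jq '.data.result | length'
```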
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/automation_mesh_for_managed_cloud_or_operator_environments/providing-feedback |
function::local_clock_ns | function::local_clock_ns Name function::local_clock_ns - Number of nanoseconds on the local cpu's clock Synopsis Arguments None Description This function returns the number of nanoseconds on the local cpu's clock. This is always monotonic comparing on the same cpu, but may have some drift between cpus (within about a jiffy). | [
"local_clock_ns:long()"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-local-clock-ns |
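As a quick illustration of how this tapset function behaves, the following one-liner samples the local CPU clock once per second; it assumes SystemTap is installed and able to compile probes on the host.

```bash
# Print local_clock_ns once per second, five times, then exit.
stap -e 'global n; probe timer.s(1) { printf("local_clock_ns: %d\n", local_clock_ns()); n++; if (n >= 5) exit() }'
```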
Providing feedback on Red Hat Ceph Storage documentation | Providing feedback on Red Hat Ceph Storage documentation We appreciate your input on our documentation. Please let us know how we could make it better. To do so, create a Bugzilla ticket: Go to the Bugzilla website. In the Component drop-down, select Documentation . In the Sub-Component drop-down, select the appropriate sub-component. Select the appropriate version of the document. Fill in the Summary and Description fields with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Optional: Add an attachment, if any. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/release_notes/providing-feedback-on-red-hat-ceph-storage-documentation |
Chapter 7. PriorityLevelConfiguration [flowcontrol.apiserver.k8s.io/v1beta3] | Chapter 7. PriorityLevelConfiguration [flowcontrol.apiserver.k8s.io/v1beta3] Description PriorityLevelConfiguration represents the configuration of a priority level. Type object 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PriorityLevelConfigurationSpec specifies the configuration of a priority level. status object PriorityLevelConfigurationStatus represents the current state of a "request-priority". 7.1.1. .spec Description PriorityLevelConfigurationSpec specifies the configuration of a priority level. Type object Required type Property Type Description exempt object ExemptPriorityLevelConfiguration describes the configurable aspects of the handling of exempt requests. In the mandatory exempt configuration object the values in the fields here can be modified by authorized users, unlike the rest of the spec . limited object LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues: - How are requests for this priority level limited? - What should be done with requests that exceed the limit? type string type indicates whether this priority level is subject to limitation on request execution. A value of "Exempt" means that requests of this priority level are not subject to a limit (and thus are never queued) and do not detract from the capacity made available to other priority levels. A value of "Limited" means that (a) requests of this priority level are subject to limits and (b) some of the server's limited capacity is made available exclusively to this priority level. Required. 7.1.2. .spec.exempt Description ExemptPriorityLevelConfiguration describes the configurable aspects of the handling of exempt requests. In the mandatory exempt configuration object the values in the fields here can be modified by authorized users, unlike the rest of the spec . Type object Property Type Description lendablePercent integer lendablePercent prescribes the fraction of the level's NominalCL that can be borrowed by other priority levels. This value of this field must be between 0 and 100, inclusive, and it defaults to 0. The number of seats that other levels can borrow from this level, known as this level's LendableConcurrencyLimit (LendableCL), is defined as follows. LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 ) nominalConcurrencyShares integer nominalConcurrencyShares (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats nominally reserved for this priority level. This DOES NOT limit the dispatching from this priority level but affects the other priority levels through the borrowing mechanism. 
The server's concurrency limit (ServerCL) is divided among all the priority levels in proportion to their NCS values: NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) sum_ncs = sum[priority level k] NCS(k) Bigger numbers mean a larger nominal concurrency limit, at the expense of every other priority level. This field has a default value of zero. 7.1.3. .spec.limited Description LimitedPriorityLevelConfiguration specifies how to handle requests that are subject to limits. It addresses two issues: - How are requests for this priority level limited? - What should be done with requests that exceed the limit? Type object Property Type Description borrowingLimitPercent integer borrowingLimitPercent , if present, configures a limit on how many seats this priority level can borrow from other priority levels. The limit is known as this level's BorrowingConcurrencyLimit (BorrowingCL) and is a limit on the total number of seats that this level may borrow at any one time. This field holds the ratio of that limit to the level's nominal concurrency limit. When this field is non-nil, it must hold a non-negative integer and the limit is calculated as follows. BorrowingCL(i) = round( NominalCL(i) * borrowingLimitPercent(i)/100.0 ) The value of this field can be more than 100, implying that this priority level can borrow a number of seats that is greater than its own nominal concurrency limit (NominalCL). When this field is left nil , the limit is effectively infinite. lendablePercent integer lendablePercent prescribes the fraction of the level's NominalCL that can be borrowed by other priority levels. The value of this field must be between 0 and 100, inclusive, and it defaults to 0. The number of seats that other levels can borrow from this level, known as this level's LendableConcurrencyLimit (LendableCL), is defined as follows. LendableCL(i) = round( NominalCL(i) * lendablePercent(i)/100.0 ) limitResponse object LimitResponse defines how to handle requests that can not be executed right now. nominalConcurrencyShares integer nominalConcurrencyShares (NCS) contributes to the computation of the NominalConcurrencyLimit (NominalCL) of this level. This is the number of execution seats available at this priority level. This is used both for requests dispatched from this priority level as well as requests dispatched from other priority levels borrowing seats from this level. The server's concurrency limit (ServerCL) is divided among the Limited priority levels in proportion to their NCS values: NominalCL(i) = ceil( ServerCL * NCS(i) / sum_ncs ) sum_ncs = sum[priority level k] NCS(k) Bigger numbers mean a larger nominal concurrency limit, at the expense of every other priority level. This field has a default value of 30. 7.1.4. .spec.limited.limitResponse Description LimitResponse defines how to handle requests that can not be executed right now. Type object Required type Property Type Description queuing object QueuingConfiguration holds the configuration parameters for queuing type string type is "Queue" or "Reject". "Queue" means that requests that can not be executed upon arrival are held in a queue until they can be executed or a queuing limit is reached. "Reject" means that requests that can not be executed upon arrival are rejected. Required. 7.1.5. 
.spec.limited.limitResponse.queuing Description QueuingConfiguration holds the configuration parameters for queuing Type object Property Type Description handSize integer handSize is a small positive number that configures the shuffle sharding of requests into queues. When enqueuing a request at this priority level the request's flow identifier (a string pair) is hashed and the hash value is used to shuffle the list of queues and deal a hand of the size specified here. The request is put into one of the shortest queues in that hand. handSize must be no larger than queues , and should be significantly smaller (so that a few heavy flows do not saturate most of the queues). See the user-facing documentation for more extensive guidance on setting this field. This field has a default value of 8. queueLengthLimit integer queueLengthLimit is the maximum number of requests allowed to be waiting in a given queue of this priority level at a time; excess requests are rejected. This value must be positive. If not specified, it will be defaulted to 50. queues integer queues is the number of queues for this priority level. The queues exist independently at each apiserver. The value must be positive. Setting it to 1 effectively precludes shufflesharding and thus makes the distinguisher method of associated flow schemas irrelevant. This field has a default value of 64. 7.1.6. .status Description PriorityLevelConfigurationStatus represents the current state of a "request-priority". Type object Property Type Description conditions array conditions is the current state of "request-priority". conditions[] object PriorityLevelConfigurationCondition defines the condition of priority level. 7.1.7. .status.conditions Description conditions is the current state of "request-priority". Type array 7.1.8. .status.conditions[] Description PriorityLevelConfigurationCondition defines the condition of priority level. Type object Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another. message string message is a human-readable message indicating details about last transition. reason string reason is a unique, one-word, CamelCase reason for the condition's last transition. status string status is the status of the condition. Can be True, False, Unknown. Required. type string type is the type of the condition. Required. 7.2. API endpoints The following API endpoints are available: /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations DELETE : delete collection of PriorityLevelConfiguration GET : list or watch objects of kind PriorityLevelConfiguration POST : create a PriorityLevelConfiguration /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/prioritylevelconfigurations GET : watch individual changes to a list of PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead. /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations/{name} DELETE : delete a PriorityLevelConfiguration GET : read the specified PriorityLevelConfiguration PATCH : partially update the specified PriorityLevelConfiguration PUT : replace the specified PriorityLevelConfiguration /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/prioritylevelconfigurations/{name} GET : watch changes to an object of kind PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 
/apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations/{name}/status GET : read status of the specified PriorityLevelConfiguration PATCH : partially update status of the specified PriorityLevelConfiguration PUT : replace status of the specified PriorityLevelConfiguration 7.2.1. /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations HTTP method DELETE Description delete collection of PriorityLevelConfiguration Table 7.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PriorityLevelConfiguration Table 7.3. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfigurationList schema 401 - Unauthorized Empty HTTP method POST Description create a PriorityLevelConfiguration Table 7.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.5. Body parameters Parameter Type Description body PriorityLevelConfiguration schema Table 7.6. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 202 - Accepted PriorityLevelConfiguration schema 401 - Unauthorized Empty 7.2.2. /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/prioritylevelconfigurations HTTP method GET Description watch individual changes to a list of PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead. Table 7.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.3. /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations/{name} Table 7.8. Global path parameters Parameter Type Description name string name of the PriorityLevelConfiguration HTTP method DELETE Description delete a PriorityLevelConfiguration Table 7.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PriorityLevelConfiguration Table 7.11. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PriorityLevelConfiguration Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PriorityLevelConfiguration Table 7.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.15. Body parameters Parameter Type Description body PriorityLevelConfiguration schema Table 7.16. 
HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty 7.2.4. /apis/flowcontrol.apiserver.k8s.io/v1beta3/watch/prioritylevelconfigurations/{name} Table 7.17. Global path parameters Parameter Type Description name string name of the PriorityLevelConfiguration HTTP method GET Description watch changes to an object of kind PriorityLevelConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.5. /apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations/{name}/status Table 7.19. Global path parameters Parameter Type Description name string name of the PriorityLevelConfiguration HTTP method GET Description read status of the specified PriorityLevelConfiguration Table 7.20. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified PriorityLevelConfiguration Table 7.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.22. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified PriorityLevelConfiguration Table 7.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.24. Body parameters Parameter Type Description body PriorityLevelConfiguration schema Table 7.25. HTTP responses HTTP code Reponse body 200 - OK PriorityLevelConfiguration schema 201 - Created PriorityLevelConfiguration schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/schedule_and_quota_apis/prioritylevelconfiguration-flowcontrol-apiserver-k8s-io-v1beta3 |
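As a concrete illustration of the schema described above, the following sketch creates a simple Limited priority level through the same v1beta3 API and then reads it back; the object name is an example, and the field values mirror the documented defaults.

```bash
# Create an example PriorityLevelConfiguration with queued limiting.
oc apply -f - <<'EOF'
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: PriorityLevelConfiguration
metadata:
  name: example-priority-level
spec:
  type: Limited
  limited:
    nominalConcurrencyShares: 30
    lendablePercent: 0
    limitResponse:
      type: Queue
      queuing:
        queues: 64
        handSize: 8
        queueLengthLimit: 50
EOF

# Read the object back, including its status conditions.
oc get prioritylevelconfiguration.flowcontrol.apiserver.k8s.io example-priority-level -o yaml
```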
Preface | Preface As a data scientist, you can enhance your data science projects on OpenShift AI by building portable machine learning (ML) workflows with data science pipelines, using Docker containers. This enables you to standardize and automate machine learning workflows to enable you to develop and deploy your data science models. For example, the steps in a machine learning workflow might include items such as data extraction, data processing, feature extraction, model training, model validation, and model serving. Automating these activities enables your organization to develop a continuous process of retraining and updating a model based on newly received data. This can help address challenges related to building an integrated machine learning deployment and continuously operating it in production. You can also use the Elyra JupyterLab extension to create and run data science pipelines within JupyterLab. For more information, see Working with pipelines in JupyterLab . From OpenShift AI version 2.9, data science pipelines are based on KubeFlow Pipelines (KFP) version 2.0 . For more information, see Migrating to data science pipelines 2.0 . To use a data science pipeline in OpenShift AI, you need the following components: Pipeline server : A server that is attached to your data science project and hosts your data science pipeline. Pipeline : A pipeline defines the configuration of your machine learning workflow and the relationship between each component in the workflow. Pipeline code: A definition of your pipeline in a YAML file. Pipeline graph: A graphical illustration of the steps executed in a pipeline run and the relationship between them. Pipeline experiment : A workspace where you can try different configurations of your pipelines. You can use experiments to organize your runs into logical groups. Archived pipeline experiment: An archived pipeline experiment. Pipeline artifact: An output artifact produced by a pipeline component. Pipeline execution: The execution of a task in a pipeline. Pipeline run : An execution of your pipeline. Active run: A pipeline run that is executing, or stopped. Scheduled run: A pipeline run that is scheduled to execute at least once. Archived run: An archived pipeline run. This feature is based on Kubeflow Pipelines 2.0. Use the latest Kubeflow Pipelines 2.0 SDK to build your data science pipeline in Python code. After you have built your pipeline, use the SDK to compile it into an Intermediate Representation (IR) YAML file. The OpenShift AI user interface enables you to track and manage pipelines, experiments, and pipeline runs. To view a record of previously executed, scheduled, and archived runs, you can go to Data Science Pipelines Runs , or you can select an experiment from the Experiments Experiments and Runs to access all of its pipeline runs. You can manage incremental changes to pipelines in OpenShift AI by using versioning. This allows you to develop and deploy pipelines iteratively, preserving a record of your changes. You can store your pipeline artifacts in an S3-compatible object storage bucket so that you do not consume local storage. To do this, you must first configure write access to your S3 bucket on your storage account. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_with_data_science_pipelines/pr01 |
Chapter 19. Squid Caching Proxy | Chapter 19. Squid Caching Proxy Squid is a high-performance proxy caching server for web clients, supporting FTP, Gopher, and HTTP data objects. It reduces bandwidth and improves response times by caching and reusing frequently-requested web pages. [17] In Red Hat Enterprise Linux, the squid package provides the Squid Caching Proxy. Enter the following command to see if the squid package is installed: If it is not installed and you want to use squid, use the yum utility as root to install it: 19.1. Squid Caching Proxy and SELinux When SELinux is enabled, Squid runs confined by default. Confined processes run in their own domains, and are separated from other confined processes. If a confined process is compromised by an attacker, depending on SELinux policy configuration, an attacker's access to resources and the possible damage they can do is limited. The following example demonstrates the Squid processes running in their own domain. This example assumes the squid package is installed: Run the getenforce command to confirm SELinux is running in enforcing mode: The command returns Enforcing when SELinux is running in enforcing mode. Enter the following command as the root user to start the squid daemon: Confirm that the service is running. The output should include the information below (only the time stamp will differ): Enter the following command to view the squid processes: The SELinux context associated with the squid processes is system_u:system_r:squid_t:s0 . The second last part of the context, squid_t , is the type. A type defines a domain for processes and a type for files. In this case, the Squid processes are running in the squid_t domain. SELinux policy defines how processes running in confined domains, such as squid_t , interact with files, other processes, and the system in general. Files must be labeled correctly to allow squid access to them. When the /etc/squid/squid.conf file is configured so squid listens on a port other than the default TCP ports 3128, 3401 or 4827, the semanage port command must be used to add the required port number to the SELinux policy configuration. The following example demonstrates configuring squid to listen on a port that is not initially defined in SELinux policy configuration for it, and, as a consequence, the server failing to start. This example also demonstrates how to then configure the SELinux system to allow the daemon to successfully listen on a non-standard port that is not already defined in the policy. This example assumes the squid package is installed. Run each command in the example as the root user: Confirm the squid daemon is not running: If the output differs, stop the process: Enter the following command to view the ports SELinux allows squid to listen on: Edit /etc/squid/squid.conf as root. Configure the http_port option so it lists a port that is not configured in SELinux policy configuration for squid . In this example, the daemon is configured to listen on port 10000: Run the setsebool command to make sure the squid_connect_any Boolean is set to off. 
This ensures squid is only permitted to operate on specific ports: Start the squid daemon: An SELinux denial message similar to the following is logged: For SELinux to allow squid to listen on port 10000, as used in this example, the following command is required: Start squid again and have it listen on the new port: Now that SELinux has been configured to allow Squid to listen on a non-standard port (TCP 10000 in this example), it starts successfully on this port. [17] See the Squid Caching Proxy project page for more information. | [
"~]USD rpm -q squid package squid is not installed",
"~]# yum install squid",
"~]USD getenforce Enforcing",
"~]# systemctl start squid.service",
"~]# systemctl status squid.service squid.service - Squid caching proxy Loaded: loaded (/usr/lib/systemd/system/squid.service; disabled) Active: active (running) since Mon 2013-08-05 14:45:53 CEST; 2s ago",
"~]USD ps -eZ | grep squid system_u:system_r:squid_t:s0 27018 ? 00:00:00 squid system_u:system_r:squid_t:s0 27020 ? 00:00:00 log_file_daemon",
"~]# systemctl status squid.service squid.service - Squid caching proxy Loaded: loaded (/usr/lib/systemd/system/squid.service; disabled) Active: inactive (dead)",
"~]# systemctl stop squid.service",
"~]# semanage port -l | grep -w -i squid_port_t squid_port_t tcp 3401, 4827 squid_port_t udp 3401, 4827",
"Squid normally listens to port 3128 http_port 10000",
"~]# setsebool -P squid_connect_any 0",
"~]# systemctl start squid.service Job for squid.service failed. See 'systemctl status squid.service' and 'journalctl -xn' for details.",
"localhost setroubleshoot: SELinux is preventing the squid (squid_t) from binding to port 10000. For complete SELinux messages. run sealert -l 97136444-4497-4fff-a7a7-c4d8442db982",
"~]# semanage port -a -t squid_port_t -p tcp 10000",
"~]# systemctl start squid.service"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-squid_caching_proxy |
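To double-check the port change described above, or to revert it later, semanage can list and delete the custom mapping. The following is a small hedged sketch using the same example port.

```bash
# Confirm that TCP port 10000 is now associated with squid_port_t.
semanage port -l | grep -w squid_port_t

# If the non-standard port is no longer needed, remove the custom mapping again.
semanage port -d -t squid_port_t -p tcp 10000
```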
Chapter 1. Introduction to Hammer | Chapter 1. Introduction to Hammer Hammer is a powerful command-line tool provided with Red Hat Satellite 6. You can use Hammer to configure and manage a Red Hat Satellite Server either through CLI commands or automation in shell scripts. Hammer also provides an interactive shell. Hammer compared to Satellite web UI Compared to navigating the web UI, using Hammer can result in much faster interaction with the Satellite Server, as common shell features such as environment variables and aliases are at your disposal. You can also incorporate Hammer commands into reusable scripts for automating tasks of various complexity. Output from Hammer commands can be redirected to other tools, which allows for integration with your existing environment. You can issue Hammer commands directly on the base operating system running Red Hat Satellite. Access to Satellite Server's base operating system is required to issue Hammer commands, which can limit the number of potential users compared to the web UI. Although the parity between Hammer and the web UI is almost complete, the web UI has development priority and can be ahead especially for newly introduced features. Hammer compared to Satellite API For many tasks, both Hammer and Satellite API are equally applicable. Hammer can be used as a human friendly interface to Satellite API, for example to test responses to API calls before applying them in a script (use the -d option to inspect API calls issued by Hammer, for example hammer -d organization list ). Changes in the API are automatically reflected in Hammer, while scripts using the API directly have to be updated manually. In the background, each Hammer command first establishes a binding to the API, then sends a request. This can have performance implications when executing a large number of Hammer commands in sequence. In contrast, a script communicating directly with the API establishes the binding only once. See the API Guide for more information. 1.1. Getting help View the full list of hammer options and subcommands by executing: Use --help to inspect any subcommand, for example: You can search the help output using grep , or redirect it to a text viewer, for example: 1.2. Authentication A Satellite user must prove their identity to Red Hat Satellite when entering hammer commands. Hammer commands can be run manually or automatically. In either case, hammer requires Satellite credentials for authentication. There are three methods of hammer authentication: Hammer authentication session Storing credentials in the hammer configuration file Providing credentials with each hammer command The hammer configuration file method is recommended when running commands automatically. For example, running Satellite maintenance commands from a cron job. When running commands manually, Red Hat recommends using the hammer authentication session and providing credentials with each command. 1.2.1. Hammer authentication session The hammer authentication session is a cache that stores your credentials, and you have to provide them only once, at the beginning of the session. This method is suited to running several hammer commands in succession, for example a script containing hammer commands. In this scenario, you enter your Satellite credentials once, and the script runs as expected. By using the hammer authentication session, you avoid storing your credentials in the script itself and in the ~/.hammer/cli.modules.d/foreman.yml hammer configuration file. 
See the instructions on how to use the sessions: To enable sessions, add :use_sessions: true to the ~/.hammer/cli.modules.d/foreman.yml file: Note that if you enable sessions, credentials stored in the configuration file will be ignored. To start a session, enter the following command: You are prompted for your Satellite credentials, and logged in. You will not be prompted for the credentials again until your session expires. The default length of a session is 60 minutes. You can change the time to suit your preference. For example, to change it to 30 minutes, enter the following command: To see the current status of the session, enter the following command: To end the session, enter the following command: 1.2.2. Hammer configuration file If you ran the Satellite installation with --foreman-initial-admin-username and --foreman-initial-admin-password options, credentials you entered are stored in the ~/.hammer/cli.modules.d/foreman.yml configuration file, and hammer does not prompt for your credentials. You can also add your credentials to the ~/.hammer/cli.modules.d/foreman.yml configuration file manually: Important Use only spaces for indentation in hammer configuration files. Do not use tabs for indentation in hammer configuration files. 1.2.3. Command line If you do not have your Satellite credentials saved in the ~/.hammer/cli.modules.d/foreman.yml configuration file, hammer prompts you for them each time you enter a command. You can specify your credentials when executing a command as follows: Note Examples in this guide assume that you have saved credentials in the configuration file, or are using a hammer authentication session. 1.3. Using standalone hammer You can install hammer on a host running Red Hat Enterprise Linux 8 that has no Satellite Server installed, and use it to connect the host to a remote Satellite. Prerequisites Ensure that you register the host to Satellite Server or Capsule Server. For more information, see Registering Hosts in Managing hosts . Ensure that you synchronize the following repositories on Satellite Server or Capsule Server. For more information, see Synchronizing Repositories in Managing content . rhel-8-for-x86_64-baseos-rpms rhel-8-for-x86_64-appstream-rpms satellite-utils-6.15-for-rhel-8-x86_64-rpms Procedure On a host, complete the following steps to install hammer : Enable the required repositories: Enable the Satellite Utils module: Install hammer : Edit the :host: entry in the /etc/hammer/cli.modules.d/foreman.yml file to include the Satellite IP address or FQDN. 1.4. Setting a default organization and location Many hammer commands are organization specific. You can set a default organization and location for hammer commands so that you do not have to specify them every time with the --organization and --location options. Specifying a default organization is useful when you mostly manage a single organization, as it makes your commands shorter. However, when you switch to a different organization, you must use hammer with the --organization option to specify it. Procedure To set a default organization and location, complete the following steps: To set a default organization, enter the following command: You can find the name of your organization with the hammer organization list command. Optional: To set a default location, enter the following command: You can find the name of your location with the hammer location list command. To verify the currently specified default settings, enter the following command: 1.5. 
Configuring Hammer The default location for global hammer configuration is: /etc/hammer/cli_config.yml for general hammer settings /etc/hammer/cli.modules.d/ for CLI module configuration files You can set user specific directives for hammer (in ~/.hammer/cli_config.yml ) as well as for CLI modules (in respective .yml files under ~/.hammer/cli.modules.d/ ). To see the order in which configuration files are loaded, as well as versions of loaded modules, use: Note Loading configuration for many CLI modules can slow down the execution of hammer commands. In such a case, consider disabling CLI modules that are not regularly used. Apart from saving credentials as described in Section 1.2, "Authentication" , you can set several other options in the ~/.hammer/ configuration directory. For example, you can change the default log level and set log rotation with the following directives in ~/.hammer/cli_config.yml . These directives affect only the current user and are not applied globally. Similarly, you can configure user interface settings. For example, set the number of entries displayed per request in the Hammer output by changing the following line: This setting is an equivalent of the --per-page Hammer option. 1.6. Configuring Hammer logging You can set hammer to log debugging information for various Satellite components. You can set debug or normal configuration options for all Satellite components. Note After changing hammer's logging behavior, you must restart Satellite services. To set debug level for all components, use the following command: To set production level logging, use the following command: To list the currently recognized components, that you can set logging for: To list all available logging options: 1.7. Invoking the Hammer shell You can issue hammer commands through the interactive shell. To invoke the shell, issue the following command: In the shell, you can enter sub-commands directly without typing "hammer", which can be useful for testing commands before using them in a script. To exit the shell, type exit or press Ctrl + D . 1.8. Generating formatted output You can modify the default formatting of the output of hammer commands to simplify the processing of this output by other command line tools and applications. For example, to list organizations in a CSV format with a custom separator (in this case a semicolon), use the following command: Output in CSV format is useful for example when you need to parse IDs and use them in a for loop. Several other formatting options are available with the --output option: Replace output_format with one of: table - generates output in the form of a human readable table (default). base - generates output in the form of key-value pairs. yaml - generates output in the YAML format. csv - generates output in the Comma Separated Values format. To define a custom separator, use the --csv and --csv-separator options instead. json - generates output in the JavaScript Object Notation format. silent - suppresses the output. 1.9. Hiding header output from Hammer commands When you use any hammer command, you have the option of hiding headers from the output. If you want to pipe or use the output in custom scripts, hiding the output is useful. To hide the header output, add the --no-headers option to any hammer command. 1.10. Using JSON for complex parameters JSON is the preferred way to describe complex parameters. An example of JSON formatted content appears below: 1.11. 
Troubleshooting with Hammer You can use the hammer ping command to check the status of core Satellite services. Together with the satellite-maintain service status command, this can help you to diagnose and troubleshoot Satellite issues. If all services are running as expected, the output looks as follows: | [
"USD hammer --help",
"USD hammer organization --help",
"USD hammer | less",
":foreman: :use_sessions: true",
"hammer auth login",
"hammer settings set --name idle_timeout --value 30 Setting [idle_timeout] updated to [30]",
"hammer auth status",
"hammer auth logout",
":foreman: :username: ' username ' :password: ' password '",
"USD hammer -u username -p password subcommands",
"subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=satellite-utils-6.15-for-rhel-8-x86_64-rpms",
"dnf module enable satellite-utils:el8",
"{package-install} rubygem-hammer_cli_katello",
":host: 'https:// satellite.example.com '",
"hammer defaults add --param-name organization --param-value \"Your_Organization\"",
"hammer defaults add --param-name location --param-value \"Your_Location\"",
"hammer defaults list",
"hammer -d --version",
":log_level: 'warning' :log_size: 5 #in MB",
":per_page: 30",
"satellite-maintain service restart",
"hammer admin logging --all --level-debug satellite-maintain service restart",
"hammer admin logging --all --level-production satellite-maintain service restart",
"hammer admin logging --list",
"hammer admin logging --help Usage: hammer admin logging [OPTIONS]",
"hammer shell",
"hammer --csv --csv-separator \";\" organization list",
"hammer --output output_format organization list",
"hammer compute-profile values create --compute-profile-id 22 --compute-resource-id 1 --compute-attributes= '{ \"cpus\": 2, \"corespersocket\": 2, \"memory_mb\": 4096, \"firmware\": \"efi\", \"resource_pool\": \"Resources\", \"cluster\": \"Example_Cluster\", \"guest_id\": \"rhel8\", \"path\": \"/Datacenters/EXAMPLE/vm/\", \"hardware_version\": \"Default\", \"memoryHotAddEnabled\": 0, \"cpuHotAddEnabled\": 0, \"add_cdrom\": 0, \"boot_order\": [ \"disk\", \"network\" ], \"scsi_controllers\":[ { \"type\": \"ParaVirtualSCSIController\", \"key\":1000 }, { \"type\": \"ParaVirtualSCSIController\", \"key\":1001 }it ] }'",
"hammer ping candlepin: Status: ok Server Response: Duration: 22ms candlepin_auth: Status: ok Server Response: Duration: 17ms pulp: Status: ok Server Response: Duration: 41ms pulp_auth: Status: ok Server Response: Duration: 23ms foreman_tasks: Status: ok Server Response: Duration: 33ms"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/hammer_cli_guide/chap-cli_guide-introduction_to_hammer |
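Building on the --csv and --no-headers options covered above, one possible way to loop over organization IDs in a shell script is sketched below; it assumes the default comma separator, that the ID is the first CSV column, and that hammer organization info is the subcommand you want to run for each ID:

# Collect organization IDs and print details for each one.
for id in $(hammer --no-headers --csv organization list | cut -d ',' -f 1); do
  hammer organization info --id "$id"
done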
Chapter 3. Getting started | Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites You must complete the installation procedure for your environment. You must have an AMQP 1.0 message broker listening for connections on interface localhost and port 5672 . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named examples . For more information, see Creating a queue . 3.2. Running Hello World The Hello World example creates a connection to the broker, sends a message containing a greeting to the examples queue, and receives it back. On success, it prints the received message to the console. Change to the examples directory and run the helloworld.rb example. USD cd /usr/share/proton/examples/ruby/ USD ruby helloworld.rb amqp://127.0.0.1 examples Hello World! | [
"cd /usr/share/proton/examples/ruby/ ruby helloworld.rb amqp://127.0.0.1 examples Hello World!"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_ruby_client/getting_started |
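The same example accepts the broker URL and queue name as arguments, so it can point at a remote broker instead of the local one; the host name below is only a placeholder, and the examples queue must already exist on that broker:

$ ruby helloworld.rb amqp://broker.example.com:5672 examples
Hello World!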
Chapter 6. Managing pipeline runs | Chapter 6. Managing pipeline runs Using Pipelines as Code, you can create pipelines in your code repository and run these pipelines. 6.1. Parameters and annotations in a Pipelines as Code pipeline run To run pipelines using Pipelines as Code, you can create pipeline run definitions or templates as YAML files in the .tekton/ directory of your Git repository. You can reference YAML files in other repositories using remote URLs, but pipeline runs are triggered only by events in the repository containing the .tekton/ directory. The Pipelines as Code resolver bundles the pipeline runs with all tasks as a single pipeline run without external dependencies. In addition to features that exist in all pipeline runs, you can use additional parameters and annotations in pipeline run files for Pipelines as Code. Note For pipelines, use at least one pipeline run with a spec, or a separated Pipeline object. For tasks, embed the task specification inside a pipeline, or define it separately as a Task object. 6.1.1. Parameters in a pipeline run specification You can use parameters in a pipeline run specification to provide information about the commit that triggered the pipeline run and to use the temporary GitHub App token for Github API operations. 6.1.1.1. Commit and URL information You can specify the parameters of your commit and URL by using dynamic, expandable variables with the {{<var>}} format. Currently, you can use the following variables: {{repo_owner}} : The repository owner. {{repo_name}} : The repository name. {{repo_url}} : The repository full URL. {{revision}} : Full SHA revision of a commit. {{sender}} : The username or account ID of the sender of the commit. {{source_branch}} : The branch name where the event originated. {{target_branch}} : The branch name that the event targets. For push events, it is the same as the source_branch . {{pull_request_number}} : The pull or merge request number, defined only for a pull_request event type. {{git_auth_secret}} : The secret name that is generated automatically with the Git provider token for checking out private repos. 6.1.1.2. Temporary GitHub App token for GitHub API operations You can use the temporary installation token generated by Pipelines as Code from the GitHub App to access the GitHub API. The GitHub App generates a key for private repositories in the git-provider-token key. You can use the {{git_auth_secret}} dynamic variable in pipeline runs to access this key. For example, if your pipeline run must add a comment to a pull request, you can use the a Pipelines as Code annotation to fetch the github-add-comment task definition from Tekton Hub, and then define the task that adds the comment, as shown in the following example: apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-with-comment annotations: pipelinesascode.tekton.dev/task: "github-add-comment" spec: pipelineSpec: tasks: - name: add-sample-comment taskRef: name: github-add-comment params: - name: REQUEST_URL value: "{{ repo_url }}/pull/{{ pull_request_number }}" 1 - name: COMMENT_OR_FILE value: "Pipelines as Code IS GREAT!" - name: GITHUB_TOKEN_SECRET_NAME value: "{{ git_auth_secret }}" - name: GITHUB_TOKEN_SECRET_KEY value: "git-provider-token" 1 By using the dynamic variables, you can reuse this snippet template for any pull request from any repository that you use with Pipelines as Code. 
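As one possible way to put the commit and URL variables described above to work, the following sketch feeds {{ repo_url }} and {{ revision }} into the git-clone task from Tekton Hub, fetched through the same pipelinesascode.tekton.dev/task annotation; the pipeline run name and workspace wiring are illustrative, not taken from the original example:

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: clone-triggering-commit
  annotations:
    pipelinesascode.tekton.dev/task: "git-clone"
spec:
  params:
    # Expanded by Pipelines as Code before the run is created.
    - name: repo_url
      value: "{{ repo_url }}"
    - name: revision
      value: "{{ revision }}"
  pipelineSpec:
    params:
      - name: repo_url
      - name: revision
    workspaces:
      - name: source
    tasks:
      - name: fetch-repository
        taskRef:
          name: git-clone
        workspaces:
          - name: output
            workspace: source
        params:
          - name: url
            value: $(params.repo_url)
          - name: revision
            value: $(params.revision)
  workspaces:
    - name: source
      emptyDir: {}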
Note On GitHub Apps, the generated installation token is available for 8 hours and scoped to the repository from where the events originate. You can configure the scope differently, but the expiration time is determined by GitHub. Additional resources Scoping the GitHub token to additional repositories 6.1.2. Annotations for matching events to a pipeline run You can match different Git provider events with each pipeline run by using annotations on the pipeline run. If there are multiple pipeline runs matching an event, Pipelines as Code runs them in parallel and posts the results to the Git provider as soon as a pipeline run finishes. 6.1.2.1. Matching a pull request event to a pipeline run You can use the following example to match the pipeline-pr-main pipeline run with a pull_request event that targets the main branch: apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-pr-main annotations: pipelinesascode.tekton.dev/on-target-branch: "[main]" 1 pipelinesascode.tekton.dev/on-event: "[pull_request]" # ... 1 You can specify multiple branches by adding comma-separated entries. For example, "[main, release-nightly]" . In addition, you can specify the following items: Full references to branches such as "refs/heads/main" Globs with pattern matching such as "refs/heads/\*" Tags such as "refs/tags/1.\*" 6.1.2.2. Matching a push event to a pipeline run You can use the following example to match the pipeline-push-on-main pipeline run with a push event targeting the refs/heads/main branch: apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-push-on-main annotations: pipelinesascode.tekton.dev/on-target-branch: "[refs/heads/main]" 1 pipelinesascode.tekton.dev/on-event: "[push]" # ... 1 You can specify multiple branches by adding comma-separated entries. For example, "[main, release-nightly]" . In addition, you can specify the following items: Full references to branches such as "refs/heads/main" Globs with pattern matching such as "refs/heads/\*" Tags such as "refs/tags/1.\*" 6.1.2.3. Matching changes in paths to a pipeline run You can match a pipeline run to changes in a set of paths. Pipelines as Code starts the pipeline run when a pull request includes changes in any of the paths that you list. The * wildcard denotes any file in the directory. The ** wildcard denotes any file in the directory or any subdirectories on any level under the directory. You can use the following example to match the pipeline-pkg-or-cli pipeline run when a pull request changes any files in the pkg directory, the cli directory, or any subdirectories under the cli directory. apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-pkg-or-cli annotations: pipelinesascode.tekton.dev/on-path-changed: "["pkg/*", "cli/**"]" # ... 6.1.2.4. Excluding changes in paths from matching a pipeline run You can configure a pipeline run to exclude matching if a pull request makes changes only to files in a specified set of paths. If the pipeline run matches an event but the pull request includes changes only to files in the paths that you list, Pipelines as Code does not start the pipeline run. You can use the following example to match the pipeline-docs-not-generated pipeline run when a pull request changes any files under the docs directory or its subdirectories, except when the changes apply only to the docs/generated directory or its subdirectories. 
apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-docs-not-generated annotations: pipelinesascode.tekton.dev/on-path-changed: "["docs/**"]" pipelinesascode.tekton.dev/on-path-changed-ignore: "["docs/generated/**"]" # ... You can use the following example to match the pipeline-main-not-docs pipeline run when a pull request targets the main branch, except when the changes apply only to the docs directory or its subdirectories. apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-main-not-docs annotations: pipelinesascode.tekton.dev/on-target-branch: "[main]" pipelinesascode.tekton.dev/on-event: "[pull_request]" pipelinesascode.tekton.dev/on-path-changed-ignore: "["docs/**"]" # ... 6.1.2.5. Matching a pull request label to a pipeline run Important Matching pull request labels to a pipeline run is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can match a pipeline run to one or several pull request labels. Pipelines as Code starts the pipeline run when any of these labels is added to a pull request. When the pull request is updated with a new commit, if the pull request still has the label, Pipelines as Code starts the pipeline run again. You can use the following example to match the pipeline-bug-or-defect pipeline run when either the bug label or the defect label is added to a pull request, and also when a pull request with this label is updated with a new commit: apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-bug-or-defect annotations: pipelinesascode.tekton.dev/on-label: "[bug, defect]" # ... Note The current version of Pipelines as Code supports matching events to pull request labels only for the GitHub, Gitea, and GitLab repository hosting service providers. 6.1.2.6. Matching a comment event to a pipeline run Important Matching a comment event to a pipeline run is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use the following example to match the pipeline-comment pipeline run with a comment on a pull request, when the text of the comment matches the ^/merge-pr regular expression: apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-comment annotations: pipelinesascode.tekton.dev/on-comment: "^/merge-pr" # ... The pipeline run starts only if the comment author meets one of the following requirements: The author is the owner of the repository. The author is a collaborator on the repository. The author is a public member on the organization of the repository. 
The comment author is listed in the approvers or reviewers section of the OWNERS file in the root of the repository, as defined in the Kubernetes documentation . Pipelines as Code supports the specification for the OWNERS and OWNERS_ALIASES files. If the OWNERS file includes a filters section, Pipelines as Code matches approvers and reviewers only against the .* filter. 6.1.2.7. Advanced event matching Pipelines as Code supports using Common Expression Language (CEL) based filtering for advanced event matching. If you have the pipelinesascode.tekton.dev/on-cel-expression annotation in your pipeline run, Pipelines as Code uses the CEL expression and skips the on-target-branch annotation. Compared to the simple on-target-branch annotation matching, the CEL expressions allow complex filtering and negation. To use CEL-based filtering with Pipelines as Code, consider the following examples of annotations: To match a pull_request event targeting the main branch and coming from the wip branch: apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-advanced-pr annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request" && target_branch == "main" && source_branch == "wip" ... To run a pipeline only if a path has changed, you can use the .pathChanged suffix function with a glob pattern: apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-advanced-pr-pathchanged annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request" && "docs/\*.md".pathChanged() 1 # ... 1 Matches all markdown files in the docs directory. To match all pull requests starting with the title [DOWNSTREAM] : apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-advanced-pr-downstream annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request && event_title.startsWith("[DOWNSTREAM]") # ... To run a pipeline on a pull_request event, but skip the experimental branch: apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-advanced-pr-not-experimental annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request" && target_branch != experimental" # ... For advanced CEL-based filtering while using Pipelines as Code, you can use the following fields and suffix functions: event : A push or pull_request event. target_branch : The target branch. source_branch : The branch of origin of a pull_request event. For push events, it is same as the target_branch . event_title : Matches the title of the event, such as the commit title for a push event, and the title of a pull or merge request for a pull_request event. Currently, only GitHub, Gitlab, and Bitbucket Cloud are the supported providers. .pathChanged : A suffix function to a string. The string can be a glob of a path to check if the path has changed. Currently, only GitHub and Gitlab are supported as providers. In addition, you can access the full payload as passed by the Git repository provider. Use the headers field to access the headers of the payload, for example, headers['x-github-event'] . Use the body field to access the body of the payload, for example, body.pull_request.state . Important Using the header and body of the payload for CEL-based filtering with Pipelines as Code is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . In the following example, the pipeline run starts only if all of the following conditions are true: The pull request is targeting the main branch. The author of the pull request is superuser . The action is synchronize ; this action triggers when an update occurs on a pull request. apiVersion: tekton.dev/v1 kind: PipelineRun metadata: annotations: pipelinesascode.tekton.dev/on-cel-expression: | body.pull_request.base.ref == "main" && body.pull_request.user.login == "superuser" && body.action == "synchronize" # ... Note If you use the header or body field for event matching, you might be unable to trigger the pipeline run using Git commands such as retest . If you use a Git command, the payload body is the comment that contains this command, and not the original payload. If you want to trigger the pipeline run again when using the body field for event matching, you can close and reopen the pull request or merge request, or alternatively add a new SHA commit. You can add a new SHA commit by using the following command: git commit --amend --no-edit && git push --force-with-lease Additional resources CEL language specification 6.1.3. Annotations for specifying automatic cancellation-in-progress for a pipeline run By default, Pipelines as Code does not cancel pipeline runs automatically. Every pipeline run that Pipelines as Code creates and starts executes until it completes. However, events that trigger pipeline runs can come in quick succession. For example, if a pull request triggers a pipeline run and then the user pushes new commits into the pull request source branch, each push triggers a new copy of the pipeline run. If several pushes happen, several copies can run, which can consume excessive cluster resources. You can configure a pipeline run to enable automatic cancellation-in-progress. If you enable automatic cancellation for a pipeline run, Pipelines as Code cancels the pipeline run in the following situations: Pipelines as Code has successfully started a copy of the same pipeline run for the same pull request or the same source branch. The pull request that triggered the pipeline run is merged or closed. You can use the following example to enable automatic cancellation when you create the sample-pipeline pipeline run: apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: sample-pipeline annotations: pipelinesascode.tekton.dev/cancel-in-progress: "true" # ... Note Pipelines as Code cancels a pipeline run after starting a new copy of this pipeline run successfully. The pipelinesascode.tekton.dev/cancel-in-progress setting does not ensure that only one copy of the pipeline run is executing at any time. Important Automatic cancellation-in-progress of pipeline runs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.2. 
Running a pipeline run using Pipelines as Code With default configuration, Pipelines as Code runs any pipeline run in the .tekton/ directory of the default branch of repository, when specified events such as pull request or push occurs on the repository. For example, if a pipeline run on the default branch has the annotation pipelinesascode.tekton.dev/on-event: "[pull_request]" , it will run whenever a pull request event occurs. In the event of a pull request or a merge request, Pipelines as Code also runs pipelines from branches other than the default branch, if the following conditions are met by the author of the pull request: The author is the owner of the repository. The author is a collaborator on the repository. The author is a public member on the organization of the repository. The pull request author is listed in the approvers or reviewers section of the OWNERS file in the root of the repository, as defined in the Kubernetes documentation . Pipelines as Code supports the specification for the OWNERS and OWNERS_ALIASES files. If the OWNERS file includes a filters section, Pipelines as Code matches approvers and reviewers only against the .* filter. If the pull request author does not meet the requirements, another user who meets the requirements can comment /ok-to-test on the pull request, and start the pipeline run. Pipeline run execution A pipeline run always runs in the namespace of the Repository custom resource definition (CRD) associated with the repository that generated the event. You can observe the execution of your pipeline runs using the tkn pac CLI tool. To follow the execution of the last pipeline run, use the following example: USD tkn pac logs -n <my-pipeline-ci> -L 1 1 my-pipeline-ci is the namespace for the Repository CRD. To follow the execution of any pipeline run interactively, use the following example: USD tkn pac logs -n <my-pipeline-ci> 1 1 my-pipeline-ci is the namespace for the Repository CRD. If you need to view a pipeline run other than the last one, you can use the tkn pac logs command to select a PipelineRun attached to the repository: If you have configured Pipelines as Code with a GitHub App, Pipelines as Code posts a URL in the Checks tab of the GitHub App. You can click the URL and follow the pipeline execution. 6.3. Restarting or canceling a pipeline run using Pipelines as Code You can restart or cancel a pipeline run with no events, such as sending a new commit to your branch or raising a pull request. To restart all pipeline runs, use the Re-run all checks feature in the GitHub App. To restart all or specific pipeline runs, use the following comments: The /test and /retest comment restarts all pipeline runs. The /test <pipeline_run_name> and /retest <pipeline_run_name> comment starts or restarts a specific pipeline run. You can use this command to start any Pipelines as Code pipeline run on the repository, whether or not it was triggered by an event for this pipeline run. To cancel all or specific pipeline runs, use the following comments: The /cancel comment cancels all pipeline runs. The /cancel <pipeline_run_name> comment cancels a specific pipeline run. The results of the comments are visible under the Checks tab of the GitHub App. The comment starts, restarts, or cancels any pipeline runs only if the comment author meets one of the following requirements: The author is the owner of the repository. The author is a collaborator on the repository. The author is a public member on the organization of the repository. 
The comment author is listed in the approvers or reviewers section of the OWNERS file in the root of the repository, as defined in the Kubernetes documentation . Pipelines as Code supports the specification for the OWNERS and OWNERS_ALIASES files. If the OWNERS file includes a filters section, Pipelines as Code matches approvers and reviewers only against the .* filter. Important Using a comment to start a pipeline run that does not match an event is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Procedure If you target a pull request and you use the GitHub App, go to the Checks tab and click Re-run all checks . If you target a pull or merge request, use the comments inside your pull request: Example comment that cancels all pipeline runs If you target a push request, include the comments within your commit messages. Note This feature is supported for the GitHub provider only. Go to your GitHub repository. Click the Commits section. Click the commit where you want to restart a pipeline run. Click on the line number where you want to add a comment. Example comment that starts or restarts a specific pipeline run Note If you run a command on a commit that exists in multiple branches within a push request, the branch with the latest commit is used. This results in two situations: If you run a command on a commit without any argument, such as /test , the test is automatically performed on the main branch. If you include a branch specification, such as /test branch:user-branch , the test is performed on the commit where the comment is located with the context of the user-branch branch. 6.4. Monitoring pipeline run status using Pipelines as Code Depending on the context and supported tools, you can monitor the status of a pipeline run in different ways. Status on GitHub Apps When a pipeline run finishes, the status is added in the Check tabs with limited information on how long each task of your pipeline took, and the output of the tkn pipelinerun describe command. Log error snippet When Pipelines as Code detects an error in one of the tasks of a pipeline, a small snippet consisting of the last 3 lines in the task breakdown of the first failed task is displayed. Note Pipelines as Code avoids leaking secrets by looking into the pipeline run and replacing secret values with hidden characters. However, Pipelines as Code cannot hide secrets coming from workspaces and envFrom source. Annotations for log error snippets In the TektonConfig custom resource, in the pipelinesAsCode.settings spec, you can set the error-detection-from-container-logs parameter to true . In this case, Pipelines as Code detects the errors from the container logs and adds them as annotations on the pull request where the error occurred. Important Adding annotations for log error snippets is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Currently, Pipelines as Code supports only the simple cases where the error looks like makefile or grep output of the following format: <filename>:<line>:<column>: <error message> You can customize the regular expression used to detect the errors with the error-detection-simple-regexp parameter. The regular expression uses named groups to give flexibility on how to specify the matching. The groups needed to match are filename , line , and error . You can view the Pipelines as Code config map for the default regular expression. Note By default, Pipelines as Code scans only the last 50 lines of the container logs. You can increase this value in the error-detection-max-number-of-lines field or set -1 for an unlimited number of lines. However, such configurations may increase the memory usage of the watcher. Status for webhook For webhook, when the event is a pull request, the status is added as a comment on the pull or merge request. Failures If a namespace is matched to a Repository custom resource definition (CRD), Pipelines as Code emits its failure log messages in the Kubernetes events inside the namespace. Status associated with Repository CRD The last 5 status messages for a pipeline run is stored inside the Repository custom resource. USD oc get repo -n <pipelines-as-code-ci> NAME URL NAMESPACE SUCCEEDED REASON STARTTIME COMPLETIONTIME pipelines-as-code-ci https://github.com/openshift-pipelines/pipelines-as-code pipelines-as-code-ci True Succeeded 59m 56m Using the tkn pac describe command, you can extract the status of the runs associated with your repository and its metadata. Notifications Pipelines as Code does not manage notifications. If you need to have notifications, use the finally feature of pipelines. Additional resources An example task to send Slack messages on success or failure An example of a pipeline run with finally tasks triggered on push events Additional resources An example of the git-clone task used for cloning private repositories 6.5. Cleaning up pipeline run using Pipelines as Code There can be many pipeline runs in a user namespace. By setting the max-keep-runs annotation, you can configure Pipelines as Code to retain a limited number of pipeline runs that matches an event. For example: ... pipelinesascode.tekton.dev/max-keep-runs: "<max_number>" 1 ... 1 Pipelines as Code starts cleaning up right after it finishes a successful execution, retaining only the maximum number of pipeline runs configured using the annotation. Note Pipelines as Code skips cleaning the running pipelines but cleans up the pipeline runs with an unknown status. Pipelines as Code skips cleaning a failed pull request. 6.6. Using incoming webhook with Pipelines as Code Using an incoming webhook URL and a shared secret, you can start a pipeline run in a repository. To use incoming webhooks, specify the following within the spec section of the Repository custom resource definition (CRD): The incoming webhook URL that Pipelines as Code matches. The Git provider and the user token. Currently, Pipelines as Code supports github , gitlab , and bitbucket-cloud . Note When using incoming webhook URLs in the context of GitHub app, you must specify the token. 
The target branches and a secret for the incoming webhook URL. Example: Repository CRD with incoming webhook apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: repo namespace: ns spec: url: "https://github.com/owner/repo" git_provider: type: github secret: name: "owner-token" incoming: - targets: - main secret: name: repo-incoming-secret type: webhook-url Example: The repo-incoming-secret secret for incoming webhook apiVersion: v1 kind: Secret metadata: name: repo-incoming-secret namespace: ns type: Opaque stringData: secret: <very-secure-shared-secret> To trigger a pipeline run located in the .tekton directory of a Git repository, use the following command: USD curl -X POST 'https://control.pac.url/incoming?secret=very-secure-shared-secret&repository=repo&branch=main&pipelinerun=target_pipelinerun' Pipelines as Code matches the incoming URL and treats it as a push event. However, Pipelines as Code does not report status of the pipeline runs triggered by this command. To get a report or a notification, add it directly with a finally task to your pipeline. Alternatively, you can inspect the Repository CRD with the tkn pac CLI tool. 6.7. Additional resources An example of the .tekton/ directory in the Pipelines as Code repository Creating applications using the Developer perspective | [
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-with-comment annotations: pipelinesascode.tekton.dev/task: \"github-add-comment\" spec: pipelineSpec: tasks: - name: add-sample-comment taskRef: name: github-add-comment params: - name: REQUEST_URL value: \"{{ repo_url }}/pull/{{ pull_request_number }}\" 1 - name: COMMENT_OR_FILE value: \"Pipelines as Code IS GREAT!\" - name: GITHUB_TOKEN_SECRET_NAME value: \"{{ git_auth_secret }}\" - name: GITHUB_TOKEN_SECRET_KEY value: \"git-provider-token\"",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-pr-main annotations: pipelinesascode.tekton.dev/on-target-branch: \"[main]\" 1 pipelinesascode.tekton.dev/on-event: \"[pull_request]\"",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-push-on-main annotations: pipelinesascode.tekton.dev/on-target-branch: \"[refs/heads/main]\" 1 pipelinesascode.tekton.dev/on-event: \"[push]\"",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-pkg-or-cli annotations: pipelinesascode.tekton.dev/on-path-changed: \"[\"pkg/*\", \"cli/**\"]\"",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-docs-not-generated annotations: pipelinesascode.tekton.dev/on-path-changed: \"[\"docs/**\"]\" pipelinesascode.tekton.dev/on-path-changed-ignore: \"[\"docs/generated/**\"]\"",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-main-not-docs annotations: pipelinesascode.tekton.dev/on-target-branch: \"[main]\" pipelinesascode.tekton.dev/on-event: \"[pull_request]\" pipelinesascode.tekton.dev/on-path-changed-ignore: \"[\"docs/**\"]\"",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-bug-or-defect annotations: pipelinesascode.tekton.dev/on-label: \"[bug, defect]\"",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-comment annotations: pipelinesascode.tekton.dev/on-comment: \"^/merge-pr\"",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-advanced-pr annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == \"pull_request\" && target_branch == \"main\" && source_branch == \"wip\"",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-advanced-pr-pathchanged annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == \"pull_request\" && \"docs/\\*.md\".pathChanged() 1",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-advanced-pr-downstream annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == \"pull_request && event_title.startsWith(\"[DOWNSTREAM]\")",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipeline-advanced-pr-not-experimental annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == \"pull_request\" && target_branch != experimental\"",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: annotations: pipelinesascode.tekton.dev/on-cel-expression: | body.pull_request.base.ref == \"main\" && body.pull_request.user.login == \"superuser\" && body.action == \"synchronize\"",
"git commit --amend --no-edit && git push --force-with-lease",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: sample-pipeline annotations: pipelinesascode.tekton.dev/cancel-in-progress: \"true\"",
"tkn pac logs -n <my-pipeline-ci> -L 1",
"tkn pac logs -n <my-pipeline-ci> 1",
"This is a comment inside a pull request. /cancel",
"This is a comment inside a commit. /retest example_pipeline_run",
"<filename>:<line>:<column>: <error message>",
"oc get repo -n <pipelines-as-code-ci>",
"NAME URL NAMESPACE SUCCEEDED REASON STARTTIME COMPLETIONTIME pipelines-as-code-ci https://github.com/openshift-pipelines/pipelines-as-code pipelines-as-code-ci True Succeeded 59m 56m",
"pipelinesascode.tekton.dev/max-keep-runs: \"<max_number>\" 1",
"apiVersion: \"pipelinesascode.tekton.dev/v1alpha1\" kind: Repository metadata: name: repo namespace: ns spec: url: \"https://github.com/owner/repo\" git_provider: type: github secret: name: \"owner-token\" incoming: - targets: - main secret: name: repo-incoming-secret type: webhook-url",
"apiVersion: v1 kind: Secret metadata: name: repo-incoming-secret namespace: ns type: Opaque stringData: secret: <very-secure-shared-secret>",
"curl -X POST 'https://control.pac.url/incoming?secret=very-secure-shared-secret&repository=repo&branch=main&pipelinerun=target_pipelinerun'"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/pipelines_as_code/managing-pipeline-runs-pac_using-pac-resolver |
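To tie several of the annotations from this chapter together, a hedged sketch of a small pull-request pipeline run might combine event matching, run retention, and automatic cancellation as shown below; the run name, task, and container image are illustrative only:

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: ci-on-pull-request
  annotations:
    pipelinesascode.tekton.dev/on-event: "[pull_request]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
    pipelinesascode.tekton.dev/max-keep-runs: "5"
    pipelinesascode.tekton.dev/cancel-in-progress: "true"
spec:
  pipelineSpec:
    tasks:
      - name: report-revision
        taskSpec:
          steps:
            - name: echo
              image: registry.access.redhat.com/ubi9/ubi-minimal
              script: |
                # The dynamic variables are expanded by Pipelines as Code.
                echo "Testing revision {{ revision }} from branch {{ source_branch }}"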
Chapter 5. ConfigMap [v1] | Chapter 5. ConfigMap [v1] Description ConfigMap holds configuration data for pods to consume. Type object 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources binaryData object (string) BinaryData contains the binary data. Each key must consist of alphanumeric characters, '-', '_' or '.'. BinaryData can contain byte sequences that are not in the UTF-8 range. The keys stored in BinaryData must not overlap with the ones in the Data field, this is enforced during validation process. Using this field will require 1.10+ apiserver and kubelet. data object (string) Data contains the configuration data. Each key must consist of alphanumeric characters, '-', '_' or '.'. Values with non-UTF-8 byte sequences must use the BinaryData field. The keys stored in Data must not overlap with the keys in the BinaryData field, this is enforced during validation process. immutable boolean Immutable, if set to true, ensures that data stored in the ConfigMap cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. Defaulted to nil. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 5.2. API endpoints The following API endpoints are available: /api/v1/configmaps GET : list or watch objects of kind ConfigMap /api/v1/watch/configmaps GET : watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/configmaps DELETE : delete collection of ConfigMap GET : list or watch objects of kind ConfigMap POST : create a ConfigMap /api/v1/watch/namespaces/{namespace}/configmaps GET : watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/namespaces/{namespace}/configmaps/{name} DELETE : delete a ConfigMap GET : read the specified ConfigMap PATCH : partially update the specified ConfigMap PUT : replace the specified ConfigMap /api/v1/watch/namespaces/{namespace}/configmaps/{name} GET : watch changes to an object of kind ConfigMap. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /api/v1/configmaps Table 5.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind ConfigMap Table 5.2. HTTP responses HTTP code Reponse body 200 - OK ConfigMapList schema 401 - Unauthorized Empty 5.2.2. /api/v1/watch/configmaps Table 5.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. Table 5.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /api/v1/namespaces/{namespace}/configmaps Table 5.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ConfigMap Table 5.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. 
If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 5.8. Body parameters Parameter Type Description body DeleteOptions schema Table 5.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ConfigMap Table 5.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK ConfigMapList schema 401 - Unauthorized Empty HTTP method POST Description create a ConfigMap Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body ConfigMap schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK ConfigMap schema 201 - Created ConfigMap schema 202 - Accepted ConfigMap schema 401 - Unauthorized Empty 5.2.4. /api/v1/watch/namespaces/{namespace}/configmaps Table 5.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ConfigMap. deprecated: use the 'watch' parameter with a list operation instead. Table 5.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.5. /api/v1/namespaces/{namespace}/configmaps/{name} Table 5.18. Global path parameters Parameter Type Description name string name of the ConfigMap namespace string object name and auth scope, such as for teams and projects Table 5.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ConfigMap Table 5.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.21. Body parameters Parameter Type Description body DeleteOptions schema Table 5.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConfigMap Table 5.23. HTTP responses HTTP code Reponse body 200 - OK ConfigMap schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConfigMap Table 5.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.25. Body parameters Parameter Type Description body Patch schema Table 5.26. HTTP responses HTTP code Reponse body 200 - OK ConfigMap schema 201 - Created ConfigMap schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConfigMap Table 5.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.28. Body parameters Parameter Type Description body ConfigMap schema Table 5.29. 
HTTP responses HTTP code Reponse body 200 - OK ConfigMap schema 201 - Created ConfigMap schema 401 - Unauthorized Empty 5.2.6. /api/v1/watch/namespaces/{namespace}/configmaps/{name} Table 5.30. Global path parameters Parameter Type Description name string name of the ConfigMap namespace string object name and auth scope, such as for teams and projects Table 5.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ConfigMap. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/metadata_apis/configmap-v1 |
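For readers who want to see the endpoints above in use, here is a minimal illustrative sketch (not taken from the reference itself): it creates and then reads a ConfigMap with curl. The API server URL, namespace, and ConfigMap name are placeholder assumptions, the bearer token comes from `oc whoami -t`, and `-k` skips TLS verification for brevity only.

```
# All values below are assumptions - replace them with your own cluster details.
API_SERVER=https://api.example.com:6443
NAMESPACE=demo
TOKEN=$(oc whoami -t)    # token of a user allowed to manage ConfigMaps in the namespace

# POST /api/v1/namespaces/{namespace}/configmaps - create a ConfigMap
curl -k -X POST "${API_SERVER}/api/v1/namespaces/${NAMESPACE}/configmaps" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"example-config"},"data":{"key1":"value1"}}'

# GET /api/v1/namespaces/{namespace}/configmaps/{name} - read the ConfigMap back
curl -k "${API_SERVER}/api/v1/namespaces/${NAMESPACE}/configmaps/example-config" \
  -H "Authorization: Bearer ${TOKEN}"
```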
Chapter 7. Preparing an Agent-based installed cluster for the multicluster engine for Kubernetes Operator | Chapter 7. Preparing an Agent-based installed cluster for the multicluster engine for Kubernetes Operator You can install the multicluster engine Operator and deploy a hub cluster with the Agent-based OpenShift Container Platform Installer. The following procedure is partially automated and requires manual steps after the initial cluster is deployed. 7.1. Prerequisites You have read the following documentation: Cluster lifecycle with multicluster engine operator overview . Persistent storage using local volumes . Using GitOps ZTP to provision clusters at the network far edge . Preparing to install with the Agent-based Installer . About disconnected installation mirroring . You have access to the internet to obtain the necessary container images. You have installed the OpenShift CLI ( oc ). If you are installing in a disconnected environment, you must have a configured local mirror registry for disconnected installation mirroring. 7.2. Preparing an Agent-based cluster deployment for the multicluster engine for Kubernetes Operator while disconnected You can mirror the required OpenShift Container Platform container images, the multicluster engine Operator, and the Local Storage Operator (LSO) into your local mirror registry in a disconnected environment. Ensure that you note the local DNS hostname and port of your mirror registry. Note To mirror your OpenShift Container Platform image repository to your mirror registry, you can use either the oc adm release image or oc mirror command. In this procedure, the oc mirror command is used as an example. Procedure Create an <assets_directory> folder to contain valid install-config.yaml and agent-config.yaml files. This directory is used to store all the assets. To mirror an OpenShift Container Platform image repository, the multicluster engine, and the LSO, create a ImageSetConfiguration.yaml file with the following settings: Example ImageSetConfiguration.yaml kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 imageURL: <your-local-registry-dns-name>:<your-local-registry-port>/mirror/oc-mirror-metadata 3 skipTLS: true mirror: platform: architectures: - "amd64" channels: - name: stable-4.18 4 type: ocp additionalImages: - name: registry.redhat.io/ubi9/ubi:latest operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 5 packages: 6 - name: multicluster-engine 7 - name: local-storage-operator 8 1 Specify the maximum size, in GiB, of each file within the image set. 2 Set the back-end location to receive the image set metadata. This location can be a registry or local directory. It is required to specify storageConfig values. 3 Set the registry URL for the storage backend. 4 Set the channel that contains the OpenShift Container Platform images for the version you are installing. 5 Set the Operator catalog that contains the OpenShift Container Platform images that you are installing. 6 Specify only certain Operator packages and channels to include in the image set. Remove this field to retrieve all packages in the catalog. 7 The multicluster engine packages and channels. 8 The LSO packages and channels. Note This file is required by the oc mirror command when mirroring content. 
To mirror a specific OpenShift Container Platform image repository, the multicluster engine, and the LSO, run the following command: USD oc mirror --dest-skip-tls --config ocp-mce-imageset.yaml docker://<your-local-registry-dns-name>:<your-local-registry-port> Update the registry and certificate in the install-config.yaml file: Example imageContentSources.yaml imageContentSources: - source: "quay.io/openshift-release-dev/ocp-release" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release-images" - source: "quay.io/openshift-release-dev/ocp-v4.0-art-dev" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release" - source: "registry.redhat.io/ubi9" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/ubi9" - source: "registry.redhat.io/multicluster-engine" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/multicluster-engine" - source: "registry.redhat.io/rhel8" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/rhel8" - source: "registry.redhat.io/redhat" mirrors: - "<your-local-registry-dns-name>:<your-local-registry-port>/redhat" Additionally, ensure your certificate is present in the additionalTrustBundle field of the install-config.yaml . Example install-config.yaml additionalTrustBundle: | -----BEGIN CERTIFICATE----- zzzzzzzzzzz -----END CERTIFICATE------- Important The oc mirror command creates a folder called oc-mirror-workspace with several outputs. This includes the imageContentSourcePolicy.yaml file that identifies all the mirrors you need for OpenShift Container Platform and your selected Operators. Generate the cluster manifests by running the following command: USD openshift-install agent create cluster-manifests This command updates the cluster manifests folder to include a mirror folder that contains your mirror configuration. 7.3. Preparing an Agent-based cluster deployment for the multicluster engine for Kubernetes Operator while connected Create the required manifests for the multicluster engine Operator, the Local Storage Operator (LSO), and to deploy an agent-based OpenShift Container Platform cluster as a hub cluster. Procedure Create a sub-folder named openshift in the <assets_directory> folder. This sub-folder is used to store the extra manifests that will be applied during the installation to further customize the deployed cluster. The <assets_directory> folder contains all the assets including the install-config.yaml and agent-config.yaml files. Note The installer does not validate extra manifests. For the multicluster engine, create the following manifests and save them in the <assets_directory>/openshift folder: Example mce_namespace.yaml apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" name: multicluster-engine Example mce_operatorgroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: multicluster-engine-operatorgroup namespace: multicluster-engine spec: targetNamespaces: - multicluster-engine Example mce_subscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: multicluster-engine namespace: multicluster-engine spec: channel: "stable-2.3" name: multicluster-engine source: redhat-operators sourceNamespace: openshift-marketplace Note You can install a distributed unit (DU) at scale with the Red Hat Advanced Cluster Management (RHACM) using the assisted installer (AI). These distributed units must be enabled in the hub cluster. 
The AI service requires persistent volumes (PVs), which are manually created. For the AI service, create the following manifests and save them in the <assets_directory>/openshift folder: Example lso_namespace.yaml apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/cluster-monitoring: "true" name: openshift-local-storage Example lso_operatorgroup.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage Example lso_subscription.yaml apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace Note After creating all the manifests, your filesystem must display as follows: Example Filesystem <assets_directory> ├─ install-config.yaml ├─ agent-config.yaml └─ /openshift ├─ mce_namespace.yaml ├─ mce_operatorgroup.yaml ├─ mce_subscription.yaml ├─ lso_namespace.yaml ├─ lso_operatorgroup.yaml └─ lso_subscription.yaml Create the agent ISO image by running the following command: USD openshift-install agent create image --dir <assets_directory> When the image is ready, boot the target machine and wait for the installation to complete. To monitor the installation, run the following command: USD openshift-install agent wait-for install-complete --dir <assets_directory> Note To configure a fully functional hub cluster, you must create the following manifests and manually apply them by running the command USD oc apply -f <manifest-name> . The order of the manifest creation is important and where required, the waiting condition is displayed. For the PVs that are required by the AI service, create the following manifests: apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: assisted-service namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed storageClassDevices: - devicePaths: - /dev/vda - /dev/vdb storageClassName: assisted-service volumeMode: Filesystem Use the following command to wait for the availability of the PVs, before applying the subsequent manifests: USD oc wait localvolume -n openshift-local-storage assisted-service --for condition=Available --timeout 10m Note Create a manifest for a multicluster engine instance. Example MultiClusterEngine.yaml apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: {} Create a manifest to enable the AI service. Example agentserviceconfig.yaml apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: assisted-installer spec: databaseStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi filesystemStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi Create a manifest to deploy subsequently spoke clusters. Example clusterimageset.yaml apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: "4.18" spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.18.0-x86_64 Create a manifest to import the agent installed cluster (that hosts the multicluster engine and the Assisted Service) as the hub cluster. 
Example autoimport.yaml apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: labels: local-cluster: "true" cloud: auto-detect vendor: auto-detect name: local-cluster spec: hubAcceptsClient: true Wait for the managed cluster to be created. USD oc wait -n multicluster-engine managedclusters local-cluster --for condition=ManagedClusterJoined=True --timeout 10m Verification To confirm that the managed cluster installation is successful, run the following command: USD oc get managedcluster NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://<your cluster url>:6443 True True 77m Additional resources The Local Storage Operator | [
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 imageURL: <your-local-registry-dns-name>:<your-local-registry-port>/mirror/oc-mirror-metadata 3 skipTLS: true mirror: platform: architectures: - \"amd64\" channels: - name: stable-4.18 4 type: ocp additionalImages: - name: registry.redhat.io/ubi9/ubi:latest operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.18 5 packages: 6 - name: multicluster-engine 7 - name: local-storage-operator 8",
"oc mirror --dest-skip-tls --config ocp-mce-imageset.yaml docker://<your-local-registry-dns-name>:<your-local-registry-port>",
"imageContentSources: - source: \"quay.io/openshift-release-dev/ocp-release\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release-images\" - source: \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/openshift/release\" - source: \"registry.redhat.io/ubi9\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/ubi9\" - source: \"registry.redhat.io/multicluster-engine\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/multicluster-engine\" - source: \"registry.redhat.io/rhel8\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/rhel8\" - source: \"registry.redhat.io/redhat\" mirrors: - \"<your-local-registry-dns-name>:<your-local-registry-port>/redhat\"",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- zzzzzzzzzzz -----END CERTIFICATE-------",
"openshift-install agent create cluster-manifests",
"apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" name: multicluster-engine",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: multicluster-engine-operatorgroup namespace: multicluster-engine spec: targetNamespaces: - multicluster-engine",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: multicluster-engine namespace: multicluster-engine spec: channel: \"stable-2.3\" name: multicluster-engine source: redhat-operators sourceNamespace: openshift-marketplace",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/cluster-monitoring: \"true\" name: openshift-local-storage",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"<assets_directory> ├─ install-config.yaml ├─ agent-config.yaml └─ /openshift ├─ mce_namespace.yaml ├─ mce_operatorgroup.yaml ├─ mce_subscription.yaml ├─ lso_namespace.yaml ├─ lso_operatorgroup.yaml └─ lso_subscription.yaml",
"openshift-install agent create image --dir <assets_directory>",
"openshift-install agent wait-for install-complete --dir <assets_directory>",
"apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: assisted-service namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed storageClassDevices: - devicePaths: - /dev/vda - /dev/vdb storageClassName: assisted-service volumeMode: Filesystem",
"oc wait localvolume -n openshift-local-storage assisted-service --for condition=Available --timeout 10m",
"The `devicePath` is an example and may vary depending on the actual hardware configuration used.",
"apiVersion: multicluster.openshift.io/v1 kind: MultiClusterEngine metadata: name: multiclusterengine spec: {}",
"apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: assisted-installer spec: databaseStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi filesystemStorage: storageClassName: assisted-service accessModes: - ReadWriteOnce resources: requests: storage: 10Gi",
"apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: \"4.18\" spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.18.0-x86_64",
"apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: labels: local-cluster: \"true\" cloud: auto-detect vendor: auto-detect name: local-cluster spec: hubAcceptsClient: true",
"oc wait -n multicluster-engine managedclusters local-cluster --for condition=ManagedClusterJoined=True --timeout 10m",
"oc get managedcluster NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://<your cluster url>:6443 True True 77m"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_an_on-premise_cluster_with_the_agent-based_installer/preparing-an-agent-based-installed-cluster-for-the-multicluster-engine-for-kubernetes |
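As a rough verification sketch only - these checks are not part of the documented procedure - the following commands can help confirm that the mirrored Operators were installed on the new hub cluster. The namespaces match the manifests above; the exact ClusterServiceVersion names and versions will differ in your environment.

```
# Check that the multicluster engine and Local Storage Operator subscriptions resolved
# to installed ClusterServiceVersions (PHASE should eventually show "Succeeded").
oc get csv -n multicluster-engine
oc get csv -n openshift-local-storage

# Confirm that the MultiClusterEngine instance created above exists and its pods are running.
oc get multiclusterengine
oc get pods -n multicluster-engine
```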
A.11. Optional Workaround to Allow for Graceful Shutdown | A.11. Optional Workaround to Allow for Graceful Shutdown The libvirt-guests service has parameter settings that can be configured to ensure that the guest can shut down properly. The service is part of the libvirt installation and is installed by default. This service automatically saves guests to the disk when the host shuts down, and restores them to their pre-shutdown state when the host reboots. By default, the ON_SHUTDOWN parameter is set to suspend the guest. If you want the guest to be shut down gracefully, you will need to change one of the parameters of the libvirt-guests configuration file. Procedure A.5. Changing the libvirt-guests service parameters to allow for the graceful shutdown of guests The procedure described here allows for the graceful shutdown of guest virtual machines when the host physical machine is stuck, powered off, or needs to be restarted. Open the configuration file The configuration file is located in /etc/sysconfig/libvirt-guests . Edit the file, remove the comment mark (#) and change ON_SHUTDOWN=suspend to ON_SHUTDOWN=shutdown . Remember to save the change. URIS - checks the specified connections for a running guest. The default setting functions in the same manner as virsh does when no explicit URI is set. In addition, one can explicitly set the URI from /etc/libvirt/libvirt.conf . Note that when using the libvirt configuration file default setting, no probing will be used. ON_BOOT - specifies the action to be done to / on the guests when the host boots. The start option starts all guests that were running prior to shutdown regardless of their autostart settings. The ignore option will not start the formerly running guest on boot, however, any guest marked as autostart will still be automatically started by libvirtd . START_DELAY - sets a delay interval in between starting up the guests. This time period is set in seconds. Use the 0 time setting to make sure there is no delay and that all guests are started simultaneously. ON_SHUTDOWN - specifies the action taken when a host shuts down. Options that can be set include: suspend , which suspends all running guests using virsh managedsave , and shutdown , which shuts down all running guests. It is best to be careful with using the shutdown option as there is no way to distinguish between a guest which is stuck or ignores shutdown requests and a guest that just needs a longer time to shut down. When setting ON_SHUTDOWN=shutdown , you must also set SHUTDOWN_TIMEOUT to a value suitable for the guests. PARALLEL_SHUTDOWN - dictates that the number of guests on shutdown at any time will not exceed the number set in this variable and the guests will be suspended concurrently. If set to 0 , then guests are not shut down concurrently. SHUTDOWN_TIMEOUT - the number of seconds to wait for a guest to shut down. If parallel shutdown is enabled, this timeout applies as a timeout for shutting down all guests on a single URI defined in the variable URIS. If SHUTDOWN_TIMEOUT is set to 0 , then there is no timeout (use with caution, as guests might not respond to a shutdown request). The default value is 300 seconds (5 minutes). BYPASS_CACHE - can have two values, 0 to disable and 1 to enable. If enabled, it will bypass the file system cache when guests are restored. Note that setting this may affect performance and may cause slower operation for some file systems. Start libvirt-guests service If you have not started the service, start the libvirt-guests service.
Do not restart the service, as this will cause all running guest virtual machines to shut down. | [
"vi /etc/sysconfig/libvirt-guests URIs to check for running guests example: URIS='default xen:/// vbox+tcp://host/system lxc:///' #URIS=default action taken on host boot - start all guests which were running on shutdown are started on boot regardless on their autostart settings - ignore libvirt-guests init script won't start any guest on boot, however, guests marked as autostart will still be automatically started by libvirtd #ON_BOOT=start Number of seconds to wait between each guest start. Set to 0 to allow parallel startup. #START_DELAY=0 action taken on host shutdown - suspend all running guests are suspended using virsh managedsave - shutdown all running guests are asked to shutdown. Please be careful with this settings since there is no way to distinguish between a guest which is stuck or ignores shutdown requests and a guest which just needs a long time to shutdown. When setting ON_SHUTDOWN=shutdown, you must also set SHUTDOWN_TIMEOUT to a value suitable for your guests. ON_SHUTDOWN=shutdown If set to non-zero, shutdown will suspend guests concurrently. Number of guests on shutdown at any time will not exceed number set in this variable. #PARALLEL_SHUTDOWN=0 Number of seconds we're willing to wait for a guest to shut down. If parallel shutdown is enabled, this timeout applies as a timeout for shutting down all guests on a single URI defined in the variable URIS. If this is 0, then there is no time out (use with caution, as guests might not respond to a shutdown request). The default value is 300 seconds (5 minutes). #SHUTDOWN_TIMEOUT=300 If non-zero, try to bypass the file system cache when saving and restoring guests, even though this may give slower operation for some file systems. #BYPASS_CACHE=0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-Manipulating_the_libvirt_guests_configuration_settings |
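A small illustrative sketch of the final step, assuming standard systemd usage on RHEL 7 (the exact commands are not quoted from this guide): enable and start the service after editing /etc/sysconfig/libvirt-guests, then confirm the parameters it picked up.

```
# Enable the service so it runs at boot, then start it for the current session.
# Do not *restart* it while guests are running, because that triggers the configured shutdown action.
systemctl enable libvirt-guests
systemctl start libvirt-guests

# Verify the service state and review the active (uncommented) parameters.
systemctl status libvirt-guests
grep -Ev '^(#|$)' /etc/sysconfig/libvirt-guests
```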
Release Notes for Red Hat build of Apache Camel Extensions for Quarkus 2.13 | Release Notes for Red Hat build of Apache Camel Extensions for Quarkus 2.13 Red Hat build of Apache Camel Extensions for Quarkus 2.13 What's new in Red Hat build of Apache Camel Extensions for Quarkus Red Hat build of Apache Camel Extensions for Quarkus Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_extensions_for_quarkus/2.13/html/release_notes_for_red_hat_build_of_apache_camel_extensions_for_quarkus_2.13/index |
Chapter 5. Updated boot images | Chapter 5. Updated boot images The Machine Config Operator (MCO) uses a boot image to start a Red Hat Enterprise Linux CoreOS (RHCOS) node. By default, OpenShift Container Platform does not manage the boot image. This means that the boot image in your cluster is not updated along with your cluster. For example, if your cluster was originally created with OpenShift Container Platform 4.12, the boot image that the cluster uses to create nodes is the same 4.12 version, even if your cluster is at a later version. If the cluster is later upgraded to 4.13 or later, new nodes continue to scale with the same 4.12 image. This process could cause the following issues: Extra time to start nodes Certificate expiration issues Version skew issues To avoid these issues, you can configure your cluster to update the boot image whenever you update your cluster. By modifying the MachineConfiguration object, you can enable this feature. Currently, the ability to update the boot image is available for only Google Cloud Platform (GCP) clusters and as a Technology Preview feature for Amazon Web Services (AWS) clusters. It is not supported for clusters managed by the Cluster CAPI Operator. Important The updating boot image feature for AWS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . If you are not using the default user data secret, named worker-user-data , in your machine set, or you have modified the worker-user-data secret, you should not use managed boot image updates. This is because the Machine Config Operator (MCO) updates the machine set to use a managed version of the secret. By using the managed boot images feature, you are giving up the capability to customize the secret stored in the machine set object. To view the current boot image used in your cluster, examine a machine set: Example machine set with the boot image reference apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: ci-ln-hmy310k-72292-5f87z-worker-a namespace: openshift-machine-api spec: # ... template: # ... spec: # ... providerSpec: # ... value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-412-85-202203181601-0-gcp-x86-64 1 # ... 1 This boot image is the same as the originally-installed OpenShift Container Platform version, in this example OpenShift Container Platform 4.12, regardless of the current version of the cluster. The way that the boot image is represented in the machine set depends on the platform, as the structure of the providerSpec field differs from platform to platform. If you configure your cluster to update your boot images, the boot image referenced in your machine sets matches the current version of the cluster. 5.1. Configuring updated boot images By default, OpenShift Container Platform does not manage the boot image. You can configure your cluster to update the boot image whenever you update your cluster by modifying the MachineConfiguration object. 
Currently, the ability to update the boot image is available for only Google Cloud Platform (GCP) clusters and as a Technology Preview feature for Amazon Web Services (AWS) clusters. It is not supported for clusters managed by the Cluster CAPI Operator. Important The updating boot image feature for AWS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Procedure Edit the MachineConfiguration object, named cluster , to enable the updating of boot images by running the following command: USD oc edit MachineConfiguration cluster Optional: Configure the boot image update feature for all the machine sets: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: # ... managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 2 1 Activates the boot image update feature. 2 Specifies that all the machine sets in the cluster are to be updated. Optional: Configure the boot image update feature for specific machine sets: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: # ... managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: Partial partial: machineResourceSelector: matchLabels: update-boot-image: "true" 2 1 Activates the boot image update feature. 2 Specifies that any machine set with this label is to be updated. Tip If an appropriate label is not present on the machine set, add a key/value pair by running a command similar to following: Verification View the current state of the boot image updates by viewing the machine configuration object: USD oc get machineconfiguration cluster -n openshift-machine-api -o yaml Example machine set with the boot image reference kind: MachineConfiguration metadata: name: cluster # ... status: conditions: - lastTransitionTime: "2024-09-09T13:51:37Z" 1 message: Reconciled 1 of 2 MAPI MachineSets | Reconciled 0 of 0 CAPI MachineSets | Reconciled 0 of 0 CAPI MachineDeployments reason: BootImageUpdateConfigurationAdded status: "True" type: BootImageUpdateProgressing - lastTransitionTime: "2024-09-09T13:51:37Z" 2 message: 0 Degraded MAPI MachineSets | 0 Degraded CAPI MachineSets | 0 CAPI MachineDeployments reason: BootImageUpdateConfigurationAdded status: "False" type: BootImageUpdateDegraded 1 Status of the boot image update. Cluster CAPI Operator machine sets and machine deployments are not currently supported for boot image updates. 2 Indicates if any boot image updates failed. If any of the updates fail, the Machine Config Operator is degraded. In this case, manual intervention might be required. 
Get the boot image version by running the following command: USD oc get machinesets <machineset_name> -n openshift-machine-api -o yaml Example machine set with the boot image reference apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: ci-ln-77hmkpt-72292-d4pxp update-boot-image: "true" name: ci-ln-77hmkpt-72292-d4pxp-worker-a namespace: openshift-machine-api spec: # ... template: # ... spec: # ... providerSpec: # ... value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-416-92-202402201450-0-gcp-x86-64 1 # ... 1 This boot image is the same as the current OpenShift Container Platform version. Additional resources Enabling features using feature gates 5.2. Disabling updated boot images To disable the updated boot image feature, edit the MachineConfiguration object to remove the managedBootImages stanza. If you disable this feature after some nodes have been created with the new boot image version, any existing nodes retain their current boot image. Turning off this feature does not rollback the nodes or machine sets to the originally-installed boot image. The machine sets retain the boot image version that was present when the feature was enabled and is not updated again when the cluster is upgraded to a new OpenShift Container Platform version in the future. Procedure Disable updated boot images by editing the MachineConfiguration object: USD oc edit MachineConfiguration cluster Remove the managedBootImages stanza: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: # ... managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 1 Remove the entire stanza to disable updated boot images. | [
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: ci-ln-hmy310k-72292-5f87z-worker-a namespace: openshift-machine-api spec: template: spec: providerSpec: value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-412-85-202203181601-0-gcp-x86-64 1",
"oc edit MachineConfiguration cluster",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 2",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: Partial partial: machineResourceSelector: matchLabels: update-boot-image: \"true\" 2",
"oc label machineset.machine ci-ln-hmy310k-72292-5f87z-worker-a update-boot-image=true -n openshift-machine-api",
"oc get machineconfiguration cluster -n openshift-machine-api -o yaml",
"kind: MachineConfiguration metadata: name: cluster status: conditions: - lastTransitionTime: \"2024-09-09T13:51:37Z\" 1 message: Reconciled 1 of 2 MAPI MachineSets | Reconciled 0 of 0 CAPI MachineSets | Reconciled 0 of 0 CAPI MachineDeployments reason: BootImageUpdateConfigurationAdded status: \"True\" type: BootImageUpdateProgressing - lastTransitionTime: \"2024-09-09T13:51:37Z\" 2 message: 0 Degraded MAPI MachineSets | 0 Degraded CAPI MachineSets | 0 CAPI MachineDeployments reason: BootImageUpdateConfigurationAdded status: \"False\" type: BootImageUpdateDegraded",
"oc get machinesets <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: ci-ln-77hmkpt-72292-d4pxp update-boot-image: \"true\" name: ci-ln-77hmkpt-72292-d4pxp-worker-a namespace: openshift-machine-api spec: template: spec: providerSpec: value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-416-92-202402201450-0-gcp-x86-64 1",
"oc edit MachineConfiguration cluster",
"apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/machine_configuration/mco-update-boot-images |
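Purely as an illustrative sketch, the following combines the labeling Tip from the Partial-mode example with a jsonpath query against the GCP providerSpec layout shown in the machine set examples. The machine set name is a placeholder, and the field path assumes the GCP disk structure shown above.

```
# Opt one machine set in to managed boot image updates (matches the Partial-mode example).
oc -n openshift-machine-api label machineset.machine <machineset_name> update-boot-image=true

# After the next cluster update, print the boot image the machine set references (GCP providerSpec layout).
oc -n openshift-machine-api get machineset <machineset_name> \
  -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}'
```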
Chapter 6. Logging in to IdM in the Web UI: Using a Kerberos ticket | Chapter 6. Logging in to IdM in the Web UI: Using a Kerberos ticket Learn more about how to configure your environment to enable Kerberos login to the IdM Web UI and accessing IdM using Kerberos authentication. Prerequisites Installed IdM server in your network environment For details, see Installing Identity Management in Red Hat Enterprise Linux 8 6.1. Kerberos authentication in Identity Management Identity Management (IdM) uses the Kerberos protocol to support single sign-on. Single sign-on authentication allows you to provide the correct user name and password only once, and you can then access Identity Management services without the system prompting for credentials again. The IdM server provides Kerberos authentication immediately after the installation if the DNS and certificate settings have been configured properly. For details, see Installing Identity Management . To use Kerberos authentication on hosts, install: The IdM client For details, see Preparing the system for Identity Management client installation . The krb5conf package 6.2. Using kinit to log in to IdM manually Follow this procedure to use the kinit utility to authenticate to an Identity Management (IdM) environment manually. The kinit utility obtains and caches a Kerberos ticket-granting ticket (TGT) on behalf of an IdM user. Note Only use this procedure if you have destroyed your initial Kerberos TGT or if it has expired. As an IdM user, when logging onto your local machine you are also automatically logging in to IdM. This means that after logging in, you are not required to use the kinit utility to access IdM resources. Procedure To log in to IdM Under the user name of the user who is currently logged in on the local system, use kinit without specifying a user name. For example, if you are logged in as example_user on the local system: If the user name of the local user does not match any user entry in IdM, the authentication attempt fails: Using a Kerberos principal that does not correspond to your local user name, pass the required user name to the kinit utility. For example, to log in as the admin user: Verification To verify that the login was successful, use the klist utility to display the cached TGT. In the following example, the cache contains a ticket for the example_user principal, which means that on this particular host, only example_user is currently allowed to access IdM services: 6.3. Configuring the browser for Kerberos authentication To enable authentication with a Kerberos ticket, you may need a browser configuration. The following steps help you to support Kerberos negotiation for accessing the IdM domain. Each browser supports Kerberos in a different way and needs different set up. The IdM Web UI includes guidelines for the following browsers: Firefox Chrome Procedure Open the IdM Web UI login dialog in your web browser. Click the link for browser configuration on the Web UI login screen. Follow the steps on the configuration page. After the setup, turn back to the IdM Web UI and click Log in . 6.4. Logging in to the web UI using a Kerberos ticket Follow this procedure to log in to the IdM Web UI using a Kerberos ticket-granting ticket (TGT). The TGT expires at a predefined time. The default time interval is 24 hours and you can change it in the IdM Web UI. After the time interval expires, you need to renew the ticket: Using the kinit command. Using IdM login credentials in the Web UI login dialog. 
Procedure Open the IdM Web UI. If Kerberos authentication works correctly and you have a valid ticket, you will be automatically authenticated and the Web UI opens. If the ticket is expired, it is necessary to authenticate yourself with credentials first. However, the next time you open the IdM Web UI, it opens automatically without displaying the login dialog. If you see an error message Authentication with Kerberos failed , verify that your browser is configured for Kerberos authentication. See Configuring the browser for Kerberos authentication . 6.5. Configuring an external system for Kerberos authentication Follow this procedure to configure an external system so that Identity Management (IdM) users can log in to IdM from the external system using their Kerberos credentials. Enabling Kerberos authentication on external systems is especially useful when your infrastructure includes multiple realms or overlapping domains. It is also useful if the system has not been enrolled into any IdM domain through ipa-client-install . To enable Kerberos authentication to IdM from a system that is not a member of the IdM domain, define an IdM-specific Kerberos configuration file on the external system. Prerequisites The krb5-workstation package is installed on the external system. To find out whether the package is installed, use the following CLI command: Procedure Copy the /etc/krb5.conf file from the IdM server to the external system. For example: Warning Do not overwrite the existing krb5.conf file on the external system. On the external system, set the terminal session to use the copied IdM Kerberos configuration file: The KRB5_CONFIG variable applies only to the current terminal session and is lost when you log out. To prevent this loss, add the export command to your shell profile, for example to the ~/.bashrc file. Copy the Kerberos configuration snippets from the /etc/krb5.conf.d/ directory to the external system. Configure the browser on the external system, as described in Configuring the browser for Kerberos authentication . Users on the external system can now use the kinit utility to authenticate against the IdM server. 6.6. Web UI login for Active Directory users To enable Web UI login for Active Directory users, define an ID override for each Active Directory user in the Default Trust View . For example: Additional resources Using ID views for Active Directory users | [
"[example_user@server ~]USD kinit Password for [email protected]: [example_user@server ~]USD",
"[example_user@server ~]USD kinit kinit: Client '[email protected]' not found in Kerberos database while getting initial credentials",
"[example_user@server ~]USD kinit admin Password for [email protected]: [example_user@server ~]USD",
"klist Ticket cache: KEYRING:persistent:0:0 Default principal: [email protected] Valid starting Expires Service principal 11/10/2019 08:35:45 11/10/2019 18:35:45 krbtgt/[email protected]",
"yum list installed krb5-workstation Installed Packages krb5-workstation.x86_64 1.16.1-19.el8 @BaseOS",
"scp /etc/krb5.conf root@ externalsystem.example.com :/etc/krb5_ipa.conf",
"export KRB5_CONFIG=/etc/krb5_ipa.conf",
"[admin@server ~]USD ipa idoverrideuser-add 'Default Trust View' ad_user @ ad.example.com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/logging-in-to-ipa-in-the-web-ui-using-a-kerberos-ticket_configuring-and-managing-idm |
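A minimal shell sketch of keeping the IdM-specific Kerberos configuration active across sessions on the external system described above. It assumes the file was copied to /etc/krb5_ipa.conf as in the scp example; the principal used for verification is only an illustration and must be replaced with a real IdM user:

# Make the IdM-specific configuration the default for future shells of this user
echo 'export KRB5_CONFIG=/etc/krb5_ipa.conf' >> ~/.bashrc
source ~/.bashrc

# Verify that the variable is set and that authentication against IdM works
echo "$KRB5_CONFIG"
kinit [email protected]   # example principal
klist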
3.10. Virtual Network Interface Cards | 3.10. Virtual Network Interface Cards Virtual network interface cards (vNICs) are virtual network interfaces that are based on the physical NICs of a host. Each host can have multiple NICs, and each NIC can be a base for multiple vNICs. When you attach a vNIC to a virtual machine, the Red Hat Virtualization Manager creates several associations between the virtual machine to which the vNIC is being attached, the vNIC itself, and the physical host NIC on which the vNIC is based. Specifically, when a vNIC is attached to a virtual machine, a new vNIC and MAC address are created on the physical host NIC on which the vNIC is based. Then, the first time the virtual machine starts after that vNIC is attached, libvirt assigns the vNIC a PCI address. The MAC address and PCI address are then used to obtain the name of the vNIC (for example, eth0 ) in the virtual machine. The process for assigning MAC addresses and associating those MAC addresses with PCI addresses is slightly different when creating virtual machines based on templates or snapshots: If PCI addresses have already been created for a template or snapshot, the vNICs on virtual machines created based on that template or snapshot are ordered in accordance with those PCI addresses. MAC addresses are then allocated to the vNICs in that order. If PCI addresses have not already been created for a template, the vNICs on virtual machines created based on that template are ordered alphabetically. MAC addresses are then allocated to the vNICs in that order. If PCI addresses have not already been created for a snapshot, the Red Hat Virtualization Manager allocates new MAC addresses to the vNICs on virtual machines based on that snapshot. Once created, vNICs are added to a network bridge device. The network bridge devices connect virtual machines to virtual logical networks. Running the ip addr show command on a virtualization host shows all of the vNICs that are associated with virtual machines on that host. Also visible are any network bridges that have been created to back logical networks, and any NICs used by the host. 
[root@rhev-host-01 ~]# ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:21:86:a2:85:cd brd ff:ff:ff:ff:ff:ff inet6 fe80::221:86ff:fea2:85cd/64 scope link valid_lft forever preferred_lft forever 3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000 link/ether 00:21:6b:cc:14:6c brd ff:ff:ff:ff:ff:ff 5: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 4a:d5:52:c2:7f:4b brd ff:ff:ff:ff:ff:ff 6: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 7: bond4: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 8: bond1: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 9: bond2: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 10: bond3: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 11: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:21:86:a2:85:cd brd ff:ff:ff:ff:ff:ff inet 10.64.32.134/23 brd 10.64.33.255 scope global ovirtmgmt inet6 fe80::221:86ff:fea2:85cd/64 scope link valid_lft forever preferred_lft forever The console output from the command shows several devices: one loop back device ( lo ), one Ethernet device ( eth0 ), one wireless device ( wlan0 ), one VDSM dummy device ( ;vdsmdummy; ), five bond devices ( bond0 , bond4 , bond1 , bond2 , bond3 ), and one network bridge ( ovirtmgmt ). vNICs are all members of a network bridge device and logical network. Bridge membership can be displayed using the brctl show command: [root@rhev-host-01 ~]# brctl show bridge name bridge id STP enabled interfaces ovirtmgmt 8000.e41f13b7fdd4 no vnet002 vnet001 vnet000 eth0 The console output from the brctl show command shows that the virtio vNICs are members of the ovirtmgmt bridge. All of the virtual machines that the vNICs are associated with are connected to the ovirtmgmt logical network. The eth0 NIC is also a member of the ovirtmgmt bridge. The eth0 device is cabled to a switch that provides connectivity beyond the host. | [
"ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 00:21:86:a2:85:cd brd ff:ff:ff:ff:ff:ff inet6 fe80::221:86ff:fea2:85cd/64 scope link valid_lft forever preferred_lft forever 3: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000 link/ether 00:21:6b:cc:14:6c brd ff:ff:ff:ff:ff:ff 5: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 4a:d5:52:c2:7f:4b brd ff:ff:ff:ff:ff:ff 6: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 7: bond4: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 8: bond1: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 9: bond2: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 10: bond3: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff 11: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether 00:21:86:a2:85:cd brd ff:ff:ff:ff:ff:ff inet 10.64.32.134/23 brd 10.64.33.255 scope global ovirtmgmt inet6 fe80::221:86ff:fea2:85cd/64 scope link valid_lft forever preferred_lft forever",
"brctl show bridge name bridge id STP enabled interfaces ovirtmgmt 8000.e41f13b7fdd4 no vnet002 vnet001 vnet000 eth0"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/virtual_network_interface_controller_vnic |
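To trace which virtual machine owns a given vnet device listed by the commands above, the libvirt tooling on the host can be queried read-only. A small sketch, assuming the virsh utility is available on the virtualization host and vm01 is a hypothetical virtual machine name:

virsh -r list                 # read-only list of running virtual machines on the host
virsh -r domiflist vm01       # shows each vNIC's type, source bridge (for example, ovirtmgmt), and MAC address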
Chapter 1. Introduction to Red Hat Single Sign-On for OpenShift | Chapter 1. Introduction to Red Hat Single Sign-On for OpenShift 1.1. What is Red Hat Single Sign-On? Red Hat Single Sign-On is an integrated sign-on solution available as a Red Hat JBoss Middleware for OpenShift containerized image. The Red Hat Single Sign-On for OpenShift image provides an authentication server for users to centrally log in, log out, register, and manage user accounts for web applications, mobile applications, and RESTful web services. Red Hat Single Sign-On for OpenShift is available on the following platforms: x86_64, IBM Z, and IBM Power Systems. 1.2. Comparison: Red Hat Single Sign-On for OpenShift Image versus Red Hat Single Sign-On The Red Hat Single Sign-On for OpenShift image version number 7.6.11 is based on Red Hat Single Sign-On 7.6.11. There are some important differences in functionality between the Red Hat Single Sign-On for OpenShift image and Red Hat Single Sign-On that should be considered: The Red Hat Single Sign-On for OpenShift image includes all of the functionality of Red Hat Single Sign-On. In addition, the Red Hat Single Sign-On-enabled JBoss EAP image automatically handles OpenID Connect or SAML client registration and configuration for .war deployments that contain <auth-method>KEYCLOAK</auth-method> or <auth-method>KEYCLOAK-SAML</auth-method> in their respective web.xml files. 1.3. Templates for use with this software Red Hat offers multiple OpenShift application templates using the Red Hat Single Sign-On for OpenShift image version number 7.6.11. These templates define the resources needed to develop Red Hat Single Sign-On 7.6.11 server based deployment. The templates can mainly be split into two categories: passthrough templates and reencryption templates. Some other miscellaneous templates also exist. 1.3.1. Passthrough templates These templates require that HTTPS, JGroups keystores, and a truststore for the Red Hat Single Sign-On server exist beforehand. They secure the TLS communication using passthrough TLS termination. sso76-ocp3-https , sso76-ocp4-https : Red Hat Single Sign-On 7.6.11 backed by internal H2 database on the same pod. sso76-ocp3-postgresql , sso76-ocp4-postgresql : Red Hat Single Sign-On 7.6.11 backed by ephemeral PostgreSQL database on a separate pod. sso76-ocp3-postgresql-persistent , sso76-ocp4-postgresql-persistent : Red Hat Single Sign-On 7.6.11 backed by persistent PostgreSQL database on a separate pod. Note Templates for using Red Hat Single Sign-On with MySQL / MariaDB databases have been removed and are not available since Red Hat Single Sign-On version 7.4. 1.3.2. Re-encryption templates Separate re-encryption templates exist for OpenShift 3.x and for OpenShift 4.x 1.3.2.1. OpenShift 3.x The OpenShift 3.x templates use the service-ca.crt CA bundle file as part of the Service Serving Certificate Secrets to generate TLS certificates and keys for serving secure content. The Red Hat Single Sign-On truststore is also created automatically, containing the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt CA certificate file, which is used to sign the certificate for the HTTPS keystore. The truststore for the Red Hat Single Sign-On server is pre-populated with the all known, trusted CA certificate files found in the Java system path. These templates secure the TLS communication using re-encryption TLS termination. The JGroups cluster traffic is authenticated using the AUTH protocol and encrypted using the ASYM_ENCRYPT protocol. 
sso76-ocp3-x509-https : Red Hat Single Sign-On 7.6.11 with auto-generated HTTPS keystore and Red Hat Single Sign-On truststore, backed by internal H2 database. sso76-ocp3-x509-postgresql-persistent : Red Hat Single Sign-On 7.6.11 with auto-generated HTTPS keystore and Red Hat Single Sign-On truststore, backed by persistent PostgreSQL database. 1.3.2.2. OpenShift 4.x The OpenShift 4.x templates use the internal service serving x509 certificate secrets to automatically create the HTTPS keystore used for serving secure content. These templates use a new service CA bundle that contains the service.beta.openshift.io/inject-cabundle=true ConfigMap definition. The truststore for the Red Hat Single Sign-On server is pre-populated with the all known, trusted CA certificate files found in the Java system path. These templates secure the TLS communication using re-encryption TLS termination. The JGroups cluster traffic is authenticated using the AUTH protocol and encrypted using the ASYM_ENCRYPT protocol. sso76-ocp4-x509-https : Red Hat Single Sign-On 7.6.11 with auto-generated HTTPS keystore and Red Hat Single Sign-On truststore, backed by internal H2 database. The ASYM_ENCRYPT JGroups protocol is used for encryption of cluster traffic. sso76-ocp4-x509-postgresql-persistent : Red Hat Single Sign-On 7.6.11 with auto-generated HTTPS keystore and Red Hat Single Sign-On truststore, backed by persistent PostgreSQL database. The ASYM_ENCRYPT JGroups protocol is used for encryption of cluster traffic. 1.3.3. Other templates Other templates that integrate with Red Hat Single Sign-On are also available: eap64-sso-s2i : Red Hat Single Sign-On-enabled Red Hat JBoss Enterprise Application Platform 6.4. eap71-sso-s2i : Red Hat Single Sign-On-enabled Red Hat JBoss Enterprise Application Platform 7.1. datavirt63-secure-s2i : Red Hat Single Sign-On-enabled Red Hat JBoss Data Virtualization 6.3. These templates contain environment variables specific to Red Hat Single Sign-On that enable automatic Red Hat Single Sign-On client registration when deployed. Additional resources Automatic and Manual Red Hat Single Sign-On Client Registration Methods Passthrough TLS termination, OpenShift 3.11 Re-encryption TLS termination, OpenShift 3.11 Secured Routes, OpenShift 4.11 1.4. Version compatibility and support For details about OpenShift image version compatibility, see the Supported Configurations page. Note The Red Hat Single Sign-On for OpenShift image versions between 7.0 and 7.5 are deprecated and they will no longer receive updates of image and application templates. To deploy new applications, use the 7.6 version of the Red Hat Single Sign-On for OpenShift image along with the application templates specific to this image version. | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/red_hat_single_sign-on_for_openshift/introduction_to_red_hat_single_sign_on_for_openshift |
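The introduction above notes that the Red Hat Single Sign-On-enabled JBoss EAP image reacts to <auth-method>KEYCLOAK</auth-method> in a deployment's web.xml. The fragment below is only a hedged sketch of where that element sits inside the <web-app> root element of WEB-INF/web.xml; the realm name, URL pattern, and role name are placeholders rather than values taken from this document:

<login-config>
    <auth-method>KEYCLOAK</auth-method>
    <realm-name>example-realm</realm-name>
</login-config>
<security-constraint>
    <web-resource-collection>
        <web-resource-name>protected-pages</web-resource-name>
        <url-pattern>/protected/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>user</role-name>
    </auth-constraint>
</security-constraint>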
Chapter 3. Using connections | Chapter 3. Using connections 3.1. Adding a connection to your data science project You can enhance your data science project by adding a connection that contains the configuration parameters needed to connect to a data source or sink. When you want to work with a very large data sets, you can store your data in an S3-compatible object storage bucket or a URI-based repository, so that you do not fill up your local storage. You also have the option of associating the connection with an existing workbench that does not already have a connection. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have created a data science project that you can add a connection to. You have access to S3-compatible object storage or a URI-based repository. If you intend to add the connection to an existing workbench, you have saved any data in the workbench to avoid losing work. Procedure From the OpenShift AI dashboard, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that you want to add a connection to. A project details page opens. Click the Connections tab. Click Add connection . In the Add connection modal, select a Connection type . The S3 compatible object storage and URI options are pre-installed connection types. Additional options might be available if your OpenShift AI administrator added them. The Add connection form opens with fields specific to the connection type that you selected. Enter a unique name for the connection. A resource name is generated based on the name of the connection. A resource name is the label for the underlying resource in OpenShift. Optional: Edit the default resource name. Note that you cannot change the resource name after you create the connection. Optional: Provide a description of the connection. Complete the form depending on the connection type that you selected. For example: If you selected S3 compatible object storage as the connection type, configure the connection details: In the Access key field, enter the access key ID for the S3-compatible object storage provider. In the Secret key field, enter the secret access key for the S3-compatible object storage account that you specified. In the Endpoint field, enter the endpoint of your S3-compatible object storage bucket. Note Make sure to use the appropriate endpoint format. Improper formatting might cause connection errors or restrict access to storage resources. For more information about how to format object storage endpoints, see Overview of object storage endpoints . In the Region field, enter the default region of your S3-compatible object storage account. In the Bucket field, enter the name of your S3-compatible object storage bucket. Click Create . If you selected URI in the preceding step, in the URI field, enter the Uniform Resource Identifier (URI). Click Add connection . Verification The connection that you added appears on the Connections tab for the project. 3.2. Updating a connection You can edit the configuration of an existing connection as described in this procedure. Note Any changes that you make to a connection are not applied to dependent resources (for example, a workbench) until those resources are restarted, redeployed, or otherwise regenerated. Prerequisites You have logged in to Red Hat OpenShift AI. 
If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have created a data science project, created a workbench, and you have defined a connection. Procedure From the OpenShift AI dashboard, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that contains the connection that you want to change. A project details page opens. Click the Connections tab. Click the action menu ( ... ) beside the connection that you want to change and then click Edit . The Edit connection form opens. Make your changes. Click Save . Verification The updated connection is displayed on the Connections tab for the project. 3.3. Deleting a connection You can delete connections that are no longer relevant to your data science project. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have created a data science project with a connection. Procedure From the OpenShift AI dashboard, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that you want to delete the connection from. A project details page opens. Click the Connections tab. Click the action menu ( ... ) beside the connection that you want to delete and then click Delete connection . The Delete connection dialog opens. Enter the name of the connection in the text field to confirm that you intend to delete it. Click Delete connection . Verification The connection that you deleted is no longer displayed on the Connections page for the project. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/working_on_data_science_projects/using-connections_projects |
Chapter 7. Scheduling Windows container workloads | Chapter 7. Scheduling Windows container workloads You can schedule Windows workloads to Windows compute nodes. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a Windows container as the OS image. You have created a Windows compute machine set. 7.1. Windows pod placement Before deploying your Windows workloads to the cluster, you must configure your Windows node scheduling so pods are assigned correctly. Since you have a machine hosting your Windows node, it is managed the same as a Linux-based node. Likewise, scheduling a Windows pod to the appropriate Windows node is completed similarly, using mechanisms like taints, tolerations, and node selectors. With multiple operating systems, and the ability to run multiple Windows OS variants in the same cluster, you must map your Windows pods to a base Windows OS variant by using a RuntimeClass object. For example, if you have multiple Windows nodes running on different Windows Server container versions, the cluster could schedule your Windows pods to an incompatible Windows OS variant. You must have RuntimeClass objects configured for each Windows OS variant on your cluster. Using a RuntimeClass object is also recommended if you have only one Windows OS variant available in your cluster. For more information, see Microsoft's documentation on Host and container version compatibility . Also, it is recommended that you set the spec.os.name.windows parameter in your workload pods. The Windows Machine Config Operator (WMCO) uses this field to authoritatively identify the pod operating system for validation and is used to enforce Windows-specific pod security context constraints (SCCs). Currently, this parameter has no effect on pod scheduling. For more information about this parameter, see the Kubernetes Pods documentation . Important The container base image must be the same Windows OS version and build number that is running on the node where the container is to be scheduled. Also, if you upgrade the Windows nodes from one version to another, for example going from 20H2 to 2022, you must upgrade your container base image to match the new version. For more information, see Windows container version compatibility . Additional resources Controlling pod placement using the scheduler Controlling pod placement using node taints Placing pods on specific nodes using node selectors 7.2. Creating a RuntimeClass object to encapsulate scheduling mechanisms Using a RuntimeClass object simplifies the use of scheduling mechanisms like taints and tolerations; you deploy a runtime class that encapsulates your taints and tolerations and then apply it to your pods to schedule them to the appropriate node. Creating a runtime class is also necessary in clusters that support multiple operating system variants. Procedure Create a RuntimeClass object YAML file. For example, runtime-class.yaml : apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: windows2019 1 handler: 'runhcs-wcow-process' scheduling: nodeSelector: 2 kubernetes.io/os: 'windows' kubernetes.io/arch: 'amd64' node.kubernetes.io/windows-build: '10.0.17763' tolerations: 3 - effect: NoSchedule key: os operator: Equal value: "windows" - effect: NoSchedule key: os operator: Equal value: "Windows" 1 Specify the RuntimeClass object name, which is defined in the pods you want to be managed by this runtime class.
2 Specify labels that must be present on nodes that support this runtime class. Pods using this runtime class can only be scheduled to a node matched by this selector. The node selector of the runtime class is merged with the existing node selector of the pod. Any conflicts prevent the pod from being scheduled to the node. For Windows 2019, specify the node.kubernetes.io/windows-build: '10.0.17763' label. For Windows 2022, specify the node.kubernetes.io/windows-build: '10.0.20348' label. 3 Specify tolerations to append to pods, excluding duplicates, running with this runtime class during admission. This combines the set of nodes tolerated by the pod and the runtime class. Create the RuntimeClass object: USD oc create -f <file-name>.yaml For example: USD oc create -f runtime-class.yaml Apply the RuntimeClass object to your pod to ensure it is scheduled to the appropriate operating system variant: apiVersion: v1 kind: Pod metadata: name: my-windows-pod spec: runtimeClassName: windows2019 1 # ... 1 Specify the runtime class to manage the scheduling of your pod. 7.3. Sample Windows container workload deployment You can deploy Windows container workloads to your cluster once you have a Windows compute node available. Note This sample deployment is provided for reference only. Example Service object apiVersion: v1 kind: Service metadata: name: win-webserver labels: app: win-webserver spec: ports: # the port that this service should serve on - port: 80 targetPort: 80 selector: app: win-webserver type: LoadBalancer Example Deployment object apiVersion: apps/v1 kind: Deployment metadata: labels: app: win-webserver name: win-webserver spec: selector: matchLabels: app: win-webserver replicas: 1 template: metadata: labels: app: win-webserver name: win-webserver spec: containers: - name: windowswebserver image: mcr.microsoft.com/windows/servercore:ltsc2019 1 imagePullPolicy: IfNotPresent command: - powershell.exe 2 - -command - USDlistener = New-Object System.Net.HttpListener; USDlistener.Prefixes.Add('http://*:80/'); USDlistener.Start();Write-Host('Listening at http://*:80/'); while (USDlistener.IsListening) { USDcontext = USDlistener.GetContext(); USDresponse = USDcontext.Response; USDcontent='<html><body><H1>Red Hat OpenShift + Windows Container Workloads</H1></body></html>'; USDbuffer = [System.Text.Encoding]::UTF8.GetBytes(USDcontent); USDresponse.ContentLength64 = USDbuffer.Length; USDresponse.OutputStream.Write(USDbuffer, 0, USDbuffer.Length); USDresponse.Close(); }; securityContext: runAsNonRoot: false windowsOptions: runAsUserName: "ContainerAdministrator" os: name: "windows" runtimeClassName: windows2019 3 1 Specify the container image to use: mcr.microsoft.com/powershell:<tag> or mcr.microsoft.com/windows/servercore:<tag> . The container image must match the Windows version running on the node. For Windows 2019, use the ltsc2019 tag. For Windows 2022, use the ltsc2022 tag. 2 Specify the commands to execute on the container. For the mcr.microsoft.com/powershell:<tag> container image, you must define the command as pwsh.exe . For the mcr.microsoft.com/windows/servercore:<tag> container image, you must define the command as powershell.exe . 3 Specify the runtime class you created for the Windows operating system variant on your cluster. 7.4. Support for Windows CSI drivers Red Hat OpenShift support for Windows Containers installs CSI Proxy on all Windows nodes in the cluster. CSI Proxy is a plug-in that enables CSI drivers to perform storage operations on the node. 
To use persistent storage with Windows workloads, you must deploy a specific Windows CSI driver daemon set, as described in your storage provider's documentation. By default, the WMCO does not automatically create the Windows CSI driver daemon set. See the list of production drivers in the Kubernetes CSI Developer Documentation. Note Red Hat does not provide support for the third-party production drivers listed in the Kubernetes CSI Developer Documentation. 7.5. Scaling a compute machine set manually To add or remove an instance of a machine in a compute machine set, you can manually scale the compute machine set. This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have compute machine sets. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets -n openshift-machine-api The compute machine sets are listed in the form of <clusterid>-worker-<aws-region-az> . View the compute machines that are in the cluster by running the following command: USD oc get machine -n openshift-machine-api Set the annotation on the compute machine that you want to delete by running the following command: USD oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true" Scale the compute machine set by running one of the following commands: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 You can scale the compute machine set up or down. It takes several minutes for the new machines to be available. Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine. You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine. Verification Verify the deletion of the intended machine by running the following command: USD oc get machines | [
"apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: windows2019 1 handler: 'runhcs-wcow-process' scheduling: nodeSelector: 2 kubernetes.io/os: 'windows' kubernetes.io/arch: 'amd64' node.kubernetes.io/windows-build: '10.0.17763' tolerations: 3 - effect: NoSchedule key: os operator: Equal value: \"windows\" - effect: NoSchedule key: os operator: Equal value: \"Windows\"",
"oc create -f <file-name>.yaml",
"oc create -f runtime-class.yaml",
"apiVersion: v1 kind: Pod metadata: name: my-windows-pod spec: runtimeClassName: windows2019 1",
"apiVersion: v1 kind: Service metadata: name: win-webserver labels: app: win-webserver spec: ports: # the port that this service should serve on - port: 80 targetPort: 80 selector: app: win-webserver type: LoadBalancer",
"apiVersion: apps/v1 kind: Deployment metadata: labels: app: win-webserver name: win-webserver spec: selector: matchLabels: app: win-webserver replicas: 1 template: metadata: labels: app: win-webserver name: win-webserver spec: containers: - name: windowswebserver image: mcr.microsoft.com/windows/servercore:ltsc2019 1 imagePullPolicy: IfNotPresent command: - powershell.exe 2 - -command - USDlistener = New-Object System.Net.HttpListener; USDlistener.Prefixes.Add('http://*:80/'); USDlistener.Start();Write-Host('Listening at http://*:80/'); while (USDlistener.IsListening) { USDcontext = USDlistener.GetContext(); USDresponse = USDcontext.Response; USDcontent='<html><body><H1>Red Hat OpenShift + Windows Container Workloads</H1></body></html>'; USDbuffer = [System.Text.Encoding]::UTF8.GetBytes(USDcontent); USDresponse.ContentLength64 = USDbuffer.Length; USDresponse.OutputStream.Write(USDbuffer, 0, USDbuffer.Length); USDresponse.Close(); }; securityContext: runAsNonRoot: false windowsOptions: runAsUserName: \"ContainerAdministrator\" os: name: \"windows\" runtimeClassName: windows2019 3",
"oc get machinesets -n openshift-machine-api",
"oc get machine -n openshift-machine-api",
"oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"oc get machines"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/windows_container_support_for_openshift/scheduling-windows-workloads |
Chapter 1. Overview | Chapter 1. Overview Red Hat Single Sign-On is a single sign on solution for web apps and RESTful web services. The goal of Red Hat Single Sign-On is to make security simple so that it is easy for application developers to secure the apps and services they have deployed in their organization. Security features that developers normally have to write for themselves are provided out of the box and are easily tailorable to the individual requirements of your organization. Red Hat Single Sign-On provides customizable user interfaces for login, registration, administration, and account management. You can also use Red Hat Single Sign-On as an integration platform to hook it into existing LDAP and Active Directory servers. You can also delegate authentication to third party identity providers like Facebook and Google+. 1.1. Features Single-Sign On and Single-Sign Out for browser applications. OpenID Connect support. OAuth 2.0 support. SAML support. Identity Brokering - Authenticate with external OpenID Connect or SAML Identity Providers. Social Login - Enable login with Google, GitHub, Facebook, Twitter, and other social networks. User Federation - Sync users from LDAP and Active Directory servers. Kerberos bridge - Automatically authenticate users that are logged-in to a Kerberos server. Admin Console for central management of users, roles, role mappings, clients and configuration. Account Management console that allows users to centrally manage their account. Theme support - Customize all user facing pages to integrate with your applications and branding. Two-factor Authentication - Support for TOTP/HOTP via Google Authenticator or FreeOTP. Login flows - optional user self-registration, recover password, verify email, require password update, etc. Session management - Admins and users themselves can view and manage user sessions. Token mappers - Map user attributes, roles, etc. how you want into tokens and statements. Not-before revocation policies per realm, application and user. CORS support - Client adapters have built-in support for CORS. Client adapters for JavaScript applications, JBoss EAP, Fuse, etc. Supports any platform/language that has an OpenID Connect Relying Party library or SAML 2.0 Service Provider library. 1.2. How Does Security Work? Red Hat Single Sign-On is a separate server that you manage on your network. Applications are configured to point to and be secured by this server. Red Hat Single Sign-On uses open protocol standards like OpenID Connect or SAML 2.0 to secure your applications. Browser applications redirect a user's browser from the application to the Red Hat Single Sign-On authentication server where they enter their credentials. This is important because users are completely isolated from applications and applications never see a user's credentials. Applications instead are given an identity token or assertion that is cryptographically signed. These tokens can have identity information like username, address, email, and other profile data. They can also hold permission data so that applications can make authorization decisions. These tokens can also be used to make secure invocations on REST-based services. 1.3. Core Concepts and Terms There are some key concepts and terms you should be aware of before attempting to use Red Hat Single Sign-On to secure your web applications and REST services. users Users are entities that are able to log into your system. 
They can have attributes associated with themselves like email, username, address, phone number, and birth day. They can be assigned group membership and have specific roles assigned to them. authentication The process of identifying and validating a user. authorization The process of granting access to a user. credentials Credentials are pieces of data that Red Hat Single Sign-On uses to verify the identity of a user. Some examples are passwords, one-time-passwords, digital certificates, or even fingerprints. roles Roles identify a type or category of user. Admin , user , manager , and employee are all typical roles that may exist in an organization. Applications often assign access and permissions to specific roles rather than individual users as dealing with users can be too fine grained and hard to manage. user role mapping A user role mapping defines a mapping between a role and a user. A user can be associated with zero or more roles. This role mapping information can be encapsulated into tokens and assertions so that applications can decide access permissions on various resources they manage. composite roles A composite role is a role that can be associated with other roles. For example a superuser composite role could be associated with the sales-admin and order-entry-admin roles. If a user is mapped to the superuser role they also inherit the sales-admin and order-entry-admin roles. groups Groups manage groups of users. Attributes can be defined for a group. You can map roles to a group as well. Users that become members of a group inherit the attributes and role mappings that group defines. realms A realm manages a set of users, credentials, roles, and groups. A user belongs to and logs into a realm. Realms are isolated from one another and can only manage and authenticate the users that they control. clients Clients are entities that can request Red Hat Single Sign-On to authenticate a user. Most often, clients are applications and services that want to use Red Hat Single Sign-On to secure themselves and provide a single sign-on solution. Clients can also be entities that just want to request identity information or an access token so that they can securely invoke other services on the network that are secured by Red Hat Single Sign-On. client adapters Client adapters are plugins that you install into your application environment to be able to communicate and be secured by Red Hat Single Sign-On. Red Hat Single Sign-On has a number of adapters for different platforms that you can download. There are also third-party adapters you can get for environments that we don't cover. consent Consent is when you as an admin want a user to give permission to a client before that client can participate in the authentication process. After a user provides their credentials, Red Hat Single Sign-On will pop up a screen identifying the client requesting a login and what identity information is requested of the user. User can decide whether or not to grant the request. client scopes When a client is registered, you must define protocol mappers and role scope mappings for that client. It is often useful to store a client scope, to make creating new clients easier by sharing some common settings. This is also useful for requesting some claims or roles to be conditionally based on the value of scope parameter. Red Hat Single Sign-On provides the concept of a client scope for this. client role Clients can define roles that are specific to them. 
This is basically a role namespace dedicated to the client. identity token A token that provides identity information about the user. Part of the OpenID Connect specification. access token A token that can be provided as part of an HTTP request that grants access to the service being invoked on. This is part of the OpenID Connect and OAuth 2.0 specification. assertion Information about a user. This usually pertains to an XML blob that is included in a SAML authentication response that provided identity metadata about an authenticated user. service account Each client has a built-in service account which allows it to obtain an access token. direct grant A way for a client to obtain an access token on behalf of a user via a REST invocation. protocol mappers For each client you can tailor what claims and assertions are stored in the OIDC token or SAML assertion. You do this per client by creating and configuring protocol mappers. session When a user logs in, a session is created to manage the login session. A session contains information like when the user logged in and what applications have participated within single-sign on during that session. Both admins and users can view session information. user federation provider Red Hat Single Sign-On can store and manage users. Often, companies already have LDAP or Active Directory services that store user and credential information. You can point Red Hat Single Sign-On to validate credentials from those external stores and pull in identity information. identity provider An identity provider (IDP) is a service that can authenticate a user. Red Hat Single Sign-On is an IDP. identity provider federation Red Hat Single Sign-On can be configured to delegate authentication to one or more IDPs. Social login via Facebook or Google+ is an example of identity provider federation. You can also hook Red Hat Single Sign-On to delegate authentication to any other OpenID Connect or SAML 2.0 IDP. identity provider mappers When doing IDP federation you can map incoming tokens and assertions to user and session attributes. This helps you propagate identity information from the external IDP to your client requesting authentication. required actions Required actions are actions a user must perform during the authentication process. A user will not be able to complete the authentication process until these actions are complete. For example, an admin may schedule users to reset their passwords every month. An update password required action would be set for all these users. authentication flows Authentication flows are work flows a user must perform when interacting with certain aspects of the system. A login flow can define what credential types are required. A registration flow defines what profile information a user must enter and whether something like reCAPTCHA must be used to filter out bots. Credential reset flow defines what actions a user must do before they can reset their password. events Events are audit streams that admins can view and hook into. themes Every screen provided by Red Hat Single Sign-On is backed by a theme. Themes define HTML templates and stylesheets which you can override as needed. | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_administration_guide/overview |
Generating sos reports for technical support | Generating sos reports for technical support Red Hat Enterprise Linux 8 Gathering troubleshooting information from RHEL servers with the sos utility Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/generating_sos_reports_for_technical_support/index |
Chapter 10. Volume cloning | Chapter 10. Volume cloning A clone is a duplicate of an existing storage volume that is used as any standard volume. You create a clone of a volume to make a point in time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD). 10.1. Creating a clone Prerequisites Source PVC must be in Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Procedure Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) Clone PVC . Click on the PVC that you want to clone and click Actions Clone PVC . Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Click Clone . You are redirected to the new PVC details page. Wait for the cloned PVC status to become Bound . The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_and_allocating_storage_resources/volume-cloning_rhodf |
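The procedure above clones a PVC through the web console; the same operation can also be expressed as a PVC manifest with a dataSource reference. A minimal sketch, assuming a source PVC named rbd-pvc in the same namespace and a typical OpenShift Data Foundation RBD storage class (both names are assumptions); the requested size must equal the source size because a clone cannot change size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-clone
spec:
  storageClassName: ocs-storagecluster-ceph-rbd   # assumed RBD storage class name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi           # must match the source PVC size
  dataSource:
    kind: PersistentVolumeClaim
    name: rbd-pvc             # assumed name of the source PVC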
5.2. System Ports | 5.2. System Ports IdM uses a number of ports to communicate with its services. IdM clients require the same ports as IdM servers, except for port 7389. You do not have to keep port 7389 open and available for clients in most usual deployments. For the list of ports required by IdM and for information on how to make sure they are available, see Section 2.4.5, "System Ports" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/prereq-ports-clients |
Managing access and permissions | Managing access and permissions Red Hat Quay 3 Managing access and permissions Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/managing_access_and_permissions/index |
10.2. Encryption Ciphers | 10.2. Encryption Ciphers The encryption cipher is configurable on a per-attribute basis and must be selected by the administrator at the time encryption is enabled for an attribute. The following ciphers are supported: Advanced Encryption Standard (AES) Triple Data Encryption Standard (3DES) Note For strong encryption, Red Hat recommends using only AES ciphers. All ciphers are used in Cipher Block Chaining mode. Once the encryption cipher is set, it should not be changed without exporting and reimporting the data. | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/database_encryption-encryption_ciphers |
Chapter 1. Monitoring overview | Chapter 1. Monitoring overview 1.1. About OpenShift Container Platform monitoring OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. You also have the option to enable monitoring for user-defined projects . A cluster administrator can configure the monitoring stack with the supported configurations. OpenShift Container Platform delivers monitoring best practices out of the box. A set of alerts are included by default that immediately notify cluster administrators about issues with a cluster. Default dashboards in the OpenShift Container Platform web console include visual representations of cluster metrics to help you to quickly understand the state of your cluster. With the OpenShift Container Platform web console, you can view and manage metrics , alerts , and review monitoring dashboards . OpenShift Container Platform also provides access to third-party interfaces , such as Prometheus, Alertmanager, and Grafana. After installing OpenShift Container Platform 4.7, cluster administrators can optionally enable monitoring for user-defined projects. By using this feature, cluster administrators, developers, and other users can specify how services and pods are monitored in their own projects. You can also expose custom application metrics for horizontal pod autoscaling. As a cluster administrator, you can find answers to common problems such as user metrics unavailability and Prometheus consuming a lot of disk space in troubleshooting monitoring issues . 1.2. Understanding the monitoring stack The OpenShift Container Platform monitoring stack is based on the Prometheus open source project and its wider ecosystem. The monitoring stack includes the following: Default platform monitoring components . A set of platform monitoring components are installed in the openshift-monitoring project by default during an OpenShift Container Platform installation. This provides monitoring for core OpenShift Container Platform components including Kubernetes services. The default monitoring stack also enables remote health monitoring for clusters. These components are illustrated in the Installed by default section in the following diagram. Components for monitoring user-defined projects . After optionally enabling monitoring for user-defined projects, additional monitoring components are installed in the openshift-user-workload-monitoring project. This provides monitoring for user-defined projects. These components are illustrated in the User section in the following diagram. 1.2.1. Default monitoring components By default, the OpenShift Container Platform 4.7 monitoring stack includes these components: Table 1.1. Default monitoring stack components Component Description Cluster Monitoring Operator The Cluster Monitoring Operator (CMO) is a central component of the monitoring stack. It deploys and manages Prometheus instances, the Thanos Querier, the Telemeter Client, and metrics targets and ensures that they are up to date. The CMO is deployed by the Cluster Version Operator (CVO). Prometheus Operator The Prometheus Operator (PO) in the openshift-monitoring project creates, configures, and manages platform Prometheus instances and Alertmanager instances. It also automatically generates monitoring target configurations based on Kubernetes label queries. Prometheus Prometheus is the monitoring system on which the OpenShift Container Platform monitoring stack is based. 
Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing. Prometheus Adapter The Prometheus Adapter (PA in the preceding diagram) translates Kubernetes node and pod queries for use in Prometheus. The resource metrics that are translated include CPU and memory utilization metrics. The Prometheus Adapter exposes the cluster resource metrics API for horizontal pod autoscaling. The Prometheus Adapter is also used by the oc adm top nodes and oc adm top pods commands. Alertmanager The Alertmanager service handles alerts received from Prometheus. Alertmanager is also responsible for sending the alerts to external notification systems. kube-state-metrics agent The kube-state-metrics exporter agent (KSM in the preceding diagram) converts Kubernetes objects to metrics that Prometheus can use. openshift-state-metrics agent The openshift-state-metrics exporter (OSM in the preceding diagram) expands upon kube-state-metrics by adding metrics for OpenShift Container Platform-specific resources. node-exporter agent The node-exporter agent (NE in the preceding diagram) collects metrics about every node in a cluster. The node-exporter agent is deployed on every node. Thanos Querier The Thanos Querier aggregates and optionally deduplicates core OpenShift Container Platform metrics and metrics for user-defined projects under a single, multi-tenant interface. Grafana The Grafana analytics platform provides dashboards for analyzing and visualizing the metrics. The Grafana instance that is provided with the monitoring stack, along with its dashboards, is read-only. Telemeter Client The Telemeter Client sends a subsection of the data from platform Prometheus instances to Red Hat to facilitate Remote Health Monitoring for clusters. All of the components in the monitoring stack are monitored by the stack and are automatically updated when OpenShift Container Platform is updated. 1.2.2. Default monitoring targets In addition to the components of the stack itself, the default monitoring stack monitors: CoreDNS Elasticsearch (if Logging is installed) etcd Fluentd (if Logging is installed) HAProxy Image registry Kubelets Kubernetes apiserver Kubernetes controller manager Kubernetes scheduler Metering (if Metering is installed) OpenShift apiserver OpenShift controller manager Operator Lifecycle Manager (OLM) Note Each OpenShift Container Platform component is responsible for its monitoring configuration. For problems with the monitoring of an OpenShift Container Platform component, open a Jira issue against that component, not against the general monitoring component. Other OpenShift Container Platform framework components might be exposing metrics as well. For details, see their respective documentation. 1.2.3. Components for monitoring user-defined projects OpenShift Container Platform 4.7 includes an optional enhancement to the monitoring stack that enables you to monitor services and pods in user-defined projects. This feature includes the following components: Table 1.2. Components for monitoring user-defined projects Component Description Prometheus Operator The Prometheus Operator (PO) in the openshift-user-workload-monitoring project creates, configures, and manages Prometheus and Thanos Ruler instances in the same project. Prometheus Prometheus is the monitoring system through which monitoring is provided for user-defined projects. Prometheus sends alerts to Alertmanager for processing. 
Thanos Ruler The Thanos Ruler is a rule evaluation engine for Prometheus that is deployed as a separate process. In OpenShift Container Platform 4.7, Thanos Ruler provides rule and alerting evaluation for the monitoring of user-defined projects. Note The components in the preceding table are deployed after monitoring is enabled for user-defined projects. All of the components in the monitoring stack are monitored by the stack and are automatically updated when OpenShift Container Platform is updated. 1.2.4. Monitoring targets for user-defined projects When monitoring is enabled for user-defined projects, you can monitor: Metrics provided through service endpoints in user-defined projects. Pods running in user-defined projects. 1.3. Additional resources About remote health monitoring Granting users permission to monitor user-defined projects 1.4. Next steps Configuring the monitoring stack | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/monitoring/monitoring-overview |
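As a sketch of how the optional monitoring for user-defined projects described above is typically enabled and consumed: a cluster administrator sets enableUserWorkload in the cluster-monitoring-config ConfigMap, and a developer then creates a ServiceMonitor in their own project. The project, service label, and port names below are assumptions for illustration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-monitor
  namespace: ns1                    # a user-defined project
spec:
  endpoints:
  - interval: 30s
    port: web                       # named port on the application's Service
    scheme: http
  selector:
    matchLabels:
      app: prometheus-example-app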
Installing on IBM Cloud Bare Metal (Classic) | Installing on IBM Cloud Bare Metal (Classic) OpenShift Container Platform 4.12 Installing OpenShift Container Platform on IBM Cloud Bare Metal (Classic) Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_ibm_cloud_bare_metal_classic/index |
Chapter 13. Appendix: System configuration | Chapter 13. Appendix: System configuration 13.1. Transient runtime reconfiguration You can perform a dynamic reconfiguration in the base image configuration. For example, you can run the firewall-cmd --permanent command to achieve persistent changes across a reboot. Warning The /etc directory is persistent by default. If you perform changes made by using tools, for example firewall-cmd --permanent , the contents of the /etc on the system can differ from the one described in the container image. In the default configuration, first make the changes in the base image, then queue the changes without restarting running systems, and then simultaneously write to apply the changes to existing systems only in memory. You can configure the /etc directory to be transient by using bind mounts. In this case, the etc directory is a part of the machine's local root filesystem. For example, if you inject static IP addresses by using Anaconda kickstarts, they persist across upgrades. A 3-way merge is applied across upgrades and each "deployment" has its own copy of /etc . The /run directory The /run directory is an API filesystem that is defined to be deleted when the system is restarted. Use the /run directory for transient files. Dynamic reconfiguration models In the Pull model, you can include code directly embedded in your base image or a privileged container that contacts the remote network server for configuration, and subsequently launch additional container images, by using the Podman API. In the Push model, some workloads are implemented by tooling such as Ansible. systemd You can use systemd units for dynamic transient reconfiguration by writing to /run/systemd directory. For example, the systemctl edit --runtime myservice.service dynamically changes the configuration of the myservice.service unit, without persisting the changes. NetworkManager Use a /run/NetworkManager/conf.d directory for applying temporary network configuration. Use the nmcli connection modify --temporary command to write changes only in memory. Without the --temporary option, the command writes persistent changes. podman Use the podman run --rm command to automatically remove the container when it exits. Without the --rm option, the podman run command creates a container that persists across system reboots. 13.2. Using dnf The rhel9/rhel-bootc container image includes dnf . There are several use cases: Using dnf as a part of a container build You can use the RUN dnf install directive in the Containerfile. Using dnf at runtime Warning The functionality depends on the dnf version. You might get an error: error: can't create transaction lock on /usr/share/rpm/.rpm.lock (Read-only file system) . You can use the bootc-usr-overlay command to create a writable overlay filesystem for /usr directory. The dnf install writes to this overlay. You can use this feature for installing debugging tools. Note that changes will be lost on reboot. Configuring storage The supported storage technologies are the following: xfs / ext4 Logical volume management (LVM) Linux Unified Key Setup (LUKS) You can add other storage packages to the host system. Storage with bootc-image-builder You can use the bootc-image-builder tool to create a disk image. The available configuration for partitioning and layout is relatively fixed. The default filesystem type is derived from the container image's bootc install configuration. 
Storage with bootc install You can use the bootc install to-disk command for flat storage configurations and the bootc install to-filesystem command for more advanced installations. For more information, see Advanced installation with to-filesystem. 13.3. Setting a hostname To set a custom hostname for your system, modify the /etc/hostname file. You can set the hostname by using Anaconda, or with a privileged container. After you boot a system, you can verify the hostname by using the hostnamectl command. 13.4. Proxied Internet Access If you are deploying to an environment where internet access requires a proxy, you must configure services so that they can access resources as intended. To do this, define a single file with the required environment variables in your configuration, and reference it from systemd drop-in unit files for all such services. Defining common proxy environment variables Each service that requires internet access must then reference this common file explicitly. Defining drop-in units for core services The bootc and podman tools commonly need proxy configuration. At the current time, bootc does not always run as a systemd unit. Defining proxy use for podman systemd units Using the Podman systemd configuration, similarly add EnvironmentFile=/etc/example-proxy.env. You can set the configuration for proxy and environment settings of podman and containers in the /etc/containers/containers.conf configuration file as a root user, or in the USDHOME/.config/containers/containers.conf configuration file as a non-root user. | [
"/etc/example-proxy.env https_proxy=\"http://example.com:8080\" all_proxy=\"http://example.com:8080\" http_proxy=\"http://example.com:8080\" HTTP_PROXY=\"http://example.com:8080\" HTTPS_PROXY=\"http://example.com:8080\" no_proxy=\"*.example.com,127.0.0.1,0.0.0.0,localhost\"",
"/usr/lib/systemd/system/bootc-fetch-apply-updates.service.d/99-proxy.conf [Service] EnvironmentFile=/etc/example-proxy.env"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_image_mode_for_rhel_to_build_deploy_and_manage_operating_systems/system-configuration |
Chapter 10. Configuring an OVS-DPDK deployment | Chapter 10. Configuring an OVS-DPDK deployment This section describes how to deploy, use, and troubleshoot Open vSwitch Data Plane Development Kit (OVS-DPDK) for a Red Hat OpenStack Platform (RHOSP) environment. RHOSP operates in OVS client mode for OVS-DPDK deployments. The following figure shows an OVS-DPDK topology with two bonded ports for the control plane and data plane: Figure 10.1. Sample OVS-DPDK topology Important This section includes examples that you must modify for your topology and use case. For more information, see Hardware requirements for NFV . Prerequisites A RHOSP undercloud. You must install and configure the undercloud before you can deploy the overcloud. For more information, see Installing and managing Red Hat OpenStack Platform with director . Note RHOSP director modifies OVS-DPDK configuration files through the key-value pairs that you specify in templates and custom environment files. You must not modify the OVS-DPDK files directly. Access to the undercloud host and credentials for the stack user. Procedure Use Red Hat OpenStack Platform (RHOSP) director to install and configure OVS-DPDK in a RHOSP environment. The high-level steps are: Review the known limitations for OVS-DPDK . Generate roles and image files . Create an environment file for your OVS-DPDK customizations . Configure a firewall for security groups . Create a bare metal nodes definition file . Create a NIC configuration template . Set the MTU value for OVS-DPDK interfaces . Set multiqueue for OVS-DPDK interfaces . Configure DPDK parameters for node provisioning . Provision overcloud networks and VIPs. For more information, see: Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide. Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. Provision bare metal nodes. Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide Deploy an OVS-DPDK overcloud . Additional resources Section 10.11, "Creating a flavor and deploying an instance for OVS-DPDK" Section 10.12, "Troubleshooting the OVS-DPDK configuration" 10.1. Known limitations for OVS-DPDK Observe the following limitations when configuring Red Hat OpenStack Platform in a Open vSwitch Data Plane Development Kit (OVS-DPDK) environment: Use Linux bonds for non-DPDK traffic, and control plane networks, such as Internal, Management, Storage, Storage Management, and Tenant. Ensure that both the PCI devices used in the bond are on the same NUMA node for optimum performance. Neutron Linux bridge configuration is not supported by Red Hat. You require huge pages for every instance running on the hosts with OVS-DPDK. If huge pages are not present in the guest, the interface appears but does not function. With OVS-DPDK, there is a performance degradation of services that use tap devices, such as Distributed Virtual Routing (DVR). The resulting performance is not suitable for a production environment. When using OVS-DPDK, all bridges on the same Compute node must be of type ovs_user_bridge . The director may accept the configuration, but Red Hat OpenStack Platform does not support mixing ovs_bridge and ovs_user_bridge on the same node. steps Proceed to Section 10.2, "Generating roles and image files" . 10.2. 
Generating roles and image files Red Hat OpenStack Platform (RHOSP) director uses roles to assign services to nodes. When deploying RHOSP in an OVS-DPDK environment, ComputeOvsDpdk is a custom role provided with your RHOSP installation that includes the ComputeNeutronOvsDpdk service, in addition to the default compute services. The undercloud installation requires an environment file to determine where to obtain container images and how to store them. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate a new roles data file, for example, roles_data_compute_ovsdpdk.yaml , that includes the Controller and ComputeOvsDpdk roles: Note If you are using multiple technologies in your RHOSP environment, OVS-DPDK, SR-IOV, and OVS hardware offload, you generate just one roles data file to include all the roles: Optional: You can configure OVS-DPDK to enter sleep mode when no packets are being forwarded, by using the TuneD profile, cpu-partitioning-powersave . To configure cpu-partitioning-powersave , replace the default TuneD profile with the power saving TuneD profile, cpu-partitioning-powersave , in your generated roles data file: Example In this generated roles data file, /home/stack/templates/roles_data_compute_ovsdpdk.yaml , the default value of TunedProfileName is replaced with cpu-partitioning-powersave : To generate an images file, you run the openstack tripleo container image prepare command. The following inputs are needed: The roles data file that you generated in an earlier step, for example, roles_data_compute_ovsdpdk.yaml . The DPDK environment file appropriate for your Networking service mechanism driver: neutron-ovn-dpdk.yaml file for ML2/OVN environments. neutron-ovs-dpdk.yaml file for ML2/OVS environments. Example In this example, the overcloud_images.yaml file is being generated for an ML2/OVN environment: Note the path and file name of the roles data file and the images file that you have created. You use these files later when you deploy your overcloud. steps Proceed to Section 10.3, "Creating an environment file for your OVS-DPDK customizations" . Additional resources Saving power in OVS-DPDK deployments Composable services and custom roles in Installing and managing Red Hat OpenStack Platform with director Preparing container images in Installing and managing Red Hat OpenStack Platform with director 10.3. Creating an environment file for your OVS-DPDK customizations You can use particular Red Hat OpenStack Platform configuration values in a custom environment YAML file to configure your OVS-DPDK deployment. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Create a custom environment YAML file, for example, ovs-dpdk-overrides.yaml . In the custom environment file, ensure that AggregateInstanceExtraSpecsFilter is in the list of filters for the NovaSchedulerEnabledFilters parameter that the Compute service (nova) uses to filter a node: parameter_defaults: NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - AggregateInstanceExtraSpecsFilter Add role-specific parameters for the OVS-DPDK Compute nodes to the custom environment file. 
Example parameter_defaults: ComputeOvsDpdkParameters: NeutronBridgeMappings: "dpdk:br-dpdk" KernelArgs: "default_hugepagesz=1GB hugepagesz=1GB hugepages=64 iommu=pt intel_iommu=on isolcpus=2,4,6,8,10,12,14,16,18,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,23,25,27,29,31,33,35,37,39" TunedProfileName: "cpu-partitioning" IsolCpusList: "2,4,6,8,10,12,14,16,18,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,23,25,27,29,31,33,35,37,39" NovaReservedHostMemory: 4096 OvsDpdkSocketMemory: "4096,4096" OvsDpdkMemoryChannels: "4" OvsDpdkCoreList: "0,20,1,21" NovaComputeCpuDedicatedSet: "4,6,8,10,12,14,16,18,24,26,28,30,32,34,36,38,5,7,9,11,13,15,17,19,27,29,31,33,35,37,39" NovaComputeCpuSharedSet: "0,20,1,21" OvsPmdCoreList: "2,22,3,23" If you need to override any of the configuration defaults in those files, add your overrides to the custom environment file that you created in step 3. RHOSP director uses the following files to configure OVS-DPDK: ML2/OVN deployments /usr/share/openstack-tripleo-heat-templates/environment/services/neutron-ovn-dpdk.yaml ML2/OVS deployments /usr/share/openstack-tripleo-heat-templates/environment/services/neutron-ovs-dpdk.yaml Note the path and file name of the custom environment file that you have created. You use this file later when you deploy your overcloud. steps Proceed to Section 10.4, "Configuring a firewall for security groups" . 10.4. Configuring a firewall for security groups Data plane interfaces require high performance in a stateful firewall. To protect these interfaces, consider deploying a telco-grade firewall as a virtual network function (VNF) in your Red Hat OpenStack Platform (RHOSP) OVS-DPDK environment. To configure control plane interfaces in an ML2/OVS deployment, set the NeutronOVSFirewallDriver parameter to openvswitch in your custom environment file under parameter_defaults . In an OVN deployment, you can implement security groups with Access Control Lists (ACL). You cannot use the OVS firewall driver with hardware offload because the connection tracking properties of the flows are unsupported in the offload path. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Open the custom environment YAML file that you created in Section 10.3, "Creating an environment file for your OVS-DPDK customizations" , or create a new one. Under parameter_defaults , add the following key-value pair: parameter_defaults: ... NeutronOVSFirewallDriver: openvswitch If you created a new custom environment file, note its path and file name. You use this file later when you deploy your overcloud. After you deploy the overcloud, run the openstack port set command to disable the OVS firewall driver for data plane interfaces: steps Proceed to Section 10.5, "Creating a bare metal nodes definition file" . Additional resources Composable services and custom roles in Installing and managing Red Hat OpenStack Platform with director Tested NICs for NFV 10.5. Creating a bare metal nodes definition file Using Red Hat OpenStack Platform (RHOSP) director you provision your bare metal nodes for your OVS-DPDK environment by using a definition file. In the bare metal nodes definition file, define the quantity and attributes of the bare metal nodes that you want to deploy and assign overcloud roles to these nodes. Also define the network layout of the nodes. Prerequisites Access to the undercloud host and credentials for the stack user. 
Procedure Log in to the undercloud as the stack user. Source the stackrc file: Create a bare metal nodes definition file, such as overcloud-baremetal-deploy.yaml , as instructed in Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. In the overcloud-baremetal-deploy.yaml file add a declaration to the Ansible playbook, cli-overcloud-node-kernelargs.yaml . The playbook contains kernel arguments to use when you are provisioning bare metal nodes. - name: ComputeOvsDpdk ... ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml ... If you want to set any extra Ansible variables when running the playbook, use the extra_vars property to set them. For more information, see Bare-metal node provisioning attributes in the Installing and managing Red Hat OpenStack Platform with director guide. Note The variables that you add to extra_vars should be the same role-specific parameters for the OVS-DPDK Compute nodes that you added to the custom environment file earlier in Create an environment file for your OVS-DPDK customizations . Example - name: ComputeOvsDpdk ... ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml extra_vars: kernel_args: 'default_hugepagesz=1GB hugepagesz=1GB hugepages=64 iommu=pt intel_iommu=on isolcpus=2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39' tuned_isolated_cores: '2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39' tuned_profile: 'cpu-partitioning' reboot_wait_timeout: 1800 - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-openvswitch-dpdk.yaml extra_vars: pmd: '2,22,3,23' memory_channels: '4' socket_mem: '4096,4096' pmd_auto_lb: true pmd_load_threshold: "70" pmd_improvement_threshold: "25" pmd_rebal_interval: "2" If you are using NIC partitioning on NVIDIA Mellanox cards, to avoid VF connectivity issues, set the Ansible variable, dpdk_extra: '-a 0000:00:00.0' , which causes the allow list of PCI addresses to allow no addresses: Example Optional: You can configure OVS-DPDK to enter sleep mode when no packets are being forwarded, by using the TuneD profile, cpu-partitioning-powersave . To configure cpu-partitioning-powersave , add the following lines to your bare metal nodes definition file: ... tuned_profile: "cpu-partitioning-powersave" ... - playbook: /home/stack/ospd-17.1-geneve-ovn-dpdk-sriov-ctlplane-dataplane-bonding-hybrid/playbooks/cli-overcloud-tuned-maxpower-conf.yaml - playbook: /home/stack/ospd-17.1-geneve-ovn-dpdk-sriov-ctlplane-dataplane-bonding-hybrid/playbooks/overcloud-nm-config.yaml extra_vars: reboot_wait_timeout: 900 ... pmd_sleep_max: "50" ... Example Note the path and file name of the bare metal nodes definition file that you have created. You use this file later when you configure your NICs and as the input file for the overcloud node provision command when you provision your nodes. steps Proceed to Section 10.6, "Creating a NIC configuration template" . Additional resources Saving power in OVS-DPDK deployments Composable services and custom roles in Installing and managing Red Hat OpenStack Platform with director Tested NICs for NFV 10.6. Creating a NIC configuration template Define your NIC configuration templates by modifying copies of the sample Jinja2 templates that ship with Red Hat OpenStack Platform (RHOSP). 
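As a quick illustration before the procedure, copying one of the shipped sample templates into your own templates directory usually looks like the following sketch. The exact subpath under the samples directory can vary between releases, so treat the source path as an assumption and adjust it to what you find on your undercloud:

# copy a sample Jinja2 NIC template so you can edit a local copy
cp /usr/share/ansible/roles/tripleo_network_config/templates/single_nic_vlans.j2 \
  /home/stack/templates/single_nic_vlans.j2

You then reference the copied file from the network_config section of your bare metal nodes definition file, as shown later in this procedure.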
Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Copy a sample network configuration template. Copy a NIC configuration Jinja2 template from the examples in the /usr/share/ansible/roles/tripleo_network_config/templates/ directory. Choose the one that most closely matches your NIC requirements. Modify it as needed. In your NIC configuration template, for example, single_nic_vlans.j2 , add your DPDK interfaces. Note In the sample NIC configuration template, single_nic_vlans.j2 , the nodes only use one single network interface as a trunk with VLANs. The native VLAN, the untagged traffic, is the control plane, and each VLAN corresponds to one of the RHOSP networks: internal API, storage, and so on. Example ... - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 1 ovs_extra: - set Interface dpdk0 options:n_rxq_desc=4096 - set Interface dpdk0 options:n_txq_desc=4096 - set Interface dpdk1 options:n_rxq_desc=4096 - set Interface dpdk1 options:n_txq_desc=4096 members: - type: ovs_dpdk_port name: dpdk0 driver: vfio-pci members: - type: interface name: nic5 - type: ovs_dpdk_port name: dpdk1 driver: vfio-pci members: - type: interface name: nic6 ... Add the custom network configuration template, for example, single_nic_vlans.j2 , to the bare metal nodes definition file, for example, overcloud-baremetal-deploy.yaml that you created in Section 10.5, "Creating a bare metal nodes definition file" . Example Optional: You can configure OVS-DPDK to enter sleep mode when no packets are being forwarded, by using the TuneD profile, cpu-partitioning-powersave . To configure cpu-partitioning-powersave , ensure that you have set the queue size in your NIC configuration template. Example Note the path and file name of the NIC configuration template that you have created. You use this file later when you deploy your overcloud. steps Proceed to Section 10.7, "Setting the MTU value for OVS-DPDK interfaces" . Additional resources Saving power in OVS-DPDK deployments 10.7. Setting the MTU value for OVS-DPDK interfaces Red Hat OpenStack Platform (RHOSP) supports jumbo frames for OVS-DPDK. To set the maximum transmission unit (MTU) value for jumbo frames you must: Set the global MTU value for networking in a custom environment file. Set the physical DPDK port MTU value in your NIC configuration template. This value is also used by the vhost user interface. Set the MTU value within any guest instances on the Compute node to ensure that you have a comparable MTU value from end to end in your configuration. You do not need any special configuration for the physical NIC because the NIC is controlled by the DPDK PMD, and has the same MTU value set by your NIC configuration template. You cannot set an MTU value larger than the maximum value supported by the physical NIC. Note VXLAN packets include an extra 50 bytes in the header. Calculate your MTU requirements based on these additional header bytes. For example, an MTU value of 9000 means the VXLAN tunnel MTU value is 8950 to account for these extra bytes. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Open the custom environment YAML file that you created in Section 10.3, "Creating an environment file for your OVS-DPDK customizations" , or create a new one. Under parameter_defaults set the NeutronGlobalPhysnetMtu parameter. 
Example In this example, NeutronGlobalPhysnetMtu is set to 9000 : Note Ensure that the OvsDpdkSocketMemory value in the network-environment.yaml file is large enough to support jumbo frames. For more information, see Memory parameters . Open your NIC configuration template, for example, single_nic_vlans.j2 , that you created in Section 10.6, "Creating a NIC configuration template" . Set the MTU value on the bridge to the Compute node. Set the MTU values for an OVS-DPDK bond: Note the paths and file names of your NIC configuration template and your custom environment file. You use these files later when you deploy your overcloud. steps Proceed to Section 10.8, "Setting multiqueue for OVS-DPDK interfaces" . 10.8. Setting multiqueue for OVS-DPDK interfaces You can configure your OVS-DPDK deployment to automatically load balance queues to non-isolated Poll Mode Drivers (PMD)s, based on load, and queue usage. Open vSwitch can trigger automatic queue rebalancing in the following scenarios: You enabled cycle-based assignment of RX queues by setting the value of pmd-auto-lb to true . Two or more non-isolated PMDs are present. More than one queue polls for at least one non-isolated PMD. The load value for aggregated PMDs exceeds 95% for a duration of one minute. Important Multiqueue is experimental, and only supported with manual queue pinning. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Open the NIC configuration template, such as single_nic_vlans.j2 , that you created in Section 10.6, "Creating a NIC configuration template" . Set the number of queues for interfaces in OVS-DPDK on the Compute node: Note the path and file name of the NIC configuration template. You use this file later when you deploy your overcloud. steps Proceed to Section 10.9, "Configuring DPDK parameters for node provisioning" . 10.9. Configuring DPDK parameters for node provisioning You can configure your Red Hat OpenStack Platform (RHOSP) OVS-DPDK environment to automatically load balance the Open vSwitch (OVS) Poll Mode Driver (PMD) threads. You do this by editing parameters that RHOSP director uses during bare metal node provisioning and during overcloud deployment. The OVS PMD threads perform the following tasks for user space context switching: Continuous polling of input ports for packets. Classifying received packets. Executing actions on the packets after classification. Prerequisites Access to the undercloud host and credentials for the stack user. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Set parameters in the bare metal nodes definition file that you created in Section 10.5, "Creating a bare metal nodes definition file" , for example overcloud-baremetal-deploy .yaml: pmd_auto_lb Set to true to enable PMD automatic load balancing. pmd_load_threshold Percentage of processing cycles that one of the PMD threads must use consistently before triggering the PMD load balance. Integer, range 0-100. pmd_improvement_threshold Minimum percentage of evaluated improvement across the non-isolated PMD threads that triggers a PMD auto load balance. Integer, range 0-100. To calculate the estimated improvement, a dry run of the reassignment is done and the estimated load variance is compared with the current variance. The default is 25%. pmd_rebal_interval Minimum time in minutes between two consecutive PMD Auto Load Balance operations. Range 0-20,000 minutes. 
Configure this value to prevent triggering frequent reassignments where traffic patterns are changeable. For example, you might trigger a reassignment once every 10 minutes or once every few hours. Example ansible_playbooks: ... - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-openvswitch-dpdk.yaml extra_vars: ... pmd_auto_lb: true pmd_load_threshold: "70" pmd_improvement_threshold: "25" pmd_rebal_interval: "2" Open the custom environment YAML file that you created in Section 10.3, "Creating an environment file for your OVS-DPDK customizations" , or create a new one. In the custom environment file, add the same bare metal node pre-provisioning values that you set in step 3. Use these equivalent parameters: OvsPmdAutoLb Heat equivalent of pmd_auto_lb . Set to true to enable PMD automatic load balancing. OvsPmdLoadThreshold Heat equivalent of pmd_load_threshold . Percentage of processing cycles that one of the PMD threads must use consistently before triggering the PMD load balance. Integer, range 0-100. OvsPmdImprovementThreshold Heat equivalent of pmd_improvement_threshold . Minimum percentage of evaluated improvement across the non-isolated PMD threads that triggers a PMD auto load balance. Integer, range 0-100. To calculate the estimated improvement, a dry run of the reassignment is done and the estimated load variance is compared with the current variance. The default is 25%. OvsPmdRebalInterval Heat equivalent of pmd_rebal_interval . Minimum time in minutes between two consecutive PMD Auto Load Balance operations. Range 0-20,000 minutes. Configure this value to prevent triggering frequent reassignments where traffic patterns are changeable. For example, you might trigger a reassignment once every 10 minutes or once every few hours. Example parameter_merge_strategies: ComputeOvsDpdkSriovParameters:merge ... parameter_defaults: ComputeOvsDpdkSriovParameters: ... OvsPmdAutoLb: true OvsPmdLoadThreshold: 70 OvsPmdImprovementThreshold: 25 OvsPmdRebalInterval: 2 Note the paths and file names of your NIC configuration template and your custom environment file. You use these files later when you provision your bare metal nodes and deploy your overcloud. steps Provision your networks and VIPs. Provision your bare metal nodes. Ensure that you use your bare metal nodes definition file, such as overcloud-baremetal-deploy.yaml , as the input for running the provision command. Proceed to Section 10.10, "Deploying an OVS-DPDK overcloud" . Additional resources Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide. Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. 10.10. Deploying an OVS-DPDK overcloud The last step in deploying your Red Hat OpenStack Platform (RHOSP) overcloud in an OVS-DPDK environment is to run the openstack overcloud deploy command. Inputs to the command include all of the various overcloud templates and environment files that you constructed. Prerequisites Access to the undercloud host and credentials for the stack user. You have performed all of the steps listed in the earlier procedures in this section and have assembled all of the various heat templates and environment files to use as inputs for the overcloud deploy command. 
Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Enter the openstack overcloud deploy command. It is important to list the inputs to the openstack overcloud deploy command in a particular order. The general rule is to specify the default heat template files first followed by your custom environment files and custom templates that contain custom configurations, such as overrides to the default properties. Add your inputs to the openstack overcloud deploy command in the following order: A custom network definition file that contains the specifications for your SR-IOV network on the overcloud, for example, network-data.yaml . For more information, see Network definition file configuration options in the Installing and managing Red Hat OpenStack Platform with director guide. A roles file that contains the Controller and ComputeOvsDpdk roles that RHOSP director uses to deploy your SR-IOV environment. Example: roles_data_compute_ovsdpdk.yaml For more information, see Section 10.2, "Generating roles and image files" . The output file from provisioning your overcloud networks. Example: overcloud-networks-deployed.yaml For more information, see Configuring and provisioning overcloud network definitions in the Installing and managing Red Hat OpenStack Platform with director guide. The output file from provisioning your overcloud VIPs. Example: overcloud-vip-deployed.yaml For more information, see Configuring and provisioning network VIPs for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. The output file from provisioning bare-metal nodes. Example: overcloud-baremetal-deployed.yaml For more information, see: Section 10.9, "Configuring DPDK parameters for node provisioning" . Provisioning bare metal nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. An images file that director uses to determine where to obtain container images and how to store them. Example: overcloud_images.yaml For more information, see Section 10.2, "Generating roles and image files" . An environment file for the Networking service (neutron) mechanism driver and router scheme that your environment uses: ML2/OVN Distributed virtual routing (DVR): neutron-ovn-dvr-ha.yaml Centralized virtual routing: neutron-ovn-ha.yaml ML2/OVS Distributed virtual routing (DVR): neutron-ovs-dvr.yaml Centralized virtual routing: neutron-ovs.yaml An environment file for OVS-DPDK, depending on your mechanism driver: ML2/OVN neutron-ovn-dpdk.yaml ML2/OVS neutron-ovs-dpdk.yaml Note If you also have an SR-IOV environment, and want to locate SR-IOV and OVS-DPDK instances on the same node, include the following environment files in your deployment script: ML2/OVN neutron-ovn-sriov.yaml ML2/OVS neutron-sriov.yaml One or more custom environment files that contain your configuration for: overrides of default configuration values for the OVS-DPDK environment. firewall as a virtual network function (VNF). maximum transmission unit (MTU) value for jumbo frames. Example: ovs-dpdk-overrides.yaml For more information, see: Section 10.3, "Creating an environment file for your OVS-DPDK customizations" . Section 10.4, "Configuring a firewall for security groups" . Section 10.7, "Setting the MTU value for OVS-DPDK interfaces" . 
Example This excerpt from a sample openstack overcloud deploy command demonstrates the proper ordering of the command's inputs for an OVS-DPDK, ML2/OVN environment that uses DVR: Run the openstack overcloud deploy command. When the overcloud creation is finished, the RHOSP director provides details to help you access your overcloud. Verification Perform the steps in Validating your overcloud deployment in the Installing and managing Red Hat OpenStack Platform with director guide. steps If you have configured a firewall, run the openstack port set command to disable the OVS firewall driver for data plane interfaces: Additional resources Creating your overcloud in the Installing and managing Red Hat OpenStack Platform with director guide overcloud deploy in the Command line interface reference 10.11. Creating a flavor and deploying an instance for OVS-DPDK After you configure OVS-DPDK for your Red Hat OpenStack Platform deployment with NFV, you can create a flavor, and deploy an instance using the following steps: Create an aggregate group, and add relevant hosts for OVS-DPDK. Define metadata, for example dpdk=true , that matches defined flavor metadata. Note Pinned CPU instances can be located on the same Compute node as unpinned instances. For more information, see Configuring CPU pinning on Compute nodes in Configuring the Compute service for instance creation . Create a flavor. Set flavor properties. Note that the defined metadata, dpdk=true , matches the defined metadata in the DPDK aggregate. For details about the emulator threads policy for performance improvements, see Configuring emulator threads in Configuring the Compute service for instance creation . Create the network. Optional: If you use multiqueue with OVS-DPDK, set the hw_vif_multiqueue_enabled property on the image that you want to use to create a instance: Deploy an instance. 10.12. Troubleshooting the OVS-DPDK configuration This section describes the steps to troubleshoot the OVS-DPDK configuration. Review the bridge configuration, and confirm that the bridge has datapath_type=netdev . Optionally, you can view logs for errors, such as if the container fails to start. Confirm that the Poll Mode Driver CPU mask of the ovs-dpdk is pinned to the CPUs. In case of hyper threading, use sibling CPUs. For example, to check the sibling of CPU4 , run the following command: The sibling of CPU4 is CPU20 , therefore proceed with the following command: Display the status: | [
"source ~/stackrc",
"openstack overcloud roles generate -o /home/stack/templates/roles_data_compute_ovsdpdk.yaml Controller ComputeOvsDpdk",
"openstack overcloud roles generate -o /home/stack/templates/ roles_data.yaml Controller ComputeOvsDpdk ComputeOvsDpdkSriov Compute:ComputeOvsHwOffload",
"TunedProfileName: \"cpu-partitioning-powersave\"",
"sed -i 's/TunedProfileName:.*USD/TunedProfileName: \"cpu-partitioning-powersave\"/' /home/stack/templates/roles_data_compute_ovsdpdk.yaml",
"sudo openstack tripleo container image prepare --roles-file ~/templates/roles_data_compute_ovsdpdk.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dpdk.yaml -e ~/containers-prepare-parameter.yaml --output-env-file=/home/stack/templates/overcloud_images.yaml",
"source ~/stackrc",
"parameter_defaults: NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - AggregateInstanceExtraSpecsFilter",
"parameter_defaults: ComputeOvsDpdkParameters: NeutronBridgeMappings: \"dpdk:br-dpdk\" KernelArgs: \"default_hugepagesz=1GB hugepagesz=1GB hugepages=64 iommu=pt intel_iommu=on isolcpus=2,4,6,8,10,12,14,16,18,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,23,25,27,29,31,33,35,37,39\" TunedProfileName: \"cpu-partitioning\" IsolCpusList: \"2,4,6,8,10,12,14,16,18,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,23,25,27,29,31,33,35,37,39\" NovaReservedHostMemory: 4096 OvsDpdkSocketMemory: \"4096,4096\" OvsDpdkMemoryChannels: \"4\" OvsDpdkCoreList: \"0,20,1,21\" NovaComputeCpuDedicatedSet: \"4,6,8,10,12,14,16,18,24,26,28,30,32,34,36,38,5,7,9,11,13,15,17,19,27,29,31,33,35,37,39\" NovaComputeCpuSharedSet: \"0,20,1,21\" OvsPmdCoreList: \"2,22,3,23\"",
"source ~/stackrc",
"parameter_defaults: NeutronOVSFirewallDriver: openvswitch",
"openstack port set --no-security-group --disable-port-security USD{PORT}",
"source ~/stackrc",
"- name: ComputeOvsDpdk ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml",
"- name: ComputeOvsDpdk ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml extra_vars: kernel_args: 'default_hugepagesz=1GB hugepagesz=1GB hugepages=64 iommu=pt intel_iommu=on isolcpus=2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39' tuned_isolated_cores: '2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39' tuned_profile: 'cpu-partitioning' reboot_wait_timeout: 1800 - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-openvswitch-dpdk.yaml extra_vars: pmd: '2,22,3,23' memory_channels: '4' socket_mem: '4096,4096' pmd_auto_lb: true pmd_load_threshold: \"70\" pmd_improvement_threshold: \"25\" pmd_rebal_interval: \"2\"",
"- name: ComputeOvsDpdk ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml extra_vars: reboot_wait_timeout: 600 kernel_args: default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=1-11,13-23 tuned_profile: cpu-partitioning tuned_isolated_cores: 1-11,13-23 - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-openvswitch-dpdk.yaml extra_vars: memory_channels: 4 lcore: 0,12 pmd: 1,13,2,14,3,15 socket_mem: 4096 dpdk_extra: -a 0000:00:00.0 disable_emc: false enable_tso: false revalidator: ' handler: ' pmd_auto_lb: false pmd_load_threshold: ' pmd_improvement_threshold: ' pmd_rebal_interval: '' nova_postcopy: true",
"tuned_profile: \"cpu-partitioning-powersave\" - playbook: /home/stack/ospd-17.1-geneve-ovn-dpdk-sriov-ctlplane-dataplane-bonding-hybrid/playbooks/cli-overcloud-tuned-maxpower-conf.yaml - playbook: /home/stack/ospd-17.1-geneve-ovn-dpdk-sriov-ctlplane-dataplane-bonding-hybrid/playbooks/overcloud-nm-config.yaml extra_vars: reboot_wait_timeout: 900 pmd_sleep_max: \"50\"",
"- name: ComputeOvsDpdk ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml extra_vars: kernel_args: default_hugepagesz=1GB hugepagesz=1GB hugepages=64 iommu=pt intel_iommu=on isolcpus=2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39 tuned_isolated_cores: 2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39 tuned_profile: cpu-partitioning reboot_wait_timeout: 1800 - playbook: /home/stack/ospd-17.1-geneve-ovn-dpdk-sriov-ctlplane-dataplane-bonding-hybrid/playbooks/cli-overcloud-tuned-maxpower-conf.yaml - playbook: /home/stack/ospd-17.1-geneve-ovn-dpdk-sriov-ctlplane-dataplane-bonding-hybrid/playbooks/overcloud-nm-config.yaml extra_vars: reboot_wait_timeout: 900 - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-openvswitch-dpdk.yaml extra_vars: pmd: 2,22,3,23 memory_channels: 4 socket_mem: 4096,4096 pmd_auto_lb: true pmd_load_threshold: \"70\" pmd_improvement_threshold: \"25\" pmd_rebal_interval: \"2\" pmd_sleep_max: \"50\"",
"source ~/stackrc",
"- type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 1 ovs_extra: - set Interface dpdk0 options:n_rxq_desc=4096 - set Interface dpdk0 options:n_txq_desc=4096 - set Interface dpdk1 options:n_rxq_desc=4096 - set Interface dpdk1 options:n_txq_desc=4096 members: - type: ovs_dpdk_port name: dpdk0 driver: vfio-pci members: - type: interface name: nic5 - type: ovs_dpdk_port name: dpdk1 driver: vfio-pci members: - type: interface name: nic6",
"- name: ComputeOvsDpdk count: 2 hostname_format: compute-%index% defaults: networks: - network: internal_api subnet: internal_api_subnet - network: tenant subnet: tenant_subnet - network: storage subnet: storage_subnet network_config: template: /home/stack/templates/single_nic_vlans.j2",
"- type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 1 ovs_extra: - set Interface dpdk0 options:n_rxq_desc=4096 - set Interface dpdk0 options:n_txq_desc=4096 - set Interface dpdk1 options:n_rxq_desc=4096 - set Interface dpdk1 options:n_txq_desc=4096 members: - type: ovs_dpdk_port name: dpdk0 driver: vfio-pci members: - type: interface name: nic5 - type: ovs_dpdk_port name: dpdk1 driver: vfio-pci members: - type: interface name: nic6",
"source ~/stackrc",
"parameter_defaults: # MTU global configuration NeutronGlobalPhysnetMtu: 9000",
"- type: ovs_bridge name: br-link0 use_dhcp: false members: - type: interface name: nic3 mtu: 9000",
"- type: ovs_user_bridge name: br-link0 use_dhcp: false members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 2 members: - type: ovs_dpdk_port name: dpdk0 mtu: 9000 members: - type: interface name: nic4 - type: ovs_dpdk_port name: dpdk1 mtu: 9000 members: - type: interface name: nic5",
"source ~/stackrc",
"- type: ovs_user_bridge name: br-link0 use_dhcp: false members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 2 members: - type: ovs_dpdk_port name: dpdk0 mtu: 9000 members: - type: interface name: nic4 - type: ovs_dpdk_port name: dpdk1 mtu: 9000 members: - type: interface name: nic5",
"source ~/stackrc",
"ansible_playbooks: ... - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-openvswitch-dpdk.yaml extra_vars: ... pmd_auto_lb: true pmd_load_threshold: \"70\" pmd_improvement_threshold: \"25\" pmd_rebal_interval: \"2\"",
"parameter_merge_strategies: ComputeOvsDpdkSriovParameters:merge ... parameter_defaults: ComputeOvsDpdkSriovParameters: ... OvsPmdAutoLb: true OvsPmdLoadThreshold: 70 OvsPmdImprovementThreshold: 25 OvsPmdRebalInterval: 2",
"source ~/stackrc",
"openstack overcloud deploy --log-file overcloud_deployment.log --templates /usr/share/openstack-tripleo-heat-templates/ --stack overcloud -n /home/stack/templates/network_data.yaml -r /home/stack/templates/roles_data_compute_ovsdpdk.yaml -e /home/stack/templates/overcloud-networks-deployed.yaml -e /home/stack/templates/overcloud-vip-deployed.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/overcloud-images.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/ neutron-ovn-dvr-ha.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/ neutron-ovn-dpdk.yaml -e /home/stack/templates/ovs-dpdk-overrides.yaml",
"openstack port set --no-security-group --disable-port-security USD{PORT}",
"openstack aggregate create dpdk_group # openstack aggregate add host dpdk_group [compute-host] # openstack aggregate set --property dpdk=true dpdk_group",
"openstack flavor create <flavor> --ram <MB> --disk <GB> --vcpus <#>",
"openstack flavor set <flavor> --property dpdk=true --property hw:cpu_policy=dedicated --property hw:mem_page_size=1GB --property hw:emulator_threads_policy=isolate",
"openstack network create net1 --provider-physical-network tenant --provider-network-type vlan --provider-segment <VLAN-ID> openstack subnet create subnet1 --network net1 --subnet-range 192.0.2.0/24 --dhcp",
"openstack image set --property hw_vif_multiqueue_enabled=true <image>",
"openstack server create --flavor <flavor> --image <glance image> --nic net-id=<network ID> <server_name>",
"ovs-vsctl list bridge br0 _uuid : bdce0825-e263-4d15-b256-f01222df96f3 auto_attach : [] controller : [] datapath_id : \"00002608cebd154d\" datapath_type : netdev datapath_version : \"<built-in>\" external_ids : {} fail_mode : [] flood_vlans : [] flow_tables : {} ipfix : [] mcast_snooping_enable: false mirrors : [] name : \"br0\" netflow : [] other_config : {} ports : [52725b91-de7f-41e7-bb49-3b7e50354138] protocols : [] rstp_enable : false rstp_status : {} sflow : [] status : {} stp_enable : false",
"less /var/log/containers/neutron/openvswitch-agent.log",
"cat /sys/devices/system/cpu/cpu4/topology/thread_siblings_list 4,20",
"ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x100010",
"tuna -t ovs-vswitchd -CP thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 3161 OTHER 0 6 765023 614 ovs-vswitchd 3219 OTHER 0 6 1 0 handler24 3220 OTHER 0 6 1 0 handler21 3221 OTHER 0 6 1 0 handler22 3222 OTHER 0 6 1 0 handler23 3223 OTHER 0 6 1 0 handler25 3224 OTHER 0 6 1 0 handler26 3225 OTHER 0 6 1 0 handler27 3226 OTHER 0 6 1 0 handler28 3227 OTHER 0 6 2 0 handler31 3228 OTHER 0 6 2 4 handler30 3229 OTHER 0 6 2 5 handler32 3230 OTHER 0 6 953538 431 revalidator29 3231 OTHER 0 6 1424258 976 revalidator33 3232 OTHER 0 6 1424693 836 revalidator34 3233 OTHER 0 6 951678 503 revalidator36 3234 OTHER 0 6 1425128 498 revalidator35 *3235 OTHER 0 4 151123 51 pmd37* *3236 OTHER 0 20 298967 48 pmd38* 3164 OTHER 0 6 47575 0 dpdk_watchdog3 3165 OTHER 0 6 237634 0 vhost_thread1 3166 OTHER 0 6 3665 0 urcu2"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_network_functions_virtualization/config-dpdk-deploy_rhosp-nfv |
Installation overview | Installation overview OpenShift Container Platform 4.13 Overview content for installing OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installation_overview/index |
A.2. Download API Documentation | A.2. Download API Documentation Javadocs for Red Hat JBoss Data Virtualization can be found on the Red Hat Customer Portal . Procedure A.1. Download API Documentation Open a web browser and navigate to https://access.redhat.com/jbossnetwork . From the Software Downloads page, when prompted for a Product , select Data Virtualization . This will present a table of files to download for the latest version of the product. Change the Version to the current release if required. Look for Red Hat JBoss Data Virtualization VERSION Javadocs in the table and select Download . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/download_api_documentation |
19.6. Additional Resources | 19.6. Additional Resources For more information, refer to the following resources. 19.6.1. Installed Documentation The man pages for ntsysv , chkconfig , xinetd , and xinetd.conf . man 5 hosts_access - The man page for the format of host access control files (in section 5 of the man pages). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/controlling_access_to_services-additional_resources |
Chapter 9. Installing a private cluster on Azure | Chapter 9. Installing a private cluster on Azure In OpenShift Container Platform version 4.12, you can install a private cluster into an existing Azure Virtual Network (VNet) on Microsoft Azure. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 9.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 9.2.1. Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending how your network connects to the private VNET, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to internet to access the Azure APIs. 
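To make the later sections more concrete, the private behavior described here is ultimately driven by a handful of install-config.yaml settings. The following fragment is only an illustrative sketch, not a complete configuration, and the user-defined routing line applies only if you choose that outbound model as described in the next section:

# excerpt from install-config.yaml (illustrative only)
publish: Internal            # keeps the API and Ingress endpoints off the public internet
platform:
  azure:
    outboundType: UserDefinedRouting   # optional; requires a pre-existing VNet and your own egress path

The remaining platform fields, such as the existing VNet and subnet names, are covered in the VNet reuse and installation configuration sections later in this chapter.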
The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 9.2.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 9.2.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. Private cluster with network address translation You can use Azure VNET network address translation (NAT) to provide outbound internet access for the subnets in your cluster. You can reference Create a NAT gateway using Azure CLI in the Azure documentation for configuration instructions. When using a VNet setup with Azure NAT and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with Azure Firewall You can use Azure Firewall to provide outbound routing for the VNet used to install the cluster. You can learn more about providing user-defined routing with Azure Firewall in the Azure documentation. When using a VNet setup with Azure Firewall and user-defined routing configured, you can create a private cluster with no public endpoints. Private cluster with a proxy configuration You can use a proxy with user-defined routing to allow egress to the internet. You must ensure that cluster Operators do not access Azure APIs using a proxy; Operators must have access to Azure APIs outside of the proxy. When using the default route table for subnets, with 0.0.0.0/0 populated automatically by Azure, all Azure API requests are routed over Azure's internal network even though the IP addresses are public. As long as the Network Security Group rules allow egress to Azure API endpoints, proxies with user-defined routing configured allow you to create private clusters with no public endpoints. Private cluster with no internet access You can install a private network that restricts all access to the internet, except the Azure API. This is accomplished by mirroring the release image registry locally. 
Your cluster must have access to the following: An OpenShift image registry mirror that allows for pulling container images Access to Azure APIs With these requirements available, you can use user-defined routing to create private clusters with no public endpoints. 9.3. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.12, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 9.3.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. 
There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 9.3.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 9.1. Required ports Port 80: Allows HTTP traffic (compute machine subnet). Port 443: Allows HTTPS traffic (compute machine subnet). Port 6443: Allows communication to the control plane machines (control plane subnet). Port 22623: Allows internal communication to the machine config server for provisioning machines (control plane subnet). Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Additional resources About the OpenShift SDN network plugin 9.3.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnets, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 9.3.3. Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 9.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster.
You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 9.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 9.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 9.7. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. 
Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 9.7.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 9.7.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 9.2. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 9.7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 9.3. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation.
networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 9.7.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 9.4. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. 
To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. 
Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 9.7.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. Table 9.5. Additional Azure parameters Parameter Description Values compute.platform.azure.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . compute.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on compute nodes. 
This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . compute.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . compute.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . compute.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . compute.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . compute.platform.azure.type Defines the Azure instance type for compute machines. String compute.platform.azure.zones The availability zones where the installation program creates compute machines. String list controlPlane.platform.azure.type Defines the Azure instance type for control plane machines. String controlPlane.platform.azure.zones The availability zones where the installation program creates control plane machines. String list platform.azure.defaultMachinePlatform.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This parameter is not a prerequisite for user-managed server-side encryption. true or false . The default is false . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example, production_disk_encryption_set . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. To avoid deleting your Azure encryption key when the cluster is destroyed, this resource group must be different from the resource group where you install the cluster. This value is necessary only if you intend to install the cluster with user-managed disk encryption. String, for example, production_encryption_resource_group . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . platform.azure.defaultMachinePlatform.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. 
The default is 128 . platform.azure.defaultMachinePlatform.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.defaultMachinePlatform.type The Azure instance type for control plane and compute machines. The Azure instance type. platform.azure.defaultMachinePlatform.zones The availability zones where the installation program creates compute and control plane machines. String list. controlPlane.platform.azure.encryptionAtHost Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . controlPlane.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . controlPlane.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . controlPlane.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt control plane machines. String, in the format 00000000-0000-0000-0000-000000000000 . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . controlPlane.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of control plane machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. 
Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.defaultMachinePlatform.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . platform.azure.defaultMachinePlatform.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. Accelerated or Basic . If instance type of control plane and compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 9.7.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 9.6. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. 
Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 9.7.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 9.1. Machine types based on 64-bit x86 architecture standardBasv2Family standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSHighMemoryv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSHighMemoryv3Family standardMIDSMediumMemoryv2Family standardMISHighMemoryv3Family standardMISMediumMemoryv2Family standardMSFamily standardMSHighMemoryv3Family standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family StandardNGADSV620v1Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 9.7.4. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 9.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family StandardDpdsv6Family StandardDpldsv6Famil StandardDplsv6Family StandardDpsv6Family StandardEpdsv6Family StandardEpsv6Family 9.7.5. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. 
You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 12 region: centralus 13 resourceGroupName: existing_resource_group 14 networkResourceGroupName: vnet_resource_group 15 virtualNetwork: vnet 16 controlPlaneSubnet: control_plane_subnet 17 computeSubnet: compute_subnet 18 outboundType: UserDefinedRouting 19 cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 1 10 13 20 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 Specify the name of the resource group that contains the DNS zone for your base domain. 14 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 15 If you use an existing VNet, specify the name of the resource group that contains it. 16 If you use an existing VNet, specify its name. 17 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 
18 If you use an existing VNet, specify the name of the subnet to host the compute machines. 19 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 9.7.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . 
Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 9.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 
Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.9. Finalizing user-managed encryption after installation If you installed OpenShift Container Platform using a user-managed encryption key, you can complete the installation by creating a new storage class and granting write permissions to the Azure cluster resource group. Procedure Obtain the identity of the cluster resource group used by the installer: If you specified an existing resource group in install-config.yaml , obtain its Azure identity by running the following command: USD az identity list --resource-group "<existing_resource_group>" If you did not specify a existing resource group in install-config.yaml , locate the resource group that the installer created, and then obtain its Azure identity by running the following commands: USD az group list USD az identity list --resource-group "<installer_created_resource_group>" Grant a role assignment to the cluster resource group so that it can write to the Disk Encryption Set by running the following command: USD az role assignment create --role "<privileged_role>" \ 1 --assignee "<resource_group_identity>" 2 1 Specifies an Azure role that has read/write permissions to the disk encryption set. You can use the Owner role or a custom role with the necessary permissions. 2 Specifies the identity of the cluster resource group. Obtain the id of the disk encryption set you created prior to installation by running the following command: USD az disk-encryption-set show -n <disk_encryption_set_name> \ 1 --resource-group <resource_group_name> 2 1 Specifies the name of the disk encryption set. 2 Specifies the resource group that contains the disk encryption set. The id is in the format of "/subscriptions/... /resourceGroups/... /providers/Microsoft.Compute/diskEncryptionSets/... " . Obtain the identity of the cluster service principal by running the following command: USD az identity show -g <cluster_resource_group> \ 1 -n <cluster_service_principal_name> \ 2 --query principalId --out tsv 1 Specifies the name of the cluster resource group created by the installation program. 2 Specifies the name of the cluster service principal created by the installation program. The identity is in the format of 12345678-1234-1234-1234-1234567890 . 
Create a role assignment that grants the cluster service principal necessary privileges to the disk encryption set by running the following command: USD az role assignment create --assignee <cluster_service_principal_id> \ 1 --role <privileged_role> \ 2 --scope <disk_encryption_set_id> \ 3 1 Specifies the ID of the cluster service principal obtained in the step. 2 Specifies the Azure role name. You can use the Contributor role or a custom role with the necessary permissions. 3 Specifies the ID of the disk encryption set. Create a storage class that uses the user-managed disk encryption set: Save the following storage class definition to a file, for example storage-class-definition.yaml : kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: "<disk_encryption_set_ID>" 1 resourceGroup: "<resource_group_name>" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer 1 Specifies the ID of the disk encryption set that you created in the prerequisite steps, for example "/subscriptions/xxxxxx-xxxxx-xxxxx/resourceGroups/test-encryption/providers/Microsoft.Compute/diskEncryptionSets/disk-encryption-set-xxxxxx" . 2 Specifies the name of the resource group used by the installer. This is the same resource group from the first step. Create the storage class managed-premium from the file you created by running the following command: USD oc create -f storage-class-definition.yaml Select the managed-premium storage class when you create persistent volumes to use encrypted storage. 9.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 9.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 9.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 9.13. steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 12 region: centralus 13 resourceGroupName: existing_resource_group 14 networkResourceGroupName: vnet_resource_group 15 virtualNetwork: vnet 16 controlPlaneSubnet: control_plane_subnet 17 computeSubnet: compute_subnet 18 outboundType: UserDefinedRouting 19 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"az identity list --resource-group \"<existing_resource_group>\"",
"az group list",
"az identity list --resource-group \"<installer_created_resource_group>\"",
"az role assignment create --role \"<privileged_role>\" \\ 1 --assignee \"<resource_group_identity>\" 2",
"az disk-encryption-set show -n <disk_encryption_set_name> \\ 1 --resource-group <resource_group_name> 2",
"az identity show -g <cluster_resource_group> \\ 1 -n <cluster_service_principal_name> \\ 2 --query principalId --out tsv",
"az role assignment create --assignee <cluster_service_principal_id> \\ 1 --role <privileged_role> \\ 2 --scope <disk_encryption_set_id> \\ 3",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: managed-premium provisioner: kubernetes.io/azure-disk parameters: skuname: Premium_LRS kind: Managed diskEncryptionSetID: \"<disk_encryption_set_ID>\" 1 resourceGroup: \"<resource_group_name>\" 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer",
"oc create -f storage-class-definition.yaml",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_azure/installing-azure-private |
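A minimal, end-to-end sketch of the CLI verification flow described above, assuming a hypothetical installation directory of ~/ocp-install (substitute the directory you passed to openshift-install); all commands are standard oc subcommands:

# Sketch: confirm the oc client works and the newly installed cluster is reachable.
export KUBECONFIG=~/ocp-install/auth/kubeconfig
oc version --client        # client binary is on your PATH and reports its version
oc whoami                  # expect system:admin when using the kubeadmin kubeconfig
oc get nodes               # control plane and compute machines should be Ready
oc get clusteroperators    # cluster Operators should report Available=True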
Chapter 2. General Updates | Chapter 2. General Updates In-place upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 An in-place upgrade offers a way of upgrading a system to a new major release of Red Hat Enterprise Linux by replacing the existing operating system. To perform an in-place upgrade, use the Preupgrade Assistant , a utility that checks the system for upgrade issues before running the actual upgrade, and that also provides additional scripts for the Red Hat Upgrade Tool . When you have solved all the problems reported by the Preupgrade Assistant , use the Red Hat Upgrade Tool to upgrade the system. For details regarding procedures and supported scenarios, see the Migration Planning Guide and the solution document dedicated to the upgrade . Note that the Preupgrade Assistant and the Red Hat Upgrade Tool are available in the Extras channel . preupgrade-assistant rebased to version 2.3.3 The preupgrade-assistant packages have been upgraded to version 2.3.3, which provides a number of bug fixes, enhancements, and other changes over the previous version. Notably: A new preupg-diff tool has been added, which compares multiple Preupgrade Assistant XML reports: one new with unidentified problems and other reports with already analyzed problems. The tool helps to find issues that emerged in the new report by filtering out results that are the same in the new report and in at least one of the analyzed XML files. The output of the trimmed report is available in the XML and HTML formats. Two new return codes have been added: 29 for internal error , and 30 for user abort . The meaning of the return code 22 has been changed to invalid CLI option . The STDOUT and STDERR output in the assessment report of the Preupgrade Assistant have been separated into two fields: Additional output for STDOUT and Logs for STDERR. The Python module to be imported by the Preupgrade Assistant modules written in Python has been renamed from preup to preupg . Additionally, the preup_ui_manage executable has been renamed to preupg-ui-manage . The exit_unknown function and the $RESULT_UNKNOWN variable have been removed. Instead of the unknown result, set the error result by using the exit_error function. The set_component module API function has been removed. The component input parameter has been removed from the following module API functions: log_error , log_warning , log_info , and log_debug . (BZ# 1427713 , BZ# 1418697 , BZ# 1392901 , BZ#1393080, BZ#1372100, BZ#1372871) Preupgrade Assistant enables blacklisting to improve performance Preupgrade Assistant now supports creation of a blacklist file, which makes it possible to skip all executable files on a path with a listed prefix. Users can activate this functionality in the /etc/preupgrade-assistant.conf file by setting the exclude_file value to the blacklist file name in the xccdf_preupg_rule_system_BinariesRebuild_check section. For example: Each line of the blacklist file contains a path prefix of executable files to be excluded. Previously, significant performance problems occurred when a large partition was mounted and the RHEL6_7/system/BinariesRebuild module checked numerous files on a list of executables. Now, users can filter out unimportant executable files and thus reduce the time the module consumes. Note that this feature is expected to be changed in the future.
(BZ#1392018) Key file names unified in Preupgrade Assistant modules Previously, each module in Preupgrade Assistant used different file names for certain required files, which made testing and orientation complicated. With this update, the key file names have been unified to module.ini (the metadata INI file), check (the check script), and solution.txt (a solution text) in each of the modules. Additionally, multiple rules (module IDs) have been renamed to conform with this change, so each rule now contains the unified _check suffix, for example, in the result.html and result.xml files. (BZ#1402478) A new RHDS module to check the possibility of an in-place upgrade of an RHDS system This update introduces a new Red Hat Directory Server (RHDS) module, which checks for relevant installed RHDS packages and gives users information about the possibility of an in-place upgrade of the RHDS system. As a result, if the relevant packages are installed, and the basic directory instance has been configured, the module creates a backup of the configuration files and prints information about them. (BZ#1406464) cloud-init moved to the Base channel As of Red Hat Enterprise Linux 6.9, the cloud-init package and its dependencies have been moved from the Red Hat Common channel to the Base channel. Cloud-init is a tool that handles early initialization of a system using metadata provided by the environment. It is typically used to configure servers booting in a cloud environment, such as OpenStack or Amazon Web Services. Note that the cloud-init package has not been updated since the latest version provided through the Red Hat Common channel. (BZ#1421281) | [
"[xccdf_preupg_rule_system_BinariesRebuild_check] exclude_file=/etc/pa_blacklist"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_release_notes/new_features_general_updates |
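As a rough sketch of the blacklist mechanism described above: the configuration stanza matches the example in this record, while the blacklist path prefixes (/mnt/bigdata, /opt/thirdparty) are hypothetical placeholders; merge the stanza into an existing [xccdf_preupg_rule_system_BinariesRebuild_check] section if one is already present rather than appending a duplicate.

# Sketch: exclude executables under selected path prefixes from the BinariesRebuild check.
# One path prefix per line; these prefixes are example values only.
cat > /etc/pa_blacklist <<'EOF'
/mnt/bigdata
/opt/thirdparty
EOF
# Point the module at the blacklist file (same stanza as the example above).
cat >> /etc/preupgrade-assistant.conf <<'EOF'
[xccdf_preupg_rule_system_BinariesRebuild_check]
exclude_file=/etc/pa_blacklist
EOF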
5. Authentication and Interoperability | 5. Authentication and Interoperability System Security Services Daemon (SSSD) The System Security Services Daemon (SSSD) implements a set of services for central management of identity and authentication. Centralizing identity and authentication services enables local caching of identities, allowing users to still identify in cases where the connection to the server is interrupted. SSSD supports many types of identity and authentication services, including: Red Hat Directory Server, OpenLDAP, 389, Kerberos and LDAP. SSSD in Red Hat Enterprise Linux 6.1 is updated to version 1.5, providing the following bug fixes and enhancements: Netgroups support Improved online/offline detection Improved LDAP access-control provider with support for shadow and authorizedService Improved caching and cleanup logic for different schemata Improved DNS based discovery Automatic Kerberos ticket renewal Enablement of the Kerberos FAST protocol Better handling of password expiration Note The Deployment Guide contains a section that describes how to install and configure SSSD. IPA Red Hat Enterprise Linux 6.1 features IPA as a Technology Preview. IPA is an integrated security information management solution which combines Red Hat Enterprise Linux, Red Hat Directory Server, MIT Kerberos, and NTP. It provides web browser and command-line interfaces, and its numerous administration tools allow an administrator to quickly install, set up, and administer one or more servers for centralized authentication and identity management. Note The Enterprise Identity Management Guide contains further information on the IPA Technology Preview. Samba Samba is an open source implementation of the Common Internet File System (CIFS) protocol. It allows the networking of Microsoft Windows, Linux, UNIX, and other operating systems together, enabling access to Windows-based file and printer shares. Samba in Red Hat Enterprise Linux 6.1 is updated to version 3.5.6. Samba in Red Hat Enterprise Linux 6.1 allows users to use their own Kerberos credentials when accessing CIFS mount, rather than needing the same mount credentials for all access to the mount. FreeRADIUS FreeRADIUS is an Internet authentication daemon, which implements the RADIUS protocol, as defined in RFC 2865 (and others). It allows Network Access Servers (NAS boxes) to perform authentication for dial-up users. FreeRADIUS in Red Hat Enterprise Linux 6.1 is updated to version 2.1.10. Kerberos Kerberos is a networked authentication system which allows users and computers to authenticate to each other with the help of a trusted third party, the KDC. In Red Hat Enterprise Linux 6.1, Kerberos (supplied by the krb5 package) is updated to version 1.9. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_release_notes/interoperability |
Chapter 1. barbican | Chapter 1. barbican The following chapter contains information about the configuration options in the barbican service. 1.1. barbican.conf This section contains options for the /etc/barbican/barbican.conf file. 1.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/barbican/barbican.conf file. . Configuration option = Default value Type Description admin_role = admin string value Role used to identify an authenticated user as administrator. allow_anonymous_access = False boolean value Allow unauthenticated users to access the API with read-only privileges. This only applies when using ContextMiddleware. api_paste_config = api-paste.ini string value File name for the paste.deploy config for api service backdoor_port = None string value Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file. backdoor_socket = None string value Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with backdoor_port in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process. client_socket_timeout = 900 integer value Timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of 0 means wait forever. conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool control_exchange = openstack string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. db_auto_create = True boolean value Create the Barbican database on service startup. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_limit_paging = 10 integer value Default page size for the limit paging URL parameter. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. host_href = http://localhost:9311 string value Host name, for use in HATEOAS-style references Note: Typically this would be the load balanced endpoint that clients would use to communicate back with this service. 
If a deployment wants to derive host from wsgi request instead then make this blank. Blank is needed to override default config value which is http://localhost:9311 `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_allowed_request_size_in_bytes = 15000 integer value Maximum allowed http request size against the barbican-api. max_allowed_secret_in_bytes = 10000 integer value Maximum allowed secret size in bytes. max_header_line = 16384 integer value Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated when keystone is configured to use PKI tokens with big service catalogs). max_limit_paging = 100 integer value Maximum page size for the limit paging URL parameter. 
max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. run_external_periodic_tasks = True boolean value Some periodic tasks can be run in a separate process. Should we run them here? sql_connection = sqlite:///barbican.sqlite string value SQLAlchemy connection string for the reference implementation registry server. Any valid SQLAlchemy connection string is fine. See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine . Note: For absolute addresses, use //// slashes after sqlite: . sql_idle_timeout = 3600 integer value Period in seconds after which SQLAlchemy should reestablish its connection to the database. MySQL uses a default wait_timeout of 8 hours, after which it will drop idle connections. This can result in MySQL Gone Away exceptions. If you notice this, you can lower this value to ensure that SQLAlchemy reconnects before MySQL can drop the connection. sql_max_retries = 60 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. sql_pool_class = QueuePool string value Accepts a class imported from the sqlalchemy.pool module, and handles the details of building the pool for you. If commented out, SQLAlchemy will select based on the database dialect. Other options are QueuePool (for SQLAlchemy-managed connections) and NullPool (to disabled SQLAlchemy management of connections). See http://docs.sqlalchemy.org/en/latest/core/pooling.html for more details sql_pool_logging = False boolean value Show SQLAlchemy pool-related debugging output in logs (sets DEBUG log level output) if specified. sql_pool_max_overflow = 10 integer value The maximum overflow size of the pool used by SQLAlchemy. When the number of checked-out connections reaches the size set in sql_pool_size, additional connections will be returned up to this limit. It follows then that the total number of simultaneous connections the pool will allow is sql_pool_size + sql_pool_max_overflow. Can be set to -1 to indicate no overflow limit, so no limit will be placed on the total number of concurrent connections. Comment out to allow SQLAlchemy to select the default. sql_pool_size = 5 integer value Size of pool used by SQLAlchemy. This is the largest number of connections that will be kept persistently in the pool. Can be set to 0 to indicate no size limit. To disable pooling, use a NullPool with sql_pool_class instead. Comment out to allow SQLAlchemy to select the default. sql_retry_interval = 1 integer value Interval between retries of opening a SQL connection. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. 
This option is ignored if log_config_append is set. tcp_keepidle = 600 integer value Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X. transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. wsgi_default_pool_size = 100 integer value Size of the pool of greenthreads used by wsgi wsgi_keep_alive = True boolean value If False, closes the client socket connection explicitly. wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f string value A python format string that is used as the template to generate log lines. The following values can beformatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. 1.1.2. certificate The following table outlines the options available under the [certificate] group in the /etc/barbican/barbican.conf file. Table 1.1. certificate Configuration option = Default value Type Description enabled_certificate_plugins = ['simple_certificate'] multi valued List of certificate plugins to load. namespace = barbican.certificate.plugin string value Extension namespace to search for plugins. 1.1.3. certificate_event The following table outlines the options available under the [certificate_event] group in the /etc/barbican/barbican.conf file. Table 1.2. certificate_event Configuration option = Default value Type Description enabled_certificate_event_plugins = ['simple_certificate_event'] multi valued List of certificate plugins to load. namespace = barbican.certificate.event.plugin string value Extension namespace to search for eventing plugins. 1.1.4. cors The following table outlines the options available under the [cors] group in the /etc/barbican/barbican.conf file. Table 1.3. 
cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Project-Id', 'X-Identity-Status', 'X-User-Id', 'X-Storage-Token', 'X-Domain-Id', 'X-User-Domain-Id', 'X-Project-Domain-Id', 'X-Roles'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Project-Id', 'X-Identity-Status', 'X-User-Id', 'X-Storage-Token', 'X-Domain-Id', 'X-User-Domain-Id', 'X-Project-Domain-Id', 'X-Roles'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 1.1.5. crypto The following table outlines the options available under the [crypto] group in the /etc/barbican/barbican.conf file. Table 1.4. crypto Configuration option = Default value Type Description enabled_crypto_plugins = ['simple_crypto'] multi valued List of crypto plugins to load. namespace = barbican.crypto.plugin string value Extension namespace to search for plugins. 1.1.6. dogtag_plugin The following table outlines the options available under the [dogtag_plugin] group in the /etc/barbican/barbican.conf file. Table 1.5. dogtag_plugin Configuration option = Default value Type Description auto_approved_profiles = caServerCert string value List of automatically approved enrollment profiles ca_expiration_time = 1 string value Time in days for CA entries to expire dogtag_host = localhost string value Hostname for the Dogtag instance dogtag_port = 8443 port value Port for the Dogtag instance nss_db_path = /etc/barbican/alias string value Path to the NSS certificate database nss_password = None string value Password for the NSS certificate databases pem_path = /etc/barbican/kra_admin_cert.pem string value Path to PEM file for authentication plugin_name = Dogtag KRA string value User friendly plugin name plugin_working_dir = /etc/barbican/dogtag string value Working directory for Dogtag plugin retries = 3 integer value Retries when storing or generating secrets simple_cmc_profile = caOtherCert string value Profile for simple CMC requests 1.1.7. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/barbican/barbican.conf file. Table 1.6. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. 
If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens *Reason:*The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = admin string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" or "admin"(default). keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. 
If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = False boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 1.1.8. keystone_notifications The following table outlines the options available under the [keystone_notifications] group in the /etc/barbican/barbican.conf file. Table 1.7. keystone_notifications Configuration option = Default value Type Description allow_requeue = False boolean value True enables requeue feature in case of notification processing error. Enable this only when underlying transport supports this feature. control_exchange = keystone string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. enable = False boolean value True enables keystone notification listener functionality. thread_pool_size = 10 integer value Define the number of max threads to be used for notification server processing functionality. topic = notifications string value Keystone notification queue topic name. This name needs to match one of values mentioned in Keystone deployment's notification_topics configuration e.g. notification_topics=notifications, barbican_notificationsMultiple servers may listen on a topic and messages will be dispatched to one of the servers in a round-robin fashion. That's why Barbican service should have its own dedicated notification queue so that it receives all of Keystone notifications. 
version = 1.0 string value Version of tasks invoked via notifications 1.1.9. kmip_plugin The following table outlines the options available under the [kmip_plugin] group in the /etc/barbican/barbican.conf file. Table 1.8. kmip_plugin Configuration option = Default value Type Description ca_certs = None string value File path to concatenated "certification authority" certificates certfile = None string value File path to local client certificate host = localhost string value Address of the KMIP server keyfile = None string value File path to local client certificate keyfile password = None string value Password for authenticating with KMIP server pkcs1_only = False boolean value Only support PKCS#1 encoding of asymmetric keys plugin_name = KMIP HSM string value User friendly plugin name port = 5696 port value Port for the KMIP server ssl_version = PROTOCOL_TLSv1_2 string value SSL version, maps to the module ssl's constants username = None string value Username for authenticating with KMIP server 1.1.10. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/barbican/barbican.conf file. Table 1.9. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. 
group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 1.1.11. 
oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/barbican/barbican.conf file. Table 1.10. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate 1.1.12. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/barbican/barbican.conf file. Table 1.11. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The Drivers(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 1.1.13. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/barbican/barbican.conf file. Table 1.12. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. 
The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist.MessageUndeliverable exception will be used to loop for a timeout to lets a chance to sender to recover.This flag is deprecated and it will not be possible to deactivate this functionality anymore enable_cancel_on_failover = False boolean value Enable x-cancel-on-ha-failover flag so that rabbitmq server will cancel and notify consumerswhen queue is down heartbeat_in_pthread = False boolean value EXPERIMENTAL: Run the health check heartbeat threadthrough a native python thread. By default if thisoption isn't provided the health check heartbeat willinherit the execution model from the parent process. Byexample if the parent process have monkey patched thestdlib by using eventlet/greenlet then the heartbeatwill be run through a green thread. heartbeat_rate = 2 integer value How often times during the heartbeat_timeout_threshold we check the heartbeat. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait a missing client before abandoning to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to backoff for between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. 
SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 1.1.14. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/barbican/barbican.conf file. Table 1.13. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. 1.1.15. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/barbican/barbican.conf file. Table 1.14. oslo_policy Configuration option = Default value Type Description enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 1.1.16. p11_crypto_plugin The following table outlines the options available under the [p11_crypto_plugin] group in the /etc/barbican/barbican.conf file. Table 1.15. p11_crypto_plugin Configuration option = Default value Type Description aes_gcm_generate_iv = True boolean value Generate IVs for CKM_AES_GCM mechanism. always_set_cka_sensitive = True boolean value Always set CKA_SENSITIVE=CK_TRUE including CKA_EXTRACTABLE=CK_TRUE keys. encryption_mechanism = CKM_AES_CBC string value Secret encryption mechanism hmac_key_type = CKK_AES string value HMAC Key Type hmac_keygen_mechanism = CKM_AES_KEY_GEN string value HMAC Key Generation Algorithm hmac_keywrap_mechanism = CKM_SHA256_HMAC string value HMAC key wrap mechanism hmac_label = None string value Master HMAC Key label (as stored in the HSM) library_path = None string value Path to vendor PKCS11 library login = None string value Password to login to PKCS11 session mkek_label = None string value Master KEK label (as stored in the HSM) mkek_length = None integer value Master KEK length in bytes. os_locking_ok = False boolean value Enable CKF_OS_LOCKING_OK flag when initializing the PKCS#11 client library. 
pkek_cache_limit = 100 integer value Project KEK Cache Item Limit pkek_cache_ttl = 900 integer value Project KEK Cache Time To Live, in seconds pkek_length = 32 integer value Project KEK length in bytes. plugin_name = PKCS11 HSM string value User friendly plugin name rw_session = True boolean value Flag for Read/Write Sessions `seed_file = ` string value File to pull entropy for seeding RNG seed_length = 32 integer value Amount of data to read from file for seed slot_id = 1 integer value (Optional) HSM Slot ID that contains the token device to be used. token_label = None string value DEPRECATED: Use token_labels instead. Token label used to identify the token to be used. token_labels = None list value List of labels for one or more tokens to be used. Typically this is a single label, but some HSM devices may require more than one label for Load Balancing or High Availability configurations. token_serial_number = None string value Token serial number used to identify the token to be used. 1.1.17. queue The following table outlines the options available under the [queue] group in the /etc/barbican/barbican.conf file. Table 1.16. queue Configuration option = Default value Type Description asynchronous_workers = 1 integer value Number of asynchronous worker processes enable = False boolean value True enables queuing, False invokes workers synchronously namespace = barbican string value Queue namespace server_name = barbican.queue string value Server name for RPC task processing server topic = barbican.workers string value Queue topic name version = 1.1 string value Version of tasks invoked via queue 1.1.18. quotas The following table outlines the options available under the [quotas] group in the /etc/barbican/barbican.conf file. Table 1.17. quotas Configuration option = Default value Type Description quota_cas = -1 integer value Number of CAs allowed per project quota_consumers = -1 integer value Number of consumers allowed per project quota_containers = -1 integer value Number of containers allowed per project quota_orders = -1 integer value Number of orders allowed per project quota_secrets = -1 integer value Number of secrets allowed per project 1.1.19. retry_scheduler The following table outlines the options available under the [retry_scheduler] group in the /etc/barbican/barbican.conf file. Table 1.18. retry_scheduler Configuration option = Default value Type Description initial_delay_seconds = 10.0 floating point value Seconds (float) to wait before starting retry scheduler periodic_interval_max_seconds = 10.0 floating point value Seconds (float) to wait between periodic schedule events 1.1.20. secretstore The following table outlines the options available under the [secretstore] group in the /etc/barbican/barbican.conf file. Table 1.19. secretstore Configuration option = Default value Type Description enable_multiple_secret_stores = False boolean value Flag to enable multiple secret store plugin backend support. Default is False enabled_secretstore_plugins = ['store_crypto'] multi valued List of secret store plugins to load. namespace = barbican.secretstore.plugin string value Extension namespace to search for plugins. stores_lookup_suffix = None list value List of suffix to use for looking up plugins which are supported with multiple backend support. 1.1.21. simple_crypto_plugin The following table outlines the options available under the [simple_crypto_plugin] group in the /etc/barbican/barbican.conf file. Table 1.20. 
simple_crypto_plugin Configuration option = Default value Type Description kek = dGhpcnR5X3R3b19ieXRlX2tleWJsYWhibGFoYmxhaGg= string value Key encryption key to be used by Simple Crypto Plugin plugin_name = Software Only Crypto string value User friendly plugin name 1.1.22. snakeoil_ca_plugin The following table outlines the options available under the [snakeoil_ca_plugin] group in the /etc/barbican/barbican.conf file. Table 1.21. snakeoil_ca_plugin Configuration option = Default value Type Description ca_cert_chain_path = None string value Path to CA certificate chain file ca_cert_key_path = None string value Path to CA certificate key file ca_cert_path = None string value Path to CA certificate file ca_cert_pkcs7_path = None string value Path to CA chain pkcs7 file subca_cert_key_directory = /etc/barbican/snakeoil-cas string value Directory in which to store certs/keys for subcas 1.1.23. ssl The following table outlines the options available under the [ssl] group in the /etc/barbican/barbican.conf file. Table 1.22. ssl Configuration option = Default value Type Description ca_file = None string value CA certificate file to use to verify connecting clients. cert_file = None string value Certificate file to use when starting the server securely. ciphers = None string value Sets the list of available ciphers. value should be a string in the OpenSSL cipher list format. key_file = None string value Private key file to use when starting the server securely. version = None string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuration_reference/barbican |
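To illustrate how a handful of the options documented above fit together, here is a minimal sketch of an /etc/barbican/barbican.conf; every value is an example placeholder (hosts, credentials, and the key encryption key must be replaced for a real deployment), and only options listed in the tables above are used:

# Sketch: write a minimal barbican.conf that uses the simple_crypto secret store backend.
cat > /etc/barbican/barbican.conf <<'EOF'
[DEFAULT]
host_href = http://barbican.example.com:9311
sql_connection = mysql+pymysql://barbican:[email protected]/barbican
transport_url = rabbit://guest:[email protected]:5672//
debug = False

[secretstore]
enabled_secretstore_plugins = store_crypto

[crypto]
enabled_crypto_plugins = simple_crypto

[simple_crypto_plugin]
# Generate your own 32-byte key encryption key instead of reusing the documented default.
kek = dGhpcnR5X3R3b19ieXRlX2tleWJsYWhibGFoYmxhaGg=
EOF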
Chapter 1. Insights client overview | Chapter 1. Insights client overview The Insights client ( insights-client ) is the client for Red Hat Insights for Red Hat Enterprise Linux. Run insights-client from the command line. 1.1. Red Hat Insights client distribution Insights client is available for the following releases of Red Hat Enterprise Linux (RHEL). RHEL release Comments RHEL 9 Distributed with Insights client pre-installed. RHEL 8 Distributed with Insights client pre-installed, unless RHEL 8 was installed as a minimal installation. RHEL 7 Distributed with the Insights client RPM package loaded but not installed. RHEL 6.10 and later You must download the Insights client RPM package and install it. Note Insights client installation on older versions RHEL versions 6 and 7 do not come with the Insights client pre-installed. If you have one of these versions, run the following commands in your terminal: Then, register the system to Red Hat Insights for Red Hat Enterprise Linux: Additional resources Getting Started with Insights | [
"yum install insights-client",
"insights-client --register"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/client_configuration_guide_for_red_hat_insights/assembly-client-cg-overview |
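As a quick end-to-end sketch of the workflow in the preceding entry, the commands below install the client on an older release and register the host. The yum install and --register steps are taken from the entry; the --test-connection and --status options are not shown in this excerpt and are assumed to be available on your insights-client version (check insights-client --help if unsure).

# Install the client (RHEL 6.10 and later, and RHEL 7; RHEL 8 and 9 normally ship with it pre-installed)
yum install insights-client
# Optional: verify that the host can reach the Red Hat Insights service
insights-client --test-connection
# Register the system with Red Hat Insights for Red Hat Enterprise Linux
insights-client --register
# Confirm that registration succeeded
insights-client --status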
22.16. Configure NTP | 22.16. Configure NTP To change the default configuration of the NTP service, use a text editor running as root user to edit the /etc/ntp.conf file. This file is installed together with ntpd and is configured to use time servers from the Red Hat pool by default. The man page ntp.conf(5) describes the command options that can be used in the configuration file apart from the access and rate limiting commands which are explained in the ntp_acc(5) man page. 22.16.1. Configure Access Control to an NTP Service To restrict or control access to the NTP service running on a system, make use of the restrict command in the ntp.conf file. See the commented out example: The restrict command takes the following form: restrict address mask option where address and mask specify the IP addresses to which you want to apply the restriction, and option is one or more of: ignore - All packets will be ignored, including ntpq and ntpdc queries. kod - a " Kiss-o'-death " packet is to be sent to reduce unwanted queries. limited - do not respond to time service requests if the packet violates the rate limit default values or those specified by the discard command. ntpq and ntpdc queries are not affected. For more information on the discard command and the default values, see Section 22.16.2, "Configure Rate Limiting Access to an NTP Service" . lowpriotrap - traps set by matching hosts to be low priority. nomodify - prevents any changes to the configuration. noquery - prevents ntpq and ntpdc queries, but not time queries, from being answered. nopeer - prevents a peer association being formed. noserve - deny all packets except ntpq and ntpdc queries. notrap - prevents ntpdc control message protocol traps. notrust - deny packets that are not cryptographically authenticated. ntpport - modify the match algorithm to only apply the restriction if the source port is the standard NTP UDP port 123 . version - deny packets that do not match the current NTP version. To configure rate limit access to not respond at all to a query, the respective restrict command has to have the limited option. If ntpd should reply with a KoD packet, the restrict command needs to have both limited and kod options. The ntpq and ntpdc queries can be used in amplification attacks (see CVE-2013-5211 for more details), do not remove the noquery option from the restrict default command on publicly accessible systems. | [
"Hosts on local network are less restricted. #restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-configure_ntp |
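Putting the restrict options from the preceding entry together, a typical access-control stanza in /etc/ntp.conf might look like the sketch below. It follows the entry's guidance that limited and kod are both required for Kiss-o'-death replies; the loopback lines are illustrative additions, and the 192.168.1.0 mask 255.255.255.0 network is the same example used above.

# Default policy: answer time requests only; rate-limit abusive clients and
# reply with a Kiss-o'-death packet (limited + kod); deny configuration
# changes (nomodify), traps (notrap), peering (nopeer), and ntpq/ntpdc queries (noquery)
restrict default limited kod nomodify notrap nopeer noquery
restrict -6 default limited kod nomodify notrap nopeer noquery

# Allow unrestricted access from the local host
restrict 127.0.0.1
restrict -6 ::1

# Hosts on the local network may get time but not modify the configuration
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap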
Chapter 2. OpenShift CLI (oc) | Chapter 2. OpenShift CLI (oc) 2.1. Getting started with the OpenShift CLI 2.1.1. About the OpenShift CLI With the OpenShift CLI ( oc ), you can create applications and manage OpenShift Dedicated projects from a terminal. The OpenShift CLI is ideal in the following situations: Working directly with project source code Scripting OpenShift Dedicated operations Managing projects while restricted by bandwidth resources and the web console is unavailable 2.1.2. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) either by downloading the binary or by using an RPM. 2.1.2.1. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Dedicated clusters from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Dedicated 4. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Dedicated downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Dedicated downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Dedicated downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.1.2.2. Installing the OpenShift CLI by using the web console You can install the OpenShift CLI ( oc ) to interact with OpenShift Dedicated clusters from a web console. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Dedicated 4. Download and install the new version of oc . 2.1.2.2.1. 
Installing the OpenShift CLI on Linux using the web console You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Download the latest version of the oc CLI for your operating system from the Downloads page on OpenShift Cluster Manager. Extract the oc binary file from the downloaded archive. USD tar xvf <file> Move the oc binary to a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.2.2. Installing the OpenShift CLI on Windows using the web console You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Download the latest version of the oc CLI for your operating system from the Downloads page on OpenShift Cluster Manager. Extract the oc binary file from the downloaded archive. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 2.1.2.2.3. Installing the OpenShift CLI on macOS using the web console You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Download the latest version of the oc CLI for your operating system from the Downloads page on OpenShift Cluster Manager. Extract the oc binary file from the downloaded archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.3. Installing the OpenShift CLI by using an RPM For Red Hat Enterprise Linux (RHEL), you can install the OpenShift CLI ( oc ) as an RPM if you have an active OpenShift Dedicated subscription on your Red Hat account. Important You must install oc for RHEL 9 by downloading the binary. Installing oc by using an RPM package is not supported on Red Hat Enterprise Linux (RHEL) 9. Prerequisites Must have root or sudo privileges. Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Dedicated subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by OpenShift Dedicated 4. # subscription-manager repos --enable="rhocp-4-for-rhel-8-x86_64-rpms" Install the openshift-clients package: # yum install openshift-clients Verification Verify your installation by using an oc command: USD oc <command> 2.1.2.4. Installing the OpenShift CLI by using Homebrew For macOS, you can install the OpenShift CLI ( oc ) by using the Homebrew package manager. Prerequisites You must have Homebrew ( brew ) installed. Procedure Install the openshift-cli package by running the following command: USD brew install openshift-cli Verification Verify your installation by using an oc command: USD oc <command> 2.1.3. Logging in to the OpenShift CLI You can log in to the OpenShift CLI ( oc ) to access and manage your cluster. Prerequisites You must have access to an OpenShift Dedicated cluster. The OpenShift CLI ( oc ) is installed. 
Note To access a cluster that is accessible only over an HTTP proxy server, you can set the HTTP_PROXY , HTTPS_PROXY and NO_PROXY variables. These environment variables are respected by the oc CLI so that all communication with the cluster goes through the HTTP proxy. Authentication headers are sent only when using HTTPS transport. Procedure Enter the oc login command and pass in a user name: USD oc login -u user1 When prompted, enter the required information: Example output Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started. 1 Enter the OpenShift Dedicated server URL. 2 Enter whether to use insecure connections. 3 Enter the user's password. Note If you are logged in to the web console, you can generate an oc login command that includes your token and server information. You can use the command to log in to the OpenShift Dedicated CLI without the interactive prompts. To generate the command, select Copy login command from the username drop-down menu at the top right of the web console. You can now create a project or issue other commands for managing your cluster. 2.1.4. Logging in to the OpenShift CLI using a web browser You can log in to the OpenShift CLI ( oc ) with the help of a web browser to access and manage your cluster. This allows users to avoid inserting their access token into the command line. Warning Logging in to the CLI through the web browser runs a server on localhost with HTTP, not HTTPS; use with caution on multi-user workstations. Prerequisites You must have access to an OpenShift Dedicated cluster. You must have installed the OpenShift CLI ( oc ). You must have a browser installed. Procedure Enter the oc login command with the --web flag: USD oc login <cluster_url> --web 1 1 Optionally, you can specify the server URL and callback port. For example, oc login <cluster_url> --web --callback-port 8280 localhost:8443 . The web browser opens automatically. If it does not, click the link in the command output. If you do not specify the OpenShift Dedicated server oc tries to open the web console of the cluster specified in the current oc configuration file. If no oc configuration exists, oc prompts interactively for the server URL. Example output Opening login URL in the default browser: https://openshift.example.com Opening in existing browser session. If more than one identity provider is available, select your choice from the options provided. Enter your username and password into the corresponding browser fields. After you are logged in, the browser displays the text access token received successfully; please return to your terminal . Check the CLI for a login confirmation. Example output Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Note The web console defaults to the profile used in the session. To switch between Administrator and Developer profiles, log out of the OpenShift Dedicated web console and clear the cache. You can now create a project or issue other commands for managing your cluster. 2.1.5. 
Using the OpenShift CLI Review the following sections to learn how to complete common tasks using the CLI. 2.1.5.1. Creating a project Use the oc new-project command to create a new project. USD oc new-project my-project Example output Now using project "my-project" on server "https://openshift.example.com:6443". 2.1.5.2. Creating a new app Use the oc new-app command to create a new application. USD oc new-app https://github.com/sclorg/cakephp-ex Example output --> Found image 40de956 (9 days old) in imagestream "openshift/php" under tag "7.2" for "php" ... Run 'oc status' to view your app. 2.1.5.3. Viewing pods Use the oc get pods command to view the pods for the current project. Note When you run oc inside a pod and do not specify a namespace, the namespace of the pod is used by default. USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none> 2.1.5.4. Viewing pod logs Use the oc logs command to view logs for a particular pod. USD oc logs cakephp-ex-1-deploy Example output --> Scaling cakephp-ex-1 to 1 --> Success 2.1.5.5. Viewing the current project Use the oc project command to view the current project. USD oc project Example output Using project "my-project" on server "https://openshift.example.com:6443". 2.1.5.6. Viewing the status for the current project Use the oc status command to view information about the current project, such as services, deployments, and build configs. USD oc status Example output In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details. 2.1.5.7. Listing supported API resources Use the oc api-resources command to view the list of supported API resources on the server. USD oc api-resources Example output NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap ... 2.1.6. Getting help You can get help with CLI commands and OpenShift Dedicated resources in the following ways: Use oc help to get a list and description of all available CLI commands: Example: Get general help for the CLI USD oc help Example output OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application ... Use the --help flag to get help about a specific CLI command: Example: Get help for the oc create command USD oc create --help Example output Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags] ... Use the oc explain command to view the description and fields for a particular resource: Example: View documentation for the Pod resource USD oc explain pods Example output KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. 
This resource is created by clients and scheduled onto hosts. FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources ... 2.1.7. Logging out of the OpenShift CLI You can log out the OpenShift CLI to end your current session. Use the oc logout command. USD oc logout Example output Logged "user1" out on "https://openshift.example.com" This deletes the saved authentication token from the server and removes it from your configuration file. 2.2. Configuring the OpenShift CLI 2.2.1. Enabling tab completion You can enable tab completion for the Bash or Zsh shells. 2.2.1.1. Enabling tab completion for Bash After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Bash shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. You must have the package bash-completion installed. Procedure Save the Bash completion code to a file: USD oc completion bash > oc_bash_completion Copy the file to /etc/bash_completion.d/ : USD sudo cp oc_bash_completion /etc/bash_completion.d/ You can also save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal. 2.2.1.2. Enabling tab completion for Zsh After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Zsh shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. Procedure To add tab completion for oc to your .zshrc file, run the following command: USD cat >>~/.zshrc<<EOF autoload -Uz compinit compinit if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF Tab completion is enabled when you open a new terminal. 2.3. Usage of oc and kubectl commands The Kubernetes command-line interface (CLI), kubectl , can be used to run commands against a Kubernetes cluster. Because OpenShift Dedicated is a certified Kubernetes distribution, you can use the supported kubectl binaries that ship with OpenShift Dedicated , or you can gain extended functionality by using the oc binary. 2.3.1. The oc binary The oc binary offers the same capabilities as the kubectl binary, but it extends to natively support additional OpenShift Dedicated features, including: Full support for OpenShift Dedicated resources Resources such as DeploymentConfig , BuildConfig , Route , ImageStream , and ImageStreamTag objects are specific to OpenShift Dedicated distributions, and build upon standard Kubernetes primitives. Authentication Additional commands The additional command oc new-app , for example, makes it easier to get new applications started using existing source code or pre-built images. Similarly, the additional command oc new-project makes it easier to start a project that you can switch to as your default. Important If you installed an earlier version of the oc binary, you cannot use it to complete all of the commands in OpenShift Dedicated 4 . If you want the latest features, you must download and install the latest version of the oc binary corresponding to your OpenShift Dedicated server version. 
Non-security API changes will involve, at minimum, two minor releases (4.1 to 4.2 to 4.3, for example) to allow older oc binaries to update. Using new capabilities might require newer oc binaries. A 4.3 server might have additional capabilities that a 4.2 oc binary cannot use and a 4.3 oc binary might have additional capabilities that are unsupported by a 4.2 server. Table 2.1. Compatibility Matrix X.Y ( oc Client) X.Y+N footnote:versionpolicyn[Where N is a number greater than or equal to 1.] ( oc Client) X.Y (Server) X.Y+N footnote:versionpolicyn[] (Server) Fully compatible. oc client might not be able to access server features. oc client might provide options and features that might not be compatible with the accessed server. 2.3.2. The kubectl binary The kubectl binary is provided as a means to support existing workflows and scripts for new OpenShift Dedicated users coming from a standard Kubernetes environment, or for those who prefer to use the kubectl CLI. Existing users of kubectl can continue to use the binary to interact with Kubernetes primitives, with no changes required to the OpenShift Dedicated cluster. You can install the supported kubectl binary by following the steps to Install the OpenShift CLI . The kubectl binary is included in the archive if you download the binary, or is installed when you install the CLI by using an RPM. For more information, see the kubectl documentation . 2.4. Managing CLI profiles A CLI configuration file allows you to configure different profiles, or contexts, for use with the CLI tools overview . A context consists of an OpenShift Dedicated server information associated with a nickname . 2.4.1. About switches between CLI profiles Contexts allow you to easily switch between multiple users across multiple OpenShift Dedicated servers, or clusters, when using CLI operations. Nicknames make managing CLI configurations easier by providing short-hand references to contexts, user credentials, and cluster details. After a user logs in with the oc CLI for the first time, OpenShift Dedicated creates a ~/.kube/config file if one does not already exist. As more authentication and connection details are provided to the CLI, either automatically during an oc login operation or by manually configuring CLI profiles, the updated information is stored in the configuration file: CLI config file apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k 1 The clusters section defines connection details for OpenShift Dedicated clusters, including the address for their master server. In this example, one cluster is nicknamed openshift1.example.com:8443 and another is nicknamed openshift2.example.com:8443 . 
2 This contexts section defines two contexts: one nicknamed alice-project/openshift1.example.com:8443/alice , using the alice-project project, openshift1.example.com:8443 cluster, and alice user, and another nicknamed joe-project/openshift1.example.com:8443/alice , using the joe-project project, openshift1.example.com:8443 cluster and alice user. 3 The current-context parameter shows that the joe-project/openshift1.example.com:8443/alice context is currently in use, allowing the alice user to work in the joe-project project on the openshift1.example.com:8443 cluster. 4 The users section defines user credentials. In this example, the user nickname alice/openshift1.example.com:8443 uses an access token. The CLI can support multiple configuration files which are loaded at runtime and merged together along with any override options specified from the command line. After you are logged in, you can use the oc status or oc project command to verify your current working environment: Verify the current working environment USD oc status Example output oc status In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'. You can use 'oc get all' to see lists of each of the types described in this example. List the current project USD oc project Example output Using project "joe-project" from context named "joe-project/openshift1.example.com:8443/alice" on server "https://openshift1.example.com:8443". You can run the oc login command again and supply the required information during the interactive process, to log in using any other combination of user credentials and cluster details. A context is constructed based on the supplied information if one does not already exist. If you are already logged in and want to switch to another project the current user already has access to, use the oc project command and enter the name of the project: USD oc project alice-project Example output Now using project "alice-project" on server "https://openshift1.example.com:8443". At any time, you can use the oc config view command to view your current CLI configuration, as seen in the output. Additional CLI configuration commands are also available for more advanced usage. Note If you have access to administrator credentials but are no longer logged in as the default system user system:admin , you can log back in as this user at any time as long as the credentials are still present in your CLI config file. The following command logs in and switches to the default project: USD oc login -u system:admin -n default 2.4.2. Manual configuration of CLI profiles Note This section covers more advanced usage of CLI configurations. In most situations, you can use the oc login and oc project commands to log in and switch between contexts and projects. If you want to manually configure your CLI config files, you can use the oc config command instead of directly modifying the files. The oc config command includes a number of helpful sub-commands for this purpose: Table 2.2. CLI configuration subcommands Subcommand Usage set-cluster Sets a cluster entry in the CLI config file. 
If the referenced cluster nickname already exists, the specified information is merged in. USD oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true] set-context Sets a context entry in the CLI config file. If the referenced context nickname already exists, the specified information is merged in. USD oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>] use-context Sets the current context using the specified context nickname. USD oc config use-context <context_nickname> set Sets an individual value in the CLI config file. USD oc config set <property_name> <property_value> The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key. The <property_value> is the new value being set. unset Unsets individual values in the CLI config file. USD oc config unset <property_name> The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key. view Displays the merged CLI configuration currently in use. USD oc config view Displays the result of the specified CLI config file. USD oc config view --config=<specific_filename> Example usage Log in as a user that uses an access token. This token is used by the alice user: USD oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0 View the cluster entry automatically created: USD oc config view Example output apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1.example.com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0 Update the current context to have users log in to the desired namespace: USD oc config set-context `oc config current-context` --namespace=<project_name> Examine the current context, to confirm that the changes are implemented: USD oc whoami -c All subsequent CLI operations uses the new context, unless otherwise specified by overriding CLI options or until the context is switched. 2.4.3. Load and merge rules You can follow these rules, when issuing CLI operations for the loading and merging order for the CLI configuration: CLI config files are retrieved from your workstation, using the following hierarchy and merge rules: If the --config option is set, then only that file is loaded. The flag is set once and no merging takes place. If the USDKUBECONFIG environment variable is set, then it is used. The variable can be a list of paths, and if so the paths are merged together. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list. Otherwise, the ~/.kube/config file is used and no merging takes place. The context to use is determined based on the first match in the following flow: The value of the --context option. The current-context value from the CLI config file. An empty value is allowed at this stage. The user and cluster to use is determined. 
At this point, you may or may not have a context; they are built based on the first match in the following flow, which is run once for the user and once for the cluster: The value of the --user for user name and --cluster option for cluster name. If the --context option is present, then use the context's value. An empty value is allowed at this stage. The actual cluster information to use is determined. At this point, you may or may not have cluster information. Each piece of the cluster information is built based on the first match in the following flow: The values of any of the following command line options: --server , --api-version --certificate-authority --insecure-skip-tls-verify If cluster information and a value for the attribute is present, then use it. If you do not have a server location, then there is an error. The actual user information to use is determined. Users are built using the same rules as clusters, except that you can only have one authentication technique per user; conflicting techniques cause the operation to fail. Command line options take precedence over config file values. Valid command line options are: --auth-path --client-certificate --client-key --token For any information that is still missing, default values are used and prompts are given for additional information. 2.5. Extending the OpenShift CLI with plugins You can write and install plugins to build on the default oc commands, allowing you to perform new and more complex tasks with the OpenShift CLI. 2.5.1. Writing CLI plugins You can write a plugin for the OpenShift CLI in any programming language or script that allows you to write command-line commands. Note that you can not use a plugin to overwrite an existing oc command. Procedure This procedure creates a simple Bash plugin that prints a message to the terminal when the oc foo command is issued. Create a file called oc-foo . When naming your plugin file, keep the following in mind: The file must begin with oc- or kubectl- to be recognized as a plugin. The file name determines the command that invokes the plugin. For example, a plugin with the file name oc-foo-bar can be invoked by a command of oc foo bar . You can also use underscores if you want the command to contain dashes. For example, a plugin with the file name oc-foo_bar can be invoked by a command of oc foo-bar . Add the following contents to the file. #!/bin/bash # optional argument handling if [[ "USD1" == "version" ]] then echo "1.0.0" exit 0 fi # optional argument handling if [[ "USD1" == "config" ]] then echo USDKUBECONFIG exit 0 fi echo "I am a plugin named kubectl-foo" After you install this plugin for the CLI, it can be invoked using the oc foo command. Additional resources Review the Sample plugin repository for an example of a plugin written in Go. Review the CLI runtime repository for a set of utilities to assist in writing plugins in Go. 2.5.2. Installing and using CLI plugins After you write a custom plugin for the OpenShift CLI, you must install the plugin before use. Prerequisites You must have the oc CLI tool installed. You must have a CLI plugin file that begins with oc- or kubectl- . Procedure If necessary, update the plugin file to be executable. USD chmod +x <plugin_file> Place the file anywhere in your PATH , such as /usr/local/bin/ . USD sudo mv <plugin_file> /usr/local/bin/. Run oc plugin list to make sure that the plugin is listed. 
USD oc plugin list Example output The following compatible plugins are available: /usr/local/bin/<plugin_file> If your plugin is not listed here, verify that the file begins with oc- or kubectl- , is executable, and is on your PATH . Invoke the new command or option introduced by the plugin. For example, if you built and installed the kubectl-ns plugin from the Sample plugin repository , you can use the following command to view the current namespace. USD oc ns Note that the command to invoke the plugin depends on the plugin file name. For example, a plugin with the file name of oc-foo-bar is invoked by the oc foo bar command. 2.6. OpenShift CLI developer command reference This reference provides descriptions and example commands for OpenShift CLI ( oc ) developer commands. Run oc help to list all commands or run oc <command> --help to get additional details for a specific command. 2.6.1. OpenShift CLI (oc) developer commands 2.6.1.1. oc annotate Update the annotations on a resource Example usage # Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in "pod.json" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description- 2.6.1.2. oc api-resources Print the supported API resources on the server Example usage # Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a specific APIGroup oc api-resources --api-group=rbac.authorization.k8s.io 2.6.1.3. oc api-versions Print the supported API versions on the server, in the form of "group/version" Example usage # Print the supported API versions oc api-versions 2.6.1.4. oc apply Apply a configuration to a resource by file name or stdin Example usage # Apply the configuration in pod.json to a pod oc apply -f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Apply the configuration from all files that end with '.json' oc apply -f '*.json' # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap 2.6.1.5. 
oc apply edit-last-applied Edit latest last-applied-configuration annotations of a resource/object Example usage # Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json 2.6.1.6. oc apply set-last-applied Set the last-applied-configuration annotation on a live object to match the contents of a file Example usage # Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true 2.6.1.7. oc apply view-last-applied View the latest last-applied-configuration annotations of a resource/object Example usage # View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json 2.6.1.8. oc attach Attach to a running container Example usage # Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx 2.6.1.9. oc auth can-i Check whether an action is allowed Example usage # Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if service account "foo" of namespace "dev" can list pods # in the namespace "prod". # You must be allowed to use impersonation for the global option "--as". oc auth can-i list pods --as=system:serviceaccount:dev:foo -n prod # Check to see if I can do everything in my current namespace ("*" means all) oc auth can-i '*' '*' # Check to see if I can get the job named "bar" in namespace "foo" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace "foo" oc auth can-i --list --namespace=foo 2.6.1.10. oc auth reconcile Reconciles rules for RBAC role, role binding, cluster role, and cluster role binding objects Example usage # Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml 2.6.1.11. oc auth whoami Experimental: Check self subject attributes Example usage # Get your subject attributes. oc auth whoami # Get your subject attributes in JSON format. oc auth whoami -o json 2.6.1.12. 
oc autoscale Autoscale a deployment config, deployment, replica set, stateful set, or replication controller Example usage # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller "foo", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80 2.6.1.13. oc cancel-build Cancel running, pending, or new builds Example usage # Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new 2.6.1.14. oc cluster-info Display cluster information Example usage # Print the address of the control plane and cluster services oc cluster-info 2.6.1.15. oc cluster-info dump Dump relevant information for debugging and diagnosis Example usage # Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state 2.6.1.16. oc completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Example usage # Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory oc completion bash > USD(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. 
## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf " # oc shell completion source 'USDHOME/.kube/completion.bash.inc' " >> USDHOME/.bash_profile source USDHOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > "USD{fpath[1]}/_oc" # Load the oc completion code for fish[2] into the current shell oc completion fish | source # To load completions for each session, execute once: oc completion fish > ~/.config/fish/completions/oc.fish # Load the oc completion code for powershell into the current shell oc completion powershell | Out-String | Invoke-Expression # Set oc completion code for powershell to run on startup ## Save completion code to a script and execute in the profile oc completion powershell > USDHOME\.kube\completion.ps1 Add-Content USDPROFILE "USDHOME\.kube\completion.ps1" ## Execute completion code in the profile Add-Content USDPROFILE "if (Get-Command oc -ErrorAction SilentlyContinue) { oc completion powershell | Out-String | Invoke-Expression }" ## Add completion code directly to the USDPROFILE script oc completion powershell >> USDPROFILE 2.6.1.17. oc config current-context Display the current-context Example usage # Display the current-context oc config current-context 2.6.1.18. oc config delete-cluster Delete the specified cluster from the kubeconfig Example usage # Delete the minikube cluster oc config delete-cluster minikube 2.6.1.19. oc config delete-context Delete the specified context from the kubeconfig Example usage # Delete the context for the minikube cluster oc config delete-context minikube 2.6.1.20. oc config delete-user Delete the specified user from the kubeconfig Example usage # Delete the minikube user oc config delete-user minikube 2.6.1.21. oc config get-clusters Display clusters defined in the kubeconfig Example usage # List the clusters that oc knows about oc config get-clusters 2.6.1.22. oc config get-contexts Describe one or many contexts Example usage # List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context 2.6.1.23. oc config get-users Display users defined in the kubeconfig Example usage # List the users that oc knows about oc config get-users 2.6.1.24. oc config new-admin-kubeconfig Generate, make the server trust, and display a new admin.kubeconfig Example usage # Generate a new admin kubeconfig oc config new-admin-kubeconfig 2.6.1.25. oc config new-kubelet-bootstrap-kubeconfig Generate, make the server trust, and display a new kubelet /etc/kubernetes/kubeconfig Example usage # Generate a new kubelet bootstrap kubeconfig oc config new-kubelet-bootstrap-kubeconfig 2.6.1.26. oc config refresh-ca-bundle Update the OpenShift CA bundle by contacting the API server Example usage # Refresh the CA bundle for the current context's cluster oc config refresh-ca-bundle # Refresh the CA bundle for the cluster named e2e in your kubeconfig oc config refresh-ca-bundle e2e # Print the CA bundle from the current OpenShift cluster's API server oc config refresh-ca-bundle --dry-run 2.6.1.27. 
oc config rename-context Rename a context from the kubeconfig file Example usage # Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name 2.6.1.28. oc config set Set an individual value in a kubeconfig file Example usage # Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data USD(echo "cert_data_here" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true 2.6.1.29. oc config set-cluster Set a cluster entry in kubeconfig Example usage # Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the e2e cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set the custom TLS server name to use for validation for the e2e cluster entry oc config set-cluster e2e --tls-server-name=my-cluster-name # Set the proxy URL for the e2e cluster entry oc config set-cluster e2e --proxy-url=https://1.2.3.4 2.6.1.30. oc config set-context Set a context entry in kubeconfig Example usage # Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin 2.6.1.31. 
oc config set-credentials Set a user entry in kubeconfig Example usage # Set only the "client-key" field on the "cluster-admin" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the "cluster-admin" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the "cluster-admin" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the "cluster-admin" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the "cluster-admin" entry with additional arguments oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the "client-secret" config value for the OpenID Connect auth provider for the "cluster-admin" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Enable new exec auth plugin for the "cluster-admin" entry with interactive mode oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 --exec-interactive-mode=Never # Define new exec auth plugin arguments for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-env=var-to-remove- 2.6.1.32. oc config unset Unset an individual value in a kubeconfig file Example usage # Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace 2.6.1.33. oc config use-context Set the current-context in a kubeconfig file Example usage # Use the context for the minikube cluster oc config use-context minikube 2.6.1.34. oc config view Display merged kubeconfig settings or a specified kubeconfig file Example usage # Show merged kubeconfig settings oc config view # Show merged kubeconfig settings, raw certificate data, and exposed secrets oc config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}' 2.6.1.35. oc cp Copy files and directories to and from containers Example usage # !!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. 
# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar 2.6.1.36. oc create Create a resource from a file or from stdin Example usage # Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in registry.yaml in JSON then create the resource using the edited data oc create -f registry.yaml --edit -o json 2.6.1.37. oc create build Create a new build Example usage # Create a new build oc create build myapp 2.6.1.38. oc create clusterresourcequota Create a cluster resource quota Example usage # Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10 2.6.1.39. oc create clusterrole Create a cluster role Example usage # Create a cluster role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named "pod-reader" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named "foo" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.apps # Create a cluster role named "foo" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name "foo" with NonResourceURL specified oc create clusterrole "foo" --verb=get --non-resource-url=/logs/* # Create a cluster role name "monitoring" with AggregationRule specified oc create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true" 2.6.1.40. oc create clusterrolebinding Create a cluster role binding for a particular cluster role Example usage # Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1 2.6.1.41. 
oc create configmap Create a config map from a local file, directory or literal value Example usage # Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env 2.6.1.42. oc create cronjob Create a cron job with the specified name Example usage # Create a cron job oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" -- date 2.6.1.43. oc create deployment Create a deployment with the specified name Example usage # Create a deployment named my-dep that runs the busybox image oc create deployment my-dep --image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701 # Create a deployment named my-dep that runs multiple containers oc create deployment my-dep --image=busybox:latest --image=ubuntu:latest --image=nginx 2.6.1.44. oc create deploymentconfig Create a deployment config with default options that uses a given image Example usage # Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx 2.6.1.45. oc create identity Manually create an identity (only needed if automatic creation is disabled) Example usage # Create an identity with identity provider "acme_ldap" and the identity provider username "adamjones" oc create identity acme_ldap:adamjones 2.6.1.46. oc create imagestream Create a new empty image stream Example usage # Create a new image stream oc create imagestream mysql 2.6.1.47. oc create imagestreamtag Create a new image stream tag Example usage # Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0 2.6.1.48. 
oc create ingress Create an ingress with the specified name Example usage # Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a TLS secret "my-cert" oc create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert" # Create a catch all ingress of "/path" pointing to service svc:port and Ingress Class as "otheringress" oc create ingress catch-all --class=otheringress --rule="/path=svc:port" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule="foo.com/bar=svc:port" \ --annotation ingress.annotation1=foo \ --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default \ --rule="foo.com/=svc:port" \ --rule="foo.com/admin/=svcadmin:portadmin" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default \ --rule="foo.com/path*=svc:8080" \ --rule="bar.com/admin*=svc2:http" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default \ --rule="foo.com/=svc:https,tls" \ --rule="foo.com/path/subpath*=othersvc:8080" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default \ --rule="foo.com/*=svc:8080,tls=secret1" # Create an ingress with a default backend oc create ingress ingdefault --class=default \ --default-backend=defaultsvc:http \ --rule="foo.com/*=svc:8080,tls=secret1" 2.6.1.49. oc create job Create a job with the specified name Example usage # Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named "a-cronjob" oc create job test-job --from=cronjob/a-cronjob 2.6.1.50. oc create namespace Create a namespace with the specified name Example usage # Create a new namespace named my-namespace oc create namespace my-namespace 2.6.1.51. oc create poddisruptionbudget Create a pod disruption budget with the specified name Example usage # Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50% 2.6.1.52. oc create priorityclass Create a priority class with the specified name Example usage # Create a priority class named high-priority oc create priorityclass high-priority --value=1000 --description="high priority" # Create a priority class named default-priority that is considered as the global default priority oc create priorityclass default-priority --value=1000 --global-default=true --description="default priority" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description="high priority" --preemption-policy="Never" 2.6.1.53. 
oc create quota Create a quota with the specified name Example usage # Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort 2.6.1.54. oc create role Create a role with single rule Example usage # Create a role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named "pod-reader" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named "foo" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.apps # Create a role named "foo" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status 2.6.1.55. oc create rolebinding Create a role binding for a particular role or cluster role Example usage # Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1 # Create a role binding for serviceaccount monitoring:sa-dev using the admin role oc create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev 2.6.1.56. oc create route edge Create a route that uses edge TLS termination Example usage # Create an edge route named "my-route" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets 2.6.1.57. oc create route passthrough Create a route that uses passthrough TLS termination Example usage # Create a passthrough route named "my-route" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com 2.6.1.58. oc create route reencrypt Create a route that uses reencrypt TLS termination Example usage # Create a route named "my-route" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend 2.6.1.59. oc create secret docker-registry Create a secret for use with a Docker registry Example usage # If you do not already have a .dockercfg file, create a dockercfg secret directly oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json 2.6.1.60. 
oc create secret generic Create a secret from a local file, directory, or literal value Example usage # Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from env files oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env 2.6.1.61. oc create secret tls Create a TLS secret Example usage # Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key 2.6.1.62. oc create service clusterip Create a ClusterIP service Example usage # Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip="None" 2.6.1.63. oc create service externalname Create an ExternalName service Example usage # Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com 2.6.1.64. oc create service loadbalancer Create a LoadBalancer service Example usage # Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080 2.6.1.65. oc create service nodeport Create a NodePort service Example usage # Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080 2.6.1.66. oc create serviceaccount Create a service account with the specified name Example usage # Create a new service account named my-service-account oc create serviceaccount my-service-account 2.6.1.67. oc create token Request a service account token Example usage # Request a token to authenticate to the kube-apiserver as the service account "myapp" in the current namespace oc create token myapp # Request a token for a service account in a custom namespace oc create token myapp --namespace myns # Request a token with a custom expiration oc create token myapp --duration 10m # Request a token with a custom audience oc create token myapp --audience https://example.com # Request a token bound to an instance of a Secret object oc create token myapp --bound-object-kind Secret --bound-object-name mysecret # Request a token bound to an instance of a Secret object with a specific UID oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc 2.6.1.68. oc create user Manually create a user (only needed if automatic creation is disabled) Example usage # Create a user with the username "ajones" and the display name "Adam Jones" oc create user ajones --full-name="Adam Jones" 2.6.1.69. oc create useridentitymapping Manually map an identity to a user Example usage # Map the identity "acme_ldap:adamjones" to the user "ajones" oc create useridentitymapping acme_ldap:adamjones ajones 2.6.1.70. 
oc debug Launch a new instance of a pod for debugging Example usage # Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Debug a Windows node # Note: the chosen image must match the Windows Server version (2019, 2022) of the node oc debug node/win-worker-1 --image=mcr.microsoft.com/powershell:lts-nanoserver-ltsc2022 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns 2.6.1.71. oc delete Delete resources by file names, stdin, resources and names, or by resources and label selector Example usage # Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete resources from all files that end with '.json' oc delete -f '*.json' # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names "baz" and "foo" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all 2.6.1.72. oc describe Show details of a specific resource or group of resources Example usage # Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in "pod.json" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe pods -l name=myLabel # Describe all pods managed by the 'frontend' replication controller # (rc-created pods get the name of the rc as a prefix in the pod name) oc describe pods frontend 2.6.1.73. oc diff Diff the live version against a would-be applied version Example usage # Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f - 2.6.1.74. oc edit Edit a resource on the server Example usage # Edit the service named 'registry' oc edit svc/registry # Use an alternative editor KUBE_EDITOR="nano" oc edit svc/registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config # Edit the 'status' subresource for the 'mydeployment' deployment oc edit deployment mydeployment --subresource='status' 2.6.1.75. 
oc events List events Example usage # List recent events in the default namespace oc events # List recent events in all namespaces oc events --all-namespaces # List recent events for the specified pod, then wait for more events and list them as they arrive oc events --for pod/web-pod-13je7 --watch # List recent events in YAML format oc events -oyaml # List recent only events of type 'Warning' or 'Normal' oc events --types=Warning,Normal 2.6.1.76. oc exec Execute a command in a container Example usage # Get output from running the 'date' command from pod mypod, using the first container by default oc exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not "ls -t /usr") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date 2.6.1.77. oc explain Get documentation for a resource Example usage # Get the documentation of the resource and its fields oc explain pods # Get all the fields in the resource oc explain pods --recursive # Get the explanation for deployment in supported api versions oc explain deployments --api-version=apps/v1 # Get the documentation of a specific field of a resource oc explain pods.spec.containers # Get the documentation of resources in different format oc explain deployment --output=plaintext-openapiv2 2.6.1.78. oc expose Expose a replicated application as a service or route Example usage # Create a route based on service nginx. The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx 2.6.1.79. oc extract Extract secrets or config maps to disk Example usage # Extract the secret "test" to the current directory oc extract secret/test # Extract the config map "nginx" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map "nginx" to STDOUT oc extract configmap/nginx --to=- # Extract only the key "nginx.conf" from config map "nginx" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf 2.6.1.80. 
oc get Display one or many resources Example usage # List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the "v1" version of the "apps" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in "pod.yaml" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7 # List the 'status' subresource for a single pod oc get pod web-pod-13je7 --subresource status 2.6.1.81. oc get-token Experimental: Get token from external OIDC issuer as credentials exec plugin Example usage # Starts an auth code flow to the issuer URL with the client ID and the given extra scopes oc get-token --client-id=client-id --issuer-url=test.issuer.url --extra-scopes=email,profile # Starts an auth code flow to the issuer URL with a different callback address oc get-token --client-id=client-id --issuer-url=test.issuer.url --callback-address=127.0.0.1:8343 2.6.1.82. oc idle Idle scalable resources Example usage # Idle the scalable controllers associated with the services listed in to-idle.txt $ oc idle --resource-names-file to-idle.txt
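An additional illustrative example, not taken from the upstream reference: oc idle can also be pointed at service endpoints directly instead of a file; the service name frontend below is hypothetical. # Idle the scalable controllers associated with the service named 'frontend' (hypothetical service name) oc idle frontend 2.6.1.83.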
oc image append Add layers to images and push them to a registry Example usage # Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{"Entrypoint":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in $(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in $(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk ($(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: The first image in the manifest list that matches the filter will be returned when --keep-manifest-list is not specified oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests when keep-manifest-list is specified oc image append --from docker.io/library/busybox:latest --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests that is specified by the filter, while preserving the manifestlist oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz 2.6.1.84. oc image extract Copy files from an image to the file system Example usage # Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract; pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:.
# Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory ($(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than $(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty ($(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than $(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:] 2.6.1.85. oc image info Display information about an image Example usage # Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64 2.6.1.86. oc image mirror Mirror images from one repository to another Example usage # Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable \ docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only the filtered manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=os/arch # Copy all os/arch manifests
of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=.* # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that the target registry may reject a manifest list if the platform specific images do not all exist # You must use a registry with sparse registry support enabled oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=linux/386 \ --keep-manifest-list=true 2.6.1.87. oc import-image Import images from a container image registry Example usage # Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Update imported data for a tag that points to a manifest list to include the full manifest list oc import-image mystream --import-mode=PreserveOriginal # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm 2.6.1.88. oc kustomize Build a kustomization target from a directory or URL Example usage # Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6 2.6.1.89. oc label Update the labels on a resource Example usage # Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in "pod.json" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar- 2.6.1.90. oc login Log in to a server Example usage # Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass # Log in to the given server through a browser oc login localhost:8443 --web --callback-port 8280 # Log in to the external OIDC issuer through Auth Code + PKCE by starting a local server listening on port 8080 oc login localhost:8443 --exec-plugin=oc-oidc --client-id=client-id --extra-scopes=email,profile --callback-port=8080 2.6.1.91. 
oc logout End the current server session Example usage # Log out oc logout 2.6.1.92. oc logs Print the logs for a container in a pod Example usage # Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container 2.6.1.93. oc new-app Create a new application Example usage # List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Use an image with the full manifest list to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match "ruby" oc new-app --search ruby # Search for "ruby", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for "ruby" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml 2.6.1.94. oc new-build Create a new build configuration Example usage # Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . 
--image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D $'FROM centos:7\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config using an image with the full manifest list to create an app and override application artifacts' names oc new-build --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp 2.6.1.95. oc new-project Request a new project Example usage # Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name="Web Team Development" --description="Development project for the web team." 2.6.1.96. oc observe Observe changes to resources and react to them (experimental) Example usage # Observe changes to services oc observe services # Observe changes to services, including the clusterIP and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh 2.6.1.97. oc patch Update fields of a resource Example usage # Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p $'spec:\n unschedulable: true' # Partially update a node identified by the type and name specified in "node.json" using strategic merge patch oc patch -f node.json -p '{"spec":{"unschedulable":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]' # Update a deployment's replicas through the 'scale' subresource using a merge patch oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}'
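A further hedged example, not from the upstream reference: a JSON merge patch is also a convenient way to add or update a single label without sending the whole object; the node name reuses k8s-node-1 from the examples above, and the label key and value are illustrative. # Add or update the label environment=prod on a node using a JSON merge patch oc patch node k8s-node-1 --type=merge -p '{"metadata":{"labels":{"environment":"prod"}}}' 2.6.1.98.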
oc plugin list List all visible plugin executables on a user's PATH Example usage # List all available plugins oc plugin list 2.6.1.99. oc policy add-role-to-user Add a role to users or service accounts for the current project Example usage # Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z serviceaccount1 2.6.1.100. oc policy scc-review Check which service account can create a pod Example usage # Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml 2.6.1.101. oc policy scc-subject-review Check whether a user or a service account can create a pod Example usage # Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml 2.6.1.102. oc port-forward Forward one or more local ports to a pod Example usage # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000 2.6.1.103. 
oc process Process a template into list of resources Example usage # Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f - 2.6.1.104. oc project Switch to another project Example usage # Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project 2.6.1.105. oc projects Display existing projects Example usage # List all projects oc projects 2.6.1.106. oc proxy Run a proxy to the Kubernetes API server Example usage # To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api 2.6.1.107. oc registry login Log in to the integrated registry Example usage # Log in to the integrated registry oc registry login # Log in to different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS 2.6.1.108. oc replace Replace a resource by file name or stdin Example usage # Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\(image: myimage\):.*USD/\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json 2.6.1.109. oc rollback Revert part of an application back to a deployment Example usage # Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json 2.6.1.110. oc rollout cancel Cancel the in-progress deployment Example usage # Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx 2.6.1.111. 
oc rollout history View rollout history Example usage # View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3 2.6.1.112. oc rollout latest Start a new rollout for a deployment config with the latest state from its triggers Example usage # Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json 2.6.1.113. oc rollout pause Mark the provided resource as paused Example usage # Mark the nginx deployment as paused. Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx 2.6.1.114. oc rollout restart Restart a resource Example usage # Restart all deployments in test-namespace namespace oc rollout restart deployment -n test-namespace # Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc # Restart deployments with the app=nginx label oc rollout restart deployment --selector=app=nginx 2.6.1.115. oc rollout resume Resume a paused resource Example usage # Resume an already paused deployment oc rollout resume dc/nginx 2.6.1.116. oc rollout retry Retry the latest failed rollout Example usage # Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend 2.6.1.117. oc rollout status Show the status of the rollout Example usage # Watch the status of the latest rollout oc rollout status dc/nginx 2.6.1.118. oc rollout undo Undo a rollout Example usage # Roll back to the deployment oc rollout undo dc/nginx # Roll back to deployment revision 3. The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3 2.6.1.119. oc rsh Start a shell session in a container Example usage # Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/scheduled 2.6.1.120. oc rsync Copy files between a local file system and a pod Example usage # Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir 2.6.1.121. 
oc run Run a particular image on the cluster Example usage # Start a nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables "DNS_DOMAIN=cluster" and "POD_NAMESPACE=default" in the container oc run hazelcast --image=hazelcast/hazelcast --env="DNS_DOMAIN=cluster" --env="POD_NAMESPACE=default" # Start a hazelcast pod and set labels "app=hazelcast" and "env=prod" in the container oc run hazelcast --image=hazelcast/hazelcast --labels="app=hazelcast,env=prod" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ... <argN> 2.6.1.122. oc scale Set a new size for a deployment, replica set, or replication controller Example usage # Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in "foo.yaml" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/example1 rc/example2 rc/example3 # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web 2.6.1.123. oc secrets link Link secrets to a service account Example usage # Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount 2.6.1.124. oc secrets unlink Detach secrets from a service account Example usage # Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name ... 2.6.1.125. oc set build-hook Update a build hook on a build config Example usage # Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script="/var/lib/test-image.sh param1 param2 && /var/lib/done.sh" 2.6.1.126. oc set build-secret Update a build secret on a build config Example usage # Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret 2.6.1.127. 
oc set data Update the data within a config map or secret Example usage # Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir 2.6.1.128. oc set deployment-hook Update a deployment hook on a deployment config Example usage # Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh 2.6.1.129. oc set env Update environment variables on a pod template Example usage # Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers="c1" ENV- # Remove the environment variable ENV from a deployment config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server oc set env | grep RAILS_ | oc env -e - dc/myapp 2.6.1.130. oc set image Update the image of a pod template Example usage # Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment config's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in YAML format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml 2.6.1.131. 
oc set image-lookup Change how images are resolved when deploying applications Example usage # Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all 2.6.1.132. oc set probe Update a probe on a pod template Example usage # Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30 2.6.1.133. oc set resources Update resource requests/limits on objects with pod templates Example usage # Set the deployment's nginx container CPU limit to 200m and its memory limit to 512Mi oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml 2.6.1.134. oc set route-backends Update the backends for a route Example usage # Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10% relative to a oc set route-backends web --adjust b=+10% # Set traffic percentage going to b to 10% of the traffic going to a oc set route-backends web --adjust b=10% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the weight to all backends to zero oc set route-backends web --zero 2.6.1.135. oc set selector Set the selector on a resource Example usage # Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip="None" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f -
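As an additional hedged sketch, not from the upstream reference: the selector can also be set (replaced) directly on an existing service on the server; the service name my-svc and the label are illustrative. # Replace the selector on the existing service 'my-svc' oc set selector service my-svc 'environment=qa' 2.6.1.136.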
oc set serviceaccount Update the service account of a resource Example usage # Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml 2.6.1.137. oc set subject Update the user, group, or service account in a role binding or cluster role binding Example usage # Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml 2.6.1.138. oc set triggers Update the triggers on one or more objects Example usage # Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main 2.6.1.139. oc set volumes Update volumes on a pod template Example usage # List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (PVC) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount "v1" from container "c1" # (and by removing the volume "v1" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) oc set volume dc/myapp --add -m /data --source=<json-string> 2.6.1.140. 
oc start-build Start a new build Example usage # Starts build from build config "hello-world" oc start-build hello-world # Starts build from a build "hello-world-1" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config "hello-world" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config "hello-world" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait 2.6.1.141. oc status Show an overview of the current project Example usage # See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest 2.6.1.142. oc tag Tag existing images into image streams Example usage # Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Tag an external container image and include the full manifest list oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --import-mode=PreserveOriginal # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d 2.6.1.143. oc version Print the client and server version information Example usage # Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context in JSON format oc version --output json # Print the OpenShift client version information for the current context oc version --client 2.6.1.144. oc wait Experimental: Wait for a specific condition on one or many resources Example usage # Wait for the pod "busybox1" to contain the status condition of type "Ready" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity) oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod "busybox1" to contain the status phase to be "Running" oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1 # Wait for pod "busybox1" to be Ready oc wait --for='jsonpath={.status.conditions[?(@.type=="Ready")].status}=True' pod/busybox1 # Wait for the service "loadbalancer" to have ingress. 
oc wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer # Wait for the pod "busybox1" to be deleted, with a timeout of 60s, after having issued the "delete" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s 2.6.1.145. oc whoami Return information about the current session Example usage # Display the currently authenticated user oc whoami 2.7. OpenShift CLI administrator command reference This reference provides descriptions and example commands for OpenShift CLI ( oc ) administrator commands. You must have cluster-admin or equivalent permissions to use these commands. For developer commands, see the OpenShift CLI developer command reference . Run oc adm -h to list all administrator commands or run oc <command> --help to get additional details for a specific command. 2.7.1. OpenShift CLI (oc) administrator commands 2.7.1.1. oc adm build-chain Output the inputs and dependencies of your builds Example usage # Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all 2.7.1.2. oc adm catalog mirror Mirror an operator-registry catalog Example usage # Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageDigestMirrorSet.yaml # Edit the mirroring mappings and mirror with "oc image mirror" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageDigestMirrorSets generated by oc adm catalog mirror oc delete imagedigestmirrorset -l operators.openshift.org/catalog=true 2.7.1.3. oc adm certificate approve Approve a certificate signing request Example usage # Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp 2.7.1.4. oc adm certificate deny Deny a certificate signing request Example usage # Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp 2.7.1.5. oc adm copy-to-node Copy specified files to the node Example usage # Copy a new bootstrap kubeconfig file to node-0 oc adm copy-to-node --copy=new-bootstrap-kubeconfig=/etc/kubernetes/kubeconfig node/node-0 2.7.1.6. oc adm cordon Mark node as unschedulable Example usage # Mark node "foo" as unschedulable oc adm cordon foo 2.7.1.7. oc adm create-bootstrap-project-template Create a bootstrap project template Example usage # Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml 2.7.1.8. oc adm create-error-template Create an error page template Example usage # Output a template for the error page to stdout oc adm create-error-template 2.7.1.9. 
oc adm create-login-template Create a login template Example usage # Output a template for the login page to stdout oc adm create-login-template 2.7.1.10. oc adm create-provider-selection-template Create a provider selection template Example usage # Output a template for the provider selection page to stdout oc adm create-provider-selection-template 2.7.1.11. oc adm drain Drain node in preparation for maintenance Example usage # Drain node "foo", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900 2.7.1.12. oc adm groups add-users Add users to a group Example usage # Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2 2.7.1.13. oc adm groups new Create a new group Example usage # Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name 2.7.1.14. oc adm groups prune Remove old OpenShift groups referencing missing records from an external provider Example usage # Prune all orphaned groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm groups prune --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm groups prune --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm 2.7.1.15. oc adm groups remove-users Remove users from a group Example usage # Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2 2.7.1.16. oc adm groups sync Sync OpenShift groups with records from an external provider Example usage # Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in an allowlist file with an LDAP server oc adm groups sync --whitelist=/path/to/allowlist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm 2.7.1.17. 
oc adm inspect Collect debugging data for a given resource Example usage # Collect debugging data for the "openshift-apiserver" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the "openshift-apiserver" and "kube-apiserver" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions 2.7.1.18. oc adm migrate icsp Update imagecontentsourcepolicy file(s) to imagedigestmirrorset file(s) Example usage # Update the imagecontentsourcepolicy.yaml file to a new imagedigestmirrorset file under the mydir directory oc adm migrate icsp imagecontentsourcepolicy.yaml --dest-dir mydir 2.7.1.19. oc adm migrate template-instances Update template instances to point to the latest group-version-kinds Example usage # Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm 2.7.1.20. oc adm must-gather Launch a new instance of a pod for gathering debug information Example usage # Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod directory oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh 2.7.1.21. oc adm new-project Create a new project Example usage # Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east' 2.7.1.22. oc adm node-image create Create an ISO image for booting the nodes to be added to the target cluster Example usage # Create the ISO image and download it in the current folder oc adm node-image create # Use a different assets folder oc adm node-image create --dir=/tmp/assets # Specify a custom image name oc adm node-image create -o=my-node.iso # Create an ISO to add a single node without using the configuration file oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb # Create an ISO to add a single node with a root device hint and without # using the configuration file oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb --root-device-hint=deviceName:/dev/sda 2.7.1.23. oc adm node-image monitor Monitor new nodes being added to an OpenShift cluster Example usage # Monitor a single node being added to a cluster oc adm node-image monitor --ip-addresses 192.168.111.83 # Monitor multiple nodes being added to a cluster by separating each IP address with a comma oc adm node-image monitor --ip-addresses 192.168.111.83,192.168.111.84 2.7.1.24. 
oc adm node-logs Display and filter node logs Example usage # Show kubelet logs from all control plane nodes oc adm node-logs --role master -u kubelet # See what logs are available in control plane nodes in /var/log oc adm node-logs --role master --path=/ # Display cron log file from all control plane nodes oc adm node-logs --role master --path=cron 2.7.1.25. oc adm ocp-certificates monitor-certificates Watch platform certificates Example usage # Watch platform certificates oc adm ocp-certificates monitor-certificates 2.7.1.26. oc adm ocp-certificates regenerate-leaf Regenerate client and serving certificates of an OpenShift cluster Example usage # Regenerate a leaf certificate contained in a particular secret oc adm ocp-certificates regenerate-leaf -n openshift-config-managed secret/kube-controller-manager-client-cert-key 2.7.1.27. oc adm ocp-certificates regenerate-machine-config-server-serving-cert Regenerate the machine config operator certificates in an OpenShift cluster Example usage # Regenerate the MCO certs without modifying user-data secrets oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server 2.7.1.28. oc adm ocp-certificates regenerate-top-level Regenerate the top level certificates in an OpenShift cluster Example usage # Regenerate the signing certificate contained in a particular secret oc adm ocp-certificates regenerate-top-level -n openshift-kube-apiserver-operator secret/loadbalancer-serving-signer-key 2.7.1.29. oc adm ocp-certificates remove-old-trust Remove old CAs from ConfigMaps representing platform trust bundles in an OpenShift cluster Example usage # Remove a trust bundle contained in a particular config map oc adm ocp-certificates remove-old-trust -n openshift-config-managed configmaps/kube-apiserver-aggregator-client-ca --created-before 2023-06-05T14:44:06Z # Remove only CA certificates created before a certain date from all trust bundles oc adm ocp-certificates remove-old-trust configmaps -A --all --created-before 2023-06-05T14:44:06Z 2.7.1.30. oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server Update user-data secrets in an OpenShift cluster to use updated MCO certs Example usage # Regenerate the MCO certs without modifying user-data secrets oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server 2.7.1.31. oc adm pod-network isolate-projects Isolate project network Example usage # Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own isolated project network oc adm pod-network isolate-projects --selector='name=top-secret' 2.7.1.32. oc adm pod-network join-projects Join project network Example usage # Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret' 2.7.1.33.
oc adm pod-network make-projects-global Make project network global Example usage # Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share' 2.7.1.34. oc adm policy add-cluster-role-to-group Add a role to groups for all projects in the cluster Example usage # Add the 'cluster-admin' cluster role to the 'cluster-admins' group oc adm policy add-cluster-role-to-group cluster-admin cluster-admins 2.7.1.35. oc adm policy add-cluster-role-to-user Add a role to users for all projects in the cluster Example usage # Add the 'system:build-strategy-docker' cluster role to the 'devuser' user oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser 2.7.1.36. oc adm policy add-role-to-user Add a role to users or service accounts for the current project Example usage # Add the 'view' role to user1 for the current project oc adm policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc adm policy add-role-to-user edit -z serviceaccount1 2.7.1.37. oc adm policy add-scc-to-group Add a security context constraint to groups Example usage # Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2 2.7.1.38. oc adm policy add-scc-to-user Add a security context constraint to users or a service account Example usage # Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1 2.7.1.39. oc adm policy remove-cluster-role-from-group Remove a role from groups for all projects in the cluster Example usage # Remove the 'cluster-admin' cluster role from the 'cluster-admins' group oc adm policy remove-cluster-role-from-group cluster-admin cluster-admins 2.7.1.40. oc adm policy remove-cluster-role-from-user Remove a role from users for all projects in the cluster Example usage # Remove the 'system:build-strategy-docker' cluster role from the 'devuser' user oc adm policy remove-cluster-role-from-user system:build-strategy-docker devuser 2.7.1.41. oc adm policy scc-review Check which service account can create a pod Example usage # Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc adm policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc adm policy scc-review -f myresource_with_no_sa.yaml 2.7.1.42. 
oc adm policy scc-subject-review Check whether a user or a service account can create a pod Example usage # Check whether user bob can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc adm policy scc-subject-review -f myresourcewithsa.yaml 2.7.1.43. oc adm prune builds Remove old completed and failed builds Example usage # Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm 2.7.1.44. oc adm prune deployments Remove old completed and failed deployment configs Example usage # Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments --keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm 2.7.1.45. oc adm prune groups Remove old OpenShift groups referencing missing records from an external provider Example usage # Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm prune groups --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm prune groups --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm 2.7.1.46. oc adm prune images Remove unreferenced images Example usage # See what the prune command would delete if only images and their referrers were more than an hour old # and obsoleted by 3 newer revisions under the same tag were considered oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm # See what the prune command would delete if we are interested in removing images # exceeding currently set limit ranges ('openshift.io/Image') oc adm prune images --prune-over-size-limit # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --prune-over-size-limit --confirm # Force the insecure HTTP protocol with the particular registry host name oc adm prune images --registry-url=http://registry.example.org --confirm # Force a secure connection with a custom certificate authority to the particular registry host name oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm 2.7.1.47. 
oc adm prune renderedmachineconfigs Prunes rendered MachineConfigs in an OpenShift cluster Example usage # See what the prune command would delete if run with no options oc adm prune renderedmachineconfigs # To actually perform the prune operation, the confirm flag must be appended oc adm prune renderedmachineconfigs --confirm # See what the prune command would delete if run on the worker MachineConfigPool oc adm prune renderedmachineconfigs --pool-name=worker # Prunes 10 oldest rendered MachineConfigs in the cluster oc adm prune renderedmachineconfigs --count=10 --confirm # Prunes 10 oldest rendered MachineConfigs in the cluster for the worker MachineConfigPool oc adm prune renderedmachineconfigs --count=10 --pool-name=worker --confirm 2.7.1.48. oc adm prune renderedmachineconfigs list Lists rendered MachineConfigs in an OpenShift cluster Example usage # List all rendered MachineConfigs for the worker MachineConfigPool in the cluster oc adm prune renderedmachineconfigs list --pool-name=worker # List all rendered MachineConfigs in use by the cluster's MachineConfigPools oc adm prune renderedmachineconfigs list --in-use 2.7.1.49. oc adm reboot-machine-config-pool Initiate reboot of the specified MachineConfigPool Example usage # Reboot all MachineConfigPools oc adm reboot-machine-config-pool mcp/worker mcp/master # Reboot all MachineConfigPools that inherit from worker. This includes all custom MachineConfigPools and infra. oc adm reboot-machine-config-pool mcp/worker # Reboot masters oc adm reboot-machine-config-pool mcp/master 2.7.1.50. oc adm release extract Extract the contents of an update payload to disk Example usage # Use git to check out the source code for the current cluster release to DIR oc adm release extract --git=DIR # Extract cloud credential requests for AWS oc adm release extract --credentials-requests --cloud=aws # Use git to check out the source code for the current cluster release to DIR from linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x 2.7.1.51. oc adm release info Display information about a release Example usage # Show information about the cluster's current release oc adm release info # Show the source code that comprises a release oc adm release info 4.11.2 --commit-urls # Show the source code difference between two releases oc adm release info 4.11.0 4.11.2 --commits # Show where the images referenced by the release are located oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --pullspecs # Show information about linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x 2.7.1.52.
oc adm release mirror Mirror a release to a different image registry location Example usage # Perform a dry run showing what would be mirrored, including the mirror objects oc adm release mirror 4.11.0 --to myregistry.local/openshift/release \ --release-image-signature-to-dir /tmp/releases --dry-run # Mirror a release into the current directory oc adm release mirror 4.11.0 --to file://openshift/release \ --release-image-signature-to-dir /tmp/releases # Mirror a release to another directory in the default location oc adm release mirror 4.11.0 --to-dir /tmp/releases # Upload a release from the current directory to another server oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release \ --release-image-signature-to-dir /tmp/releases # Mirror the 4.11.0 release to repository registry.example.com and apply signatures to connected cluster oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64 \ --to=registry.example.com/your/repository --apply-release-image-signature 2.7.1.53. oc adm release new Create a new OpenShift release Example usage # Create a release from the latest origin images and push to a DockerHub repository oc adm release new --from-image-stream=4.11 -n origin --to-image docker.io/mycompany/myrepo:latest # Create a new release with updated metadata from a previous release oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 --name 4.11.1 \ --previous 4.11.0 --metadata ... --to-image docker.io/mycompany/myrepo:latest # Create a new release and override a single image oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 \ cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest # Run a verification pass to ensure the release can be reproduced oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 2.7.1.54. oc adm restart-kubelet Restart kubelet on the specified nodes Example usage # Restart all the nodes, 10% at a time oc adm restart-kubelet nodes --all --directive=RemoveKubeletKubeconfig # Restart all the nodes, 20 nodes at a time oc adm restart-kubelet nodes --all --parallelism=20 --directive=RemoveKubeletKubeconfig # Restart all the nodes, 15% at a time oc adm restart-kubelet nodes --all --parallelism=15% --directive=RemoveKubeletKubeconfig # Restart all the masters at the same time oc adm restart-kubelet nodes -l node-role.kubernetes.io/master --parallelism=100% --directive=RemoveKubeletKubeconfig 2.7.1.55. oc adm taint Update the taints on one or more nodes Example usage # Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified oc adm taint nodes foo dedicated=special-user:NoSchedule # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists oc adm taint nodes foo dedicated:NoSchedule- # Remove from node 'foo' all the taints with key 'dedicated' oc adm taint nodes foo dedicated- # Add a taint with key 'dedicated' on nodes having label myLabel=X oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule # Add to node 'foo' a taint with key 'bar' and no value oc adm taint nodes foo bar:NoSchedule 2.7.1.56. oc adm top images Show usage statistics for images Example usage # Show usage statistics for images oc adm top images 2.7.1.57.
oc adm top imagestreams Show usage statistics for image streams Example usage # Show usage statistics for image streams oc adm top imagestreams 2.7.1.58. oc adm top node Display resource (CPU/memory) usage of nodes Example usage # Show metrics for all nodes oc adm top node # Show metrics for a given node oc adm top node NODE_NAME 2.7.1.59. oc adm top pod Display resource (CPU/memory) usage of pods Example usage # Show metrics for all pods in the default namespace oc adm top pod # Show metrics for all pods in the given namespace oc adm top pod --namespace=NAMESPACE # Show metrics for a given pod and its containers oc adm top pod POD_NAME --containers # Show metrics for the pods defined by label name=myLabel oc adm top pod -l name=myLabel 2.7.1.60. oc adm uncordon Mark node as schedulable Example usage # Mark node "foo" as schedulable oc adm uncordon foo 2.7.1.61. oc adm upgrade Upgrade a cluster or adjust the upgrade channel Example usage # View the update status and available cluster updates oc adm upgrade # Update to the latest version oc adm upgrade --to-latest=true 2.7.1.62. oc adm verify-image-signature Verify the image identity contained in the image signature Example usage # Verify the image signature and identity using the local GPG keychain oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \ --expected-identity=registry.local:5000/foo/bar:v1 # Verify the image signature and identity using the local GPG keychain and save the status oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \ --expected-identity=registry.local:5000/foo/bar:v1 --save # Verify the image signature and identity via exposed registry route oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \ --expected-identity=registry.local:5000/foo/bar:v1 \ --registry-url=docker-registry.foo.com # Remove all signature verifications from the image oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all 2.7.1.63. oc adm wait-for-node-reboot Wait for nodes to reboot after running oc adm reboot-machine-config-pool Example usage # Wait for all nodes to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/worker mcp/master' oc adm wait-for-node-reboot nodes --all # Wait for masters to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/master' oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master # Wait for masters to complete a specific reboot oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master --reboot-number=4 2.7.1.64. oc adm wait-for-stable-cluster Wait for the platform operators to become stable Example usage # Wait for all cluster operators to become stable oc adm wait-for-stable-cluster # Consider operators to be stable if they report as such for 5 minutes straight oc adm wait-for-stable-cluster --minimum-stable-period 5m 2.7.2. Additional resources OpenShift CLI developer command reference | [
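"# Illustrative admin workflow (not from the original chapter; <node> is a placeholder): cordon a node, drain it for maintenance, then make it schedulable again oc adm cordon <node> oc adm drain <node> --ignore-daemonsets oc adm uncordon <node>",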
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"rhocp-4-for-rhel-8-x86_64-rpms\"",
"yum install openshift-clients",
"oc <command>",
"brew install openshift-cli",
"oc <command>",
"oc login -u user1",
"Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started.",
"oc login <cluster_url> --web 1",
"Opening login URL in the default browser: https://openshift.example.com Opening in existing browser session.",
"Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname>",
"oc new-project my-project",
"Now using project \"my-project\" on server \"https://openshift.example.com:6443\".",
"oc new-app https://github.com/sclorg/cakephp-ex",
"--> Found image 40de956 (9 days old) in imagestream \"openshift/php\" under tag \"7.2\" for \"php\" Run 'oc status' to view your app.",
"oc get pods -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none>",
"oc logs cakephp-ex-1-deploy",
"--> Scaling cakephp-ex-1 to 1 --> Success",
"oc project",
"Using project \"my-project\" on server \"https://openshift.example.com:6443\".",
"oc status",
"In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details.",
"oc api-resources",
"NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap",
"oc help",
"OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application",
"oc create --help",
"Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags]",
"oc explain pods",
"KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources",
"oc logout",
"Logged \"user1\" out on \"https://openshift.example.com\"",
"oc completion bash > oc_bash_completion",
"sudo cp oc_bash_completion /etc/bash_completion.d/",
"cat >>~/.zshrc<<EOF autoload -Uz compinit compinit if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF",
"apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k",
"oc status",
"status In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'. You can use 'oc get all' to see lists of each of the types described in this example.",
"oc project",
"Using project \"joe-project\" from context named \"joe-project/openshift1.example.com:8443/alice\" on server \"https://openshift1.example.com:8443\".",
"oc project alice-project",
"Now using project \"alice-project\" on server \"https://openshift1.example.com:8443\".",
"oc login -u system:admin -n default",
"oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true]",
"oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>]",
"oc config use-context <context_nickname>",
"oc config set <property_name> <property_value>",
"oc config unset <property_name>",
"oc config view",
"oc config view --config=<specific_filename>",
"oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0",
"oc config view",
"apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1.example.com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0",
"oc config set-context `oc config current-context` --namespace=<project_name>",
"oc whoami -c",
"#!/bin/bash optional argument handling if [[ \"USD1\" == \"version\" ]] then echo \"1.0.0\" exit 0 fi optional argument handling if [[ \"USD1\" == \"config\" ]] then echo USDKUBECONFIG exit 0 fi echo \"I am a plugin named kubectl-foo\"",
"chmod +x <plugin_file>",
"sudo mv <plugin_file> /usr/local/bin/.",
"oc plugin list",
"The following compatible plugins are available: /usr/local/bin/<plugin_file>",
"oc ns",
"Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in \"pod.json\" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description-",
"Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a specific APIGroup oc api-resources --api-group=rbac.authorization.k8s.io",
"Print the supported API versions oc api-versions",
"Apply the configuration in pod.json to a pod oc apply -f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Apply the configuration from all files that end with '.json' oc apply -f '*.json' # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap",
"Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json",
"Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true",
"View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json",
"Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx",
"Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if service account \"foo\" of namespace \"dev\" can list pods # in the namespace \"prod\". # You must be allowed to use impersonation for the global option \"--as\". oc auth can-i list pods --as=system:serviceaccount:dev:foo -n prod # Check to see if I can do everything in my current namespace (\"*\" means all) oc auth can-i '*' '*' # Check to see if I can get the job named \"bar\" in namespace \"foo\" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace \"foo\" oc auth can-i --list --namespace=foo",
"Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml",
"Get your subject attributes. oc auth whoami # Get your subject attributes in JSON format. oc auth whoami -o json",
"Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80",
"Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new",
"Print the address of the control plane and cluster services oc cluster-info",
"Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state",
"Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory oc completion bash > USD(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. ## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf \" # oc shell completion source 'USDHOME/.kube/completion.bash.inc' \" >> USDHOME/.bash_profile source USDHOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > \"USD{fpath[1]}/_oc\" # Load the oc completion code for fish[2] into the current shell oc completion fish | source # To load completions for each session, execute once: oc completion fish > ~/.config/fish/completions/oc.fish # Load the oc completion code for powershell into the current shell oc completion powershell | Out-String | Invoke-Expression # Set oc completion code for powershell to run on startup ## Save completion code to a script and execute in the profile oc completion powershell > USDHOME\\.kube\\completion.ps1 Add-Content USDPROFILE \"USDHOME\\.kube\\completion.ps1\" ## Execute completion code in the profile Add-Content USDPROFILE \"if (Get-Command oc -ErrorAction SilentlyContinue) { oc completion powershell | Out-String | Invoke-Expression }\" ## Add completion code directly to the USDPROFILE script oc completion powershell >> USDPROFILE",
"Display the current-context oc config current-context",
"Delete the minikube cluster oc config delete-cluster minikube",
"Delete the context for the minikube cluster oc config delete-context minikube",
"Delete the minikube user oc config delete-user minikube",
"List the clusters that oc knows about oc config get-clusters",
"List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context",
"List the users that oc knows about oc config get-users",
"Generate a new admin kubeconfig oc config new-admin-kubeconfig",
"Generate a new kubelet bootstrap kubeconfig oc config new-kubelet-bootstrap-kubeconfig",
"Refresh the CA bundle for the current context's cluster oc config refresh-ca-bundle # Refresh the CA bundle for the cluster named e2e in your kubeconfig oc config refresh-ca-bundle e2e # Print the CA bundle from the current OpenShift cluster's API server oc config refresh-ca-bundle --dry-run",
"Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name",
"Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data USD(echo \"cert_data_here\" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true",
"Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the e2e cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set the custom TLS server name to use for validation for the e2e cluster entry oc config set-cluster e2e --tls-server-name=my-cluster-name # Set the proxy URL for the e2e cluster entry oc config set-cluster e2e --proxy-url=https://1.2.3.4",
"Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin",
"Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the \"cluster-admin\" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the \"cluster-admin\" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional arguments oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Enable new exec auth plugin for the \"cluster-admin\" entry with interactive mode oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 --exec-interactive-mode=Never # Define new exec auth plugin arguments for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=var-to-remove-",
"Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace",
"Use the context for the minikube cluster oc config use-context minikube",
"Show merged kubeconfig settings oc config view # Show merged kubeconfig settings, raw certificate data, and exposed secrets oc config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'",
"!!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar",
"Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in registry.yaml in JSON then create the resource using the edited data oc create -f registry.yaml --edit -o json",
"Create a new build oc create build myapp",
"Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10",
"Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named \"pod-reader\" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named \"foo\" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.apps # Create a cluster role named \"foo\" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name \"foo\" with NonResourceURL specified oc create clusterrole \"foo\" --verb=get --non-resource-url=/logs/* # Create a cluster role name \"monitoring\" with AggregationRule specified oc create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"",
"Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1",
"Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env",
"Create a cron job oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date",
"Create a deployment named my-dep that runs the busybox image oc create deployment my-dep --image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701 # Create a deployment named my-dep that runs multiple containers oc create deployment my-dep --image=busybox:latest --image=ubuntu:latest --image=nginx",
"Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx",
"Create an identity with identity provider \"acme_ldap\" and the identity provider username \"adamjones\" oc create identity acme_ldap:adamjones",
"Create a new image stream oc create imagestream mysql",
"Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0",
"Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a TLS secret \"my-cert\" oc create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\" # Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\" oc create ingress catch-all --class=otheringress --rule=\"/path=svc:port\" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" --annotation ingress.annotation1=foo --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default --rule=\"foo.com/=svc:port\" --rule=\"foo.com/admin/=svcadmin:portadmin\" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default --rule=\"foo.com/path*=svc:8080\" --rule=\"bar.com/admin*=svc2:http\" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default --rule=\"foo.com/=svc:https,tls\" --rule=\"foo.com/path/subpath*=othersvc:8080\" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default --rule=\"foo.com/*=svc:8080,tls=secret1\" # Create an ingress with a default backend oc create ingress ingdefault --class=default --default-backend=defaultsvc:http --rule=\"foo.com/*=svc:8080,tls=secret1\"",
"Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named \"a-cronjob\" oc create job test-job --from=cronjob/a-cronjob",
"Create a new namespace named my-namespace oc create namespace my-namespace",
"Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50%",
"Create a priority class named high-priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" # Create a priority class named default-priority that is considered as the global default priority oc create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"",
"Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort",
"Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named \"pod-reader\" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named \"foo\" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.apps # Create a role named \"foo\" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status",
"Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1 # Create a role binding for serviceaccount monitoring:sa-dev using the admin role oc create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev",
"Create an edge route named \"my-route\" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets",
"Create a passthrough route named \"my-route\" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com",
"Create a route named \"my-route\" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend",
"If you do not already have a .dockercfg file, create a dockercfg secret directly oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json",
"Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from env files oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env",
"Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.crt --key=path/to/tls.key",
"Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip=\"None\"",
"Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com",
"Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080",
"Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080",
"Create a new service account named my-service-account oc create serviceaccount my-service-account",
"Request a token to authenticate to the kube-apiserver as the service account \"myapp\" in the current namespace oc create token myapp # Request a token for a service account in a custom namespace oc create token myapp --namespace myns # Request a token with a custom expiration oc create token myapp --duration 10m # Request a token with a custom audience oc create token myapp --audience https://example.com # Request a token bound to an instance of a Secret object oc create token myapp --bound-object-kind Secret --bound-object-name mysecret # Request a token bound to an instance of a Secret object with a specific UID oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc",
"Create a user with the username \"ajones\" and the display name \"Adam Jones\" oc create user ajones --full-name=\"Adam Jones\"",
"Map the identity \"acme_ldap:adamjones\" to the user \"ajones\" oc create useridentitymapping acme_ldap:adamjones ajones",
"Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Debug a Windows node # Note: the chosen image must match the Windows Server version (2019, 2022) of the node oc debug node/win-worker-1 --image=mcr.microsoft.com/powershell:lts-nanoserver-ltsc2022 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns",
"Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete resources from all files that end with '.json' oc delete -f '*.json' # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names \"baz\" and \"foo\" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all",
"Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in \"pod.json\" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe pods -l name=myLabel # Describe all pods managed by the 'frontend' replication controller # (rc-created pods get the name of the rc as a prefix in the pod name) oc describe pods frontend",
"Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f -",
"Edit the service named 'registry' oc edit svc/registry # Use an alternative editor KUBE_EDITOR=\"nano\" oc edit svc/registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config # Edit the 'status' subresource for the 'mydeployment' deployment oc edit deployment mydeployment --subresource='status'",
"List recent events in the default namespace oc events # List recent events in all namespaces oc events --all-namespaces # List recent events for the specified pod, then wait for more events and list them as they arrive oc events --for pod/web-pod-13je7 --watch # List recent events in YAML format oc events -oyaml # List recent only events of type 'Warning' or 'Normal' oc events --types=Warning,Normal",
"Get output from running the 'date' command from pod mypod, using the first container by default oc exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not \"ls -t /usr\") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date",
"Get the documentation of the resource and its fields oc explain pods # Get all the fields in the resource oc explain pods --recursive # Get the explanation for deployment in supported api versions oc explain deployments --api-version=apps/v1 # Get the documentation of a specific field of a resource oc explain pods.spec.containers # Get the documentation of resources in different format oc explain deployment --output=plaintext-openapiv2",
"Create a route based on service nginx. The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx",
"Extract the secret \"test\" to the current directory oc extract secret/test # Extract the config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map \"nginx\" to STDOUT oc extract configmap/nginx --to=- # Extract only the key \"nginx.conf\" from config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf",
"List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in \"pod.yaml\" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7 # List the 'status' subresource for a single pod oc get pod web-pod-13je7 --subresource status",
"Starts an auth code flow to the issuer URL with the client ID and the given extra scopes oc get-token --client-id=client-id --issuer-url=test.issuer.url --extra-scopes=email,profile # Starts an auth code flow to the issuer URL with a different callback address oc get-token --client-id=client-id --issuer-url=test.issuer.url --callback-address=127.0.0.1:8343",
"Idle the scalable controllers associated with the services listed in to-idle.txt USD oc idle --resource-names-file to-idle.txt",
"Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{\"Entrypoint\":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in USD(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in USD(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk (USD(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: The first image in the manifest list that matches the filter will be returned when --keep-manifest-list is not specified oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests when keep-manifest-list is specified oc image append --from docker.io/library/busybox:latest --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests that is specified by the filter, while preserving the manifestlist oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz",
"Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract; pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory (USD(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty (USD(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:]",
"Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64",
"Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only the filtered manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=os/arch # Copy all os/arch manifests of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=.* # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that the target registry may reject a manifest list if the platform specific images do not all exist # You must use a registry with sparse registry support enabled oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=linux/386 --keep-manifest-list=true",
"Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Update imported data for a tag that points to a manifest list to include the full manifest list oc import-image mystream --import-mode=PreserveOriginal # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm",
"Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6",
"Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in \"pod.json\" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar-",
"Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass # Log in to the given server through a browser oc login localhost:8443 --web --callback-port 8280 # Log in to the external OIDC issuer through Auth Code + PKCE by starting a local server listening on port 8080 oc login localhost:8443 --exec-plugin=oc-oidc --client-id=client-id --extra-scopes=email,profile --callback-port=8080",
"Log out oc logout",
"Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container",
"List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Use an image with the full manifest list to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match \"ruby\" oc new-app --search ruby # Search for \"ruby\", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for \"ruby\" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml",
"Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . --image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D USD'FROM centos:7\\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config using an image with the full manifest list to create an app and override application artifacts' names oc new-build --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp",
"Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name=\"Web Team Development\" --description=\"Development project for the web team.\"",
"Observe changes to services oc observe services # Observe changes to services, including the clusterIP and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh",
"Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p USD'spec:\\n unschedulable: true' # Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch oc patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]' # Update a deployment's replicas through the 'scale' subresource using a merge patch oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\":2}}'",
"List all available plugins oc plugin list",
"Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z serviceaccount1",
"Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml",
"Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml",
"Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000",
"Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f -",
"Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project",
"List all projects oc projects",
"To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api",
"Log in to the integrated registry oc registry login # Log in to different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS",
"Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*USD/\\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json",
"Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json",
"Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx",
"View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3",
"Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json",
"Mark the nginx deployment as paused. Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx",
"Restart all deployments in test-namespace namespace oc rollout restart deployment -n test-namespace # Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc # Restart deployments with the app=nginx label oc rollout restart deployment --selector=app=nginx",
"Resume an already paused deployment oc rollout resume dc/nginx",
"Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend",
"Watch the status of the latest rollout oc rollout status dc/nginx",
"Roll back to the previous deployment oc rollout undo dc/nginx # Roll back to deployment revision 3. The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3",
"Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/scheduled",
"Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir",
"Start a nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container oc run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\" # Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container oc run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>",
"Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in \"foo.yaml\" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/example1 rc/example2 rc/example3 # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web",
"Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount",
"Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name",
"Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script=\"/var/lib/test-image.sh param1 param2 && /var/lib/done.sh\"",
"Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret",
"Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir",
"Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh",
"Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers=\"c1\" ENV- # Remove the environment variable ENV from a deployment config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server oc set env | grep RAILS_ | oc env -e - dc/myapp",
"Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment config's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in YAML format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml",
"Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all",
"Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30",
"Set a deployments nginx container CPU limits to \"200m and memory to 512Mi\" oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml",
"Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10%% relative to a oc set route-backends web --adjust b=+10%% # Set traffic percentage going to b to 10%% of the traffic going to a oc set route-backends web --adjust b=10%% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the weight to all backends to zero oc set route-backends web --zero",
"Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f -",
"Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml",
"Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml",
"Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main",
"List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (PVC) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount \"v1\" from container \"c1\" # (and by removing the volume \"v1\" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) oc set volume dc/myapp --add -m /data --source=<json-string>",
"Starts build from build config \"hello-world\" oc start-build hello-world # Starts build from a previous build \"hello-world-1\" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config \"hello-world\" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config \"hello-world\" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait",
"See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest",
"Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Tag an external container image and include the full manifest list oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --import-mode=PreserveOriginal # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d",
"Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context in JSON format oc version --output json # Print the OpenShift client version information for the current context oc version --client",
"Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity) oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod \"busybox1\" to contain the status phase to be \"Running\" oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1 # Wait for pod \"busybox1\" to be Ready oc wait --for='jsonpath={.status.conditions[?(@.type==\"Ready\")].status}=True' pod/busybox1 # Wait for the service \"loadbalancer\" to have ingress. oc wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer # Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s",
"Display the currently authenticated user oc whoami",
"Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all",
"Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageDigestMirrorSet.yaml # Edit the mirroring mappings and mirror with \"oc image mirror\" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageDigestMirrorSets generated by oc adm catalog mirror oc delete imagedigestmirrorset -l operators.openshift.org/catalog=true",
"Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp",
"Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp",
"Copy a new bootstrap kubeconfig file to node-0 oc adm copy-to-node --copy=new-bootstrap-kubeconfig=/etc/kubernetes/kubeconfig node/node-0",
"Mark node \"foo\" as unschedulable oc adm cordon foo",
"Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml",
"Output a template for the error page to stdout oc adm create-error-template",
"Output a template for the login page to stdout oc adm create-login-template",
"Output a template for the provider selection page to stdout oc adm create-provider-selection-template",
"Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900",
"Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2",
"Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name",
"Prune all orphaned groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm groups prune --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm groups prune --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm",
"Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2",
"Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in an allowlist file with an LDAP server oc adm groups sync --whitelist=/path/to/allowlist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm",
"Collect debugging data for the \"openshift-apiserver\" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the \"openshift-apiserver\" and \"kube-apiserver\" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions",
"Update the imagecontentsourcepolicy.yaml file to a new imagedigestmirrorset file under the mydir directory oc adm migrate icsp imagecontentsourcepolicy.yaml --dest-dir mydir",
"Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm",
"Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod directory oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh",
"Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east'",
"Create the ISO image and download it in the current folder oc adm node-image create # Use a different assets folder oc adm node-image create --dir=/tmp/assets # Specify a custom image name oc adm node-image create -o=my-node.iso # Create an ISO to add a single node without using the configuration file oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb # Create an ISO to add a single node with a root device hint and without # using the configuration file oc adm node-image create --mac-address=00:d8:e7:c7:4b:bb --root-device-hint=deviceName:/dev/sda",
"Monitor a single node being added to a cluster oc adm node-image monitor --ip-addresses 192.168.111.83 # Monitor multiple nodes being added to a cluster by separating each IP address with a comma oc adm node-image monitor --ip-addresses 192.168.111.83,192.168.111.84",
"Show kubelet logs from all control plane nodes oc adm node-logs --role master -u kubelet # See what logs are available in control plane nodes in /var/log oc adm node-logs --role master --path=/ # Display cron log file from all control plane nodes oc adm node-logs --role master --path=cron",
"Watch platform certificates oc adm ocp-certificates monitor-certificates",
"Regenerate a leaf certificate contained in a particular secret oc adm ocp-certificates regenerate-leaf -n openshift-config-managed secret/kube-controller-manager-client-cert-key",
"Regenerate the MCO certs without modifying user-data secrets oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server",
"Regenerate the signing certificate contained in a particular secret oc adm ocp-certificates regenerate-top-level -n openshift-kube-apiserver-operator secret/loadbalancer-serving-signer-key",
"Remove a trust bundled contained in a particular config map oc adm ocp-certificates remove-old-trust -n openshift-config-managed configmaps/kube-apiserver-aggregator-client-ca --created-before 2023-06-05T14:44:06Z # Remove only CA certificates created before a certain date from all trust bundles oc adm ocp-certificates remove-old-trust configmaps -A --all --created-before 2023-06-05T14:44:06Z",
"Regenerate the MCO certs without modifying user-data secrets oc adm ocp-certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server",
"Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own isolated project network oc adm pod-network isolate-projects --selector='name=top-secret'",
"Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret'",
"Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share'",
"Add the 'cluster-admin' cluster role to the 'cluster-admins' group oc adm policy add-cluster-role-to-group cluster-admin cluster-admins",
"Add the 'system:build-strategy-docker' cluster role to the 'devuser' user oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser",
"Add the 'view' role to user1 for the current project oc adm policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc adm policy add-role-to-user edit -z serviceaccount1",
"Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2",
"Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1",
"Remove the 'cluster-admin' cluster role from the 'cluster-admins' group oc adm policy remove-cluster-role-from-group cluster-admin cluster-admins",
"Remove the 'system:build-strategy-docker' cluster role from the 'devuser' user oc adm policy remove-cluster-role-from-user system:build-strategy-docker devuser",
"Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc adm policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc adm policy scc-review -f myresource_with_no_sa.yaml",
"Check whether user bob can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc adm policy scc-subject-review -f myresourcewithsa.yaml",
"Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm",
"Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments --keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm",
"Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm prune groups --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm prune groups --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm",
"See what the prune command would delete if only images and their referrers were more than an hour old # and obsoleted by 3 newer revisions under the same tag were considered oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm # See what the prune command would delete if we are interested in removing images # exceeding currently set limit ranges ('openshift.io/Image') oc adm prune images --prune-over-size-limit # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --prune-over-size-limit --confirm # Force the insecure HTTP protocol with the particular registry host name oc adm prune images --registry-url=http://registry.example.org --confirm # Force a secure connection with a custom certificate authority to the particular registry host name oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm",
"See what the prune command would delete if run with no options oc adm prune renderedmachineconfigs # To actually perform the prune operation, the confirm flag must be appended oc adm prune renderedmachineconfigs --confirm # See what the prune command would delete if run on the worker MachineConfigPool oc adm prune renderedmachineconfigs --pool-name=worker # Prunes 10 oldest rendered MachineConfigs in the cluster oc adm prune renderedmachineconfigs --count=10 --confirm # Prunes 10 oldest rendered MachineConfigs in the cluster for the worker MachineConfigPool oc adm prune renderedmachineconfigs --count=10 --pool-name=worker --confirm",
"List all rendered MachineConfigs for the worker MachineConfigPool in the cluster oc adm prune renderedmachineconfigs list --pool-name=worker # List all rendered MachineConfigs in use by the cluster's MachineConfigPools oc adm prune renderedmachineconfigs list --in-use",
"Reboot all MachineConfigPools oc adm reboot-machine-config-pool mcp/worker mcp/master # Reboot all MachineConfigPools that inherit from worker. This include all custom MachineConfigPools and infra. oc adm reboot-machine-config-pool mcp/worker # Reboot masters oc adm reboot-machine-config-pool mcp/master",
"Use git to check out the source code for the current cluster release to DIR oc adm release extract --git=DIR # Extract cloud credential requests for AWS oc adm release extract --credentials-requests --cloud=aws # Use git to check out the source code for the current cluster release to DIR from linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x",
"Show information about the cluster's current release oc adm release info # Show the source code that comprises a release oc adm release info 4.11.2 --commit-urls # Show the source code difference between two releases oc adm release info 4.11.0 4.11.2 --commits # Show where the images referenced by the release are located oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --pullspecs # Show information about linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x",
"Perform a dry run showing what would be mirrored, including the mirror objects oc adm release mirror 4.11.0 --to myregistry.local/openshift/release --release-image-signature-to-dir /tmp/releases --dry-run # Mirror a release into the current directory oc adm release mirror 4.11.0 --to file://openshift/release --release-image-signature-to-dir /tmp/releases # Mirror a release to another directory in the default location oc adm release mirror 4.11.0 --to-dir /tmp/releases # Upload a release from the current directory to another server oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release --release-image-signature-to-dir /tmp/releases # Mirror the 4.11.0 release to repository registry.example.com and apply signatures to connected cluster oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64 --to=registry.example.com/your/repository --apply-release-image-signature",
"Create a release from the latest origin images and push to a DockerHub repository oc adm release new --from-image-stream=4.11 -n origin --to-image docker.io/mycompany/myrepo:latest # Create a new release with updated metadata from a previous release oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 --name 4.11.1 --previous 4.11.0 --metadata ... --to-image docker.io/mycompany/myrepo:latest # Create a new release and override a single image oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest # Run a verification pass to ensure the release can be reproduced oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11",
"Restart all the nodes, 10% at a time oc adm restart-kubelet nodes --all --directive=RemoveKubeletKubeconfig # Restart all the nodes, 20 nodes at a time oc adm restart-kubelet nodes --all --parallelism=20 --directive=RemoveKubeletKubeconfig # Restart all the nodes, 15% at a time oc adm restart-kubelet nodes --all --parallelism=15% --directive=RemoveKubeletKubeconfig # Restart all the masters at the same time oc adm restart-kubelet nodes -l node-role.kubernetes.io/master --parallelism=100% --directive=RemoveKubeletKubeconfig",
"Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified oc adm taint nodes foo dedicated=special-user:NoSchedule # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists oc adm taint nodes foo dedicated:NoSchedule- # Remove from node 'foo' all the taints with key 'dedicated' oc adm taint nodes foo dedicated- # Add a taint with key 'dedicated' on nodes having label myLabel=X oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule # Add to node 'foo' a taint with key 'bar' and no value oc adm taint nodes foo bar:NoSchedule",
"Show usage statistics for images oc adm top images",
"Show usage statistics for image streams oc adm top imagestreams",
"Show metrics for all nodes oc adm top node # Show metrics for a given node oc adm top node NODE_NAME",
"Show metrics for all pods in the default namespace oc adm top pod # Show metrics for all pods in the given namespace oc adm top pod --namespace=NAMESPACE # Show metrics for a given pod and its containers oc adm top pod POD_NAME --containers # Show metrics for the pods defined by label name=myLabel oc adm top pod -l name=myLabel",
"Mark node \"foo\" as schedulable oc adm uncordon foo",
"View the update status and available cluster updates oc adm upgrade # Update to the latest version oc adm upgrade --to-latest=true",
"Verify the image signature and identity using the local GPG keychain oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 # Verify the image signature and identity using the local GPG keychain and save the status oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --save # Verify the image signature and identity via exposed registry route oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --registry-url=docker-registry.foo.com # Remove all signature verifications from the image oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all",
"Wait for all nodes to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/worker mcp/master' oc adm wait-for-node-reboot nodes --all # Wait for masters to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/master' oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master # Wait for masters to complete a specific reboot oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master --reboot-number=4",
"Wait for all cluster operators to become stable oc adm wait-for-stable-cluster # Consider operators to be stable if they report as such for 5 minutes straight oc adm wait-for-stable-cluster --minimum-stable-period 5m"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/cli_tools/openshift-cli-oc |
23.5. Creating an LDIF File with Nested Example Entries | 23.5. Creating an LDIF File with Nested Example Entries Use the dsctl ldifgen nested command to create an LDIF file that contains a heavily nested cascading fractal structure. For example, to create an LDIF file named /tmp/nested.nldif , that adds 600 users in total in different organization units (OU) under the dc=example,dc=com entry, with each OU containing a maximum number of 100 users: For further details about the options, enter: | [
"dsctl instance_name ldifgen nested --num-users 600 --node-limit 100 --suffix \"dc=example,dc=com\"",
"dsctl instance_name ldifgen nested --help"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/crating-an-ldif-file-with-nested-example-entries |
Chapter 3. KVM Guest Virtual Machine Compatibility | Chapter 3. KVM Guest Virtual Machine Compatibility To verify whether your processor supports the virtualization extensions and for information on enabling the virtualization extensions if they are disabled, refer to the Red Hat Enterprise Linux Virtualization Administration Guide . 3.1. Red Hat Enterprise Linux 6 Support Limits Red Hat Enterprise Linux 6 servers have certain support limits. The following URLs explain the processor and memory amount limitations for Red Hat Enterprise Linux: For host systems: http://www.redhat.com/resourcelibrary/articles/articles-red-hat-enterprise-linux-6-technology-capabilities-and-limits For hypervisors: http://www.redhat.com/resourcelibrary/articles/virtualization-limits-rhel-hypervisors Note Red Hat Enterprise Linux 6.5 now supports 4TiB of memory per KVM guest. The following URL is a complete reference showing supported operating systems and host and guest combinations: http://www.redhat.com/resourcelibrary/articles/enterprise-linux-virtualization-support | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/chap-virtualization_host_configuration_and_guest_installation_guide-kvm_compatibility |
Chapter 2. Installing Red Hat Insights for Red Hat Enterprise Linux (RHEL) | Chapter 2. Installing Red Hat Insights for Red Hat Enterprise Linux (RHEL) This document provides starting points and resources for registering systems to Red Hat Insights for Red Hat Enterprise Linux. Installation of Red Hat Insights typically involves installing the Insights client, then registering systems for use with Insights. You can use different methods to register and install Insights. A registration assistant is also available to guide you through the process of registering and installing Insights. You can also use the Remote Host Configuration (RHC) tool. The installation method you use can depend on conditions such as, Whether you are connecting to Red Hat for the first time Whether you use a certain version of RHEL Whether you want to do an automated installation or manual install Other factors 2.1. Installing Red Hat Insights on Red Hat Enterprise Linux Satellite-managed hosts To install Insights on Red Hat Enterprise Linux hosts managed by Red Hat Satellite, see: Creating a Host in Red Hat Satellite Using Ansible roles to automate repetitive tasks on clients Monitoring Hosts Using Red Hat Insights 2.2. Registering and configuring Satellite Server integration with FedRAMP Before you can use Insights with your server, you need to connect your servers to the Satellite Server. The Satellite Server enables your servers to communicate with Red Hat Insights. An IP address-based allow list restricts network access to the Insights service. This ensures that only the servers and ports that you specify can connect to the Satellite Server. Note Red Hat Insights subscription services are currently not available in the FedRAMP environment. Red Hat continuously evaluates service offerings, and will announce any updates or expansions to the FedRAMP environment as they become available. Note The following requirements are in addition to existing Satellite Server connectivity requirements to the Red Hat Content Delivery Network and Red Hat Subscription Management (RHSM) for software updates. For more information about connectivity requirements, refer to How to access Red Hat Subscription Manager (RHSM) through a firewall or proxy . Prerequisites The Satellite Server must be able to connect to the domain mtls.console.stage.openshiftusgov.com , using the HTTPS protocol on port 443. You must provide a static public egress IP address (or address range) from which Satellite traffic will originate. Note Contact Red Hat Support to set up the public egress IP address. The public egress IP address is an additional IP address on the primary network interface of your server. You are logged in to the Hybrid Cloud Console ( https://console.openshiftusgov.com ) as an Organization Administrator. You have administrator ssh access to the Satellite server. You are logged in to the Satellite Server using ssh . Procedure From the main menu, navigate to Inventory > Configure Satellites . The Configure Satellites page displays. Click Generate Token to create the registration token for your organization. Copy the token. Open a terminal window on your Satellite Server and enter the following command: # hammer organization list The system returns your organization ID. Make note of it for the step. Copy the command shown in Step 3 on the Configure Satellites page. Paste it into the terminal. Substitute the organization ID for <organization_id> . 
# SATELLITE_RH_CLOUD_URL=https://mtls.console.openshiftusgov.com org_id=<organization_id> foreman-rake rh_cloud:hybridcloud_register The system returns a prompt for the token that you generated. Paste the generated token that you copied at the prompt and press Enter . The system returns a success message. You can now register the system with Satellite and run insights-client . Additional resources Hammer CLI Guide Client Configuration Guide for Insights with FedRAMP Installing the Satellite server in a disconnected network 2.3. Managing trusted IP addresses with an IP allowlist Before you can connect Insights to your Satellite servers, you need to configure an allowlist that contains a trusted IP address (or range of IP addresses). You can configure the allowlist in two ways: Provide the trusted IP address (or addresses) to Red Hat stateside support during onboarding. Support uses the IP addresses to configure an allowlist for Insights. This allowlist allows network traffic from your Satellite-controlled environment into Insights. To configure the allowlist, contact stateside support through ServiceNow and mention that you want to connect your satellite servers to Insights. If you have not created the allowlist during onboarding, use the IP allowlist in the Manage Satellites page in the Red Hat Hybrid Cloud Console to manually add trusted IP addresses. 2.3.1. Adding trusted IP addresses to an allowlist You can use Manage Satellites to create an allowlist, or add an IP address (or a range of IP addresses) to an existing allowlist. Adding IP addresses enables additional FedRAMP users in your organization to access the Red Hat Hybrid Cloud Console. Note Manage Satellites allows only IPv4 addresses. It does not support IPv6 addresses. To add a range of IP addresses, use CIDR notation (for example, 226.167.71.76/32). Prerequisites You have Organization Administrator permissions. You are logged in to the Hybrid Cloud Console. Procedure Click Manage Satellites . The Manage Satellites page displays. Scroll down the page to the IP Address Allowlist section at the bottom. Click Add IP Addresses . The Add IP Addresses to Allowlist dialog box displays. Type an IP address (or range of IP addresses) and click Submit . The IP addresses appear on the allowlist. 2.3.2. Removing IP addresses from the allowlist Prerequisites You have Organization Administrator permissions. You are logged in to the Hybrid Cloud Console. You have an IP allowlist configured. You have added at least one IP address (or range of IP addresses) to the allowlist. Procedure Click Manage Satellites . The Manage Satellites page displays. Scroll down the page to the IP Address Allowlist section at the bottom. Select the IP address you want to remove, and then click Remove . The Remove IP Addresses from Allowlist dialog box displays. Click Remove , and then click Submit . Additional resources For more information about the Insights onboarding process, refer to Registering and managing Satellite server integration with FedRAMP . For more information about using Manage Satellites to connect to Satellite servers, see Registering and managing Satellite server integration with FedRAMP | [
"hammer organization list",
"SATELLITE_RH_CLOUD_URL=https://mtls.console.openshiftusgov.com org_id=<organization_id> foreman-rake rh_cloud:hybridcloud_register"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/getting_started_with_red_hat_insights_with_fedramp/install-for-rhel |
B.3. Create a Private/Public Key Pair with Keytool | B.3. Create a Private/Public Key Pair with Keytool Procedure B.1. Create a Private/Public Key Pair with Keytool Run the keytool -genkey -alias ALIAS -keyalg ALGORITHM -validity DAYS -keystore server.keystore -storetype TYPE command: If the specified keystore already exists, enter the existing password for that keystore, otherwise enter a new password: Answer the following questions when prompted: Enter yes to confirm the provided information is correct: Enter your desired keystore password: Result The server.keystore file contains the newly generated public and private key pair. | [
"keytool -genkey -alias teiid -keyalg RSA -validity 365 -keystore server.keystore -storetype JKS",
"Enter keystore password: <password>",
"What is your first and last name? [Unknown]: <userA¢A\\u0080A\\u0099s name> What is the name of your organizational unit? [Unknown]: <department name> What is the name of your organization? [Unknown]: <company name> What is the name of your City or Locality? [Unknown]: <city name> What is the name of your State or Province? [Unknown]: <state name> What is the two-letter country code for this unit? [Unknown]: <country name>",
"Is CN=<userA¢A\\u0080A\\u0099s name>, OU=<department name>, O=\"<company name>\", L=<city name>, ST=<state name>, C=<country name> correct? [no]: yes",
"Enter key password for <server> (Return if same as keystore password)"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/create_a_privatepublic_key_pair_with_keytool1 |
40.5.2. Using opreport on a Single Executable | 40.5.2. Using opreport on a Single Executable To retrieve more detailed profiled information about a specific executable, use opreport : <executable> must be the full path to the executable to be analyzed. <mode> must be one of the following: -l List sample data by symbols. For example, the following is part of the output from running the command opreport -l /lib/tls/libc- <version> .so : The first column is the number of samples for the symbol, the second column is the percentage of samples for this symbol relative to the overall samples for the executable, and the third column is the symbol name. To sort the output from the largest number of samples to the smallest (reverse order), use -r in conjunction with the -l option. -i <symbol-name> List sample data specific to a symbol name. For example, the following output is from the command opreport -l -i __gconv_transform_utf8_internal /lib/tls/libc- <version> .so : The first line is a summary for the symbol/executable combination. The first column is the number of samples for the memory symbol. The second column is the percentage of samples for the memory address relative to the total number of samples for the symbol. The third column is the symbol name. -d List sample data by symbols with more detail than -l . For example, the following output is from the command opreport -l -d __gconv_transform_utf8_internal /lib/tls/libc- <version> .so : The data is the same as the -l option except that for each symbol, each virtual memory address used is shown. For each virtual memory address, the number of samples and percentage of samples relative to the number of samples for the symbol is displayed. -x <symbol-name> Exclude the comma-separated list of symbols from the output. session : <name> Specify the full path to the session or a directory relative to the /var/lib/oprofile/samples/ directory. | [
"opreport <mode> <executable>",
"samples % symbol name 12 21.4286 __gconv_transform_utf8_internal 5 8.9286 _int_malloc 4 7.1429 malloc 3 5.3571 __i686.get_pc_thunk.bx 3 5.3571 _dl_mcount_wrapper_check 3 5.3571 mbrtowc 3 5.3571 memcpy 2 3.5714 _int_realloc 2 3.5714 _nl_intern_locale_data 2 3.5714 free 2 3.5714 strcmp 1 1.7857 __ctype_get_mb_cur_max 1 1.7857 __unregister_atfork 1 1.7857 __write_nocancel 1 1.7857 _dl_addr 1 1.7857 _int_free 1 1.7857 _itoa_word 1 1.7857 calc_eclosure_iter 1 1.7857 fopen@@GLIBC_2.1 1 1.7857 getpid 1 1.7857 memmove 1 1.7857 msort_with_tmp 1 1.7857 strcpy 1 1.7857 strlen 1 1.7857 vfprintf 1 1.7857 write",
"samples % symbol name 12 100.000 __gconv_transform_utf8_internal",
"vma samples % symbol name 00a98640 12 100.000 __gconv_transform_utf8_internal 00a98640 1 8.3333 00a9868c 2 16.6667 00a9869a 1 8.3333 00a986c1 1 8.3333 00a98720 1 8.3333 00a98749 1 8.3333 00a98753 1 8.3333 00a98789 1 8.3333 00a98864 1 8.3333 00a98869 1 8.3333 00a98b08 1 8.3333"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/analyzing_the_data-using_opreport_on_a_single_executable |
Chapter 1. Red Hat Advanced Cluster Security for Kubernetes 4.5 Documentation | Chapter 1. Red Hat Advanced Cluster Security for Kubernetes 4.5 Documentation Welcome to the official Red Hat Advanced Cluster Security for Kubernetes documentation, where you can learn about Red Hat Advanced Cluster Security for Kubernetes and start exploring its features. To go to the Red Hat Advanced Cluster Security for Kubernetes documentation, you can use one of the following methods: Use the left navigation bar to browse the documentation. Select the task that interests you from the contents of this Welcome page. 1.1. Installation activities Understanding installation methods for different platforms : Determine the best installation method for your product and platform. 1.2. Operating Red Hat Advanced Cluster Security for Kubernetes Explore various activities you can perform by using Red Hat Advanced Cluster Security for Kubernetes: Viewing the dashboard : Find information about the Red Hat Advanced Cluster Security for Kubernetes real-time interactive dashboard. Learn how to use it to view key metrics from all your hosts, containers, and services. Compliance feature overview : Understand how to run automated checks and validate compliance based on industry standards, including CIS, NIST, PCI, and HIPAA. Managing vulnerabilities : Learn how to identify and prioritize vulnerabilities for remediation. Responding to violations : Learn how to view policy violations, drill down to the actual cause of the violation, and take corrective actions. 1.3. Configuring Red Hat Advanced Cluster Security for Kubernetes Explore the following typical configuration tasks in Red Hat Advanced Cluster Security for Kubernetes: Adding custom certificates : Learn how to use a custom TLS certificate with Red Hat Advanced Cluster Security for Kubernetes. After you set up a certificate, users and API clients do not have to bypass the certificate security warnings. Backing up Red Hat Advanced Cluster Security for Kubernetes : Learn how to perform manual and automated data backups for Red Hat Advanced Cluster Security for Kubernetes and use these backups for data restoration in the case of an infrastructure disaster or corrupt data. Configuring automatic upgrades for secured clusters : Stay up to date by automating the upgrade process for each secured cluster. 1.4. Integrating with other products Learn how to integrate Red Hat Advanced Cluster Security for Kubernetes with the following products: Integrating with PagerDuty : Learn how to integrate with PagerDuty and forward alerts from Red Hat Advanced Cluster Security for Kubernetes to PagerDuty. Integrating with Slack : Learn how to integrate with Slack and forward alerts from Red Hat Advanced Cluster Security for Kubernetes to Slack. Integrating with Sumo Logic : Learn how to integrate with Sumo Logic and forward alerts from Red Hat Advanced Cluster Security for Kubernetes to Sumo Logic. Integrating by using the syslog protocol : Learn how to integrate with a security information and event management (SIEM) system or a syslog collector for data retention and security investigations. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/about/welcome-index |
Chapter 10. Support policies | Chapter 10. Support policies Supported for certain RHEL releases: RHEL for SAP Solutions follows the general RHEL product lifecycle and related policies . Important SAP defines its own release strategy regarding the support of operating systems and operating system versions. For SAP NetWeaver-based solutions, refer to the SAP Product Availability Matrix . For SAP HANA, see SAP Note 2235581 . For general information, see SAP Note 2369910 . Production environments must comply with Red Hat and SAP support conditions. Additional SAP certifications may apply. Intel Optane DC Persistent Memory File System DAX support: Red Hat fully supports Intel Optane DC Persistent Memory (pMEM) File System DAX (FS-DAX) as part of RHEL for SAP Solutions for production deployments of SAP HANA 2.0 SPS 04, revision 40 (or later). For more information, see Red Hat fully supports persistent memory (pMEM) FS-DAX mode in RHEL 7.6 and later versions for SAP Solutions . Support for RHEL HA clusters, as part of RHEL for SAP Solutions: The RHEL for SAP Solutions subscription includes the Red Hat Enterprise Linux (RHEL) High Availability Add-on. Users of RHEL High Availability clusters should adhere to general Support Policies for RHEL High Availability Clusters in order to be eligible for support. In addition, RHEL for SAP Solutions provides resource agents, scripts and documentation for integration with & support of the following SAP applications and scenarios: Red Hat HA Solutions for SAP HANA, S/4HANA and NetWeaver based SAP Applications . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/8.x_release_notes/support_policies_8.x_release_notes |
2.15. The Virtual Database | 2.15. The Virtual Database The critical artifact that Teiid Designer is intended to manage is the VDB, or Virtual DataBase. Through the JBoss Data Virtualization server, VDB's behave like standard JDBC database schema which can be connected to, queried and updated based on how the VDB is configured. Since VDB's are just databases once they are deployed, they can be used as sources to other view model transformations. This allows creating and deploying re-usable or common VDB's in multiple layers depending on your business needs. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/the_virtual_database |
Chapter 13. Volume cloning | Chapter 13. Volume cloning A clone is a duplicate of an existing storage volume that is used as any standard volume. You create a clone of a volume to make a point in time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD). 13.1. Creating a clone Prerequisites Source PVC must be in Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Procedure Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) Clone PVC . Click on the PVC that you want to clone and click Actions Clone PVC . Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Click Clone . You are redirected to the new PVC details page. Wait for the cloned PVC status to become Bound . The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/volume-cloning_osp |
20.10. Resource Partitioning | 20.10. Resource Partitioning Hypervisors may allow for virtual machines to be placed into resource partitions, potentially with nesting of said partitions. The <resource> element groups together configuration related to resource partitioning. It currently supports a child element partition whose content defines the path of the resource partition in which to place the domain. If no partition is listed, then the domain will be placed in a default partition. It is the responsibility of the app/admin to ensure that the partition exists prior to starting the guest virtual machine. Only the (hypervisor specific) default partition can be assumed to exist by default. <resource> <partition>/virtualmachines/production</partition> </resource> Figure 20.12. Resource partitioning Resource partitions are currently supported by the QEMU and LXC drivers, which map partition paths to cgroups directories in all mounted controllers. | [
"<resource> <partition>/virtualmachines/production</partition> </resource>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-libvirt-dom-xml-res-part |
Chapter 81. ExternalConfiguration schema reference | Chapter 81. ExternalConfiguration schema reference Used in: KafkaConnectSpec , KafkaMirrorMaker2Spec Full list of ExternalConfiguration schema properties Configures external storage properties that define configuration options for Kafka Connect connectors. You can mount ConfigMaps or Secrets into a Kafka Connect pod as environment variables or volumes. Volumes and environment variables are configured in the externalConfiguration property in KafkaConnect.spec or KafkaMirrorMaker2.spec . When applied, the environment variables and volumes are available for use when developing your connectors. For more information, see Loading configuration values from external sources . 81.1. ExternalConfiguration schema properties Property Description env Makes data from a Secret or ConfigMap available in the Kafka Connect pods as environment variables. ExternalConfigurationEnv array volumes Makes data from a Secret or ConfigMap available in the Kafka Connect pods as volumes. ExternalConfigurationVolumeSource array | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-externalconfiguration-reference |
Chapter 8. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation | Chapter 8. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra or have both roles. See the Section 8.3, "Manual creation of infrastructure nodes" section for more information. 8.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non OpenShift Data Foundation resources to be scheduled on the tainted nodes. Note Adding storage taint on nodes might require toleration handling for the other daemonset pods such as openshift-dns daemonset . For information about how to manage the tolerations, see Knowledgebase article: https://access.redhat.com/solutions/6592171 . Example of the taint and labels required on infrastructure node that will be used to run OpenShift Data Foundation services: 8.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 8.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources. 
To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node will only schedule OpenShift Data Foundation resources and repel any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role node-role.kubernetes.io/worker="" The removal of the node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding node-role node-role.kubernetes.io/infra="" and OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. | [
"spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"",
"adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/how-to-use-dedicated-worker-nodes-for-openshift-data-foundation_rhodf |
Architecture | Architecture OpenShift Container Platform 4.18 An overview of the architecture for OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"openshift-install create ignition-configs --dir USDHOME/testconfig",
"cat USDHOME/testconfig/bootstrap.ign | jq { \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa AAAAB3NzaC1yc....\" ] } ] }, \"storage\": { \"files\": [ { \"overwrite\": false, \"path\": \"/etc/motd\", \"user\": { \"name\": \"root\" }, \"append\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==\" } ], \"mode\": 420 },",
"echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode",
"This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service",
"\"source\": \"https://api.myign.develcluster.example.com:22623/config/worker\",",
"USD oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False",
"oc get machineconfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m",
"oc describe machineconfigs 01-worker-container-runtime | grep Path:",
"Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf",
"apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: None",
"apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: Unknown",
"oc new-project my-webhook-namespace 1",
"apiVersion: v1 kind: List items: - apiVersion: rbac.authorization.k8s.io/v1 1 kind: ClusterRoleBinding metadata: name: auth-delegator-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:auth-delegator subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRole metadata: annotations: name: system:openshift:online:my-webhook-server rules: - apiGroups: - online.openshift.io resources: - namespacereservations 3 verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRole metadata: name: system:openshift:online:my-webhook-requester rules: - apiGroups: - admission.online.openshift.io resources: - namespacereservations 5 verbs: - create - apiVersion: rbac.authorization.k8s.io/v1 6 kind: ClusterRoleBinding metadata: name: my-webhook-server-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:openshift:online:my-webhook-server subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 7 kind: RoleBinding metadata: namespace: kube-system name: extension-server-authentication-reader-my-webhook-namespace roleRef: kind: Role apiGroup: rbac.authorization.k8s.io name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 8 kind: ClusterRole metadata: name: my-cluster-role rules: - apiGroups: - admissionregistration.k8s.io resources: - validatingwebhookconfigurations - mutatingwebhookconfigurations verbs: - get - list - watch - apiGroups: - \"\" resources: - namespaces verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: my-cluster-role roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: my-cluster-role subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server",
"oc auth reconcile -f rbac.yaml",
"apiVersion: apps/v1 kind: DaemonSet metadata: namespace: my-webhook-namespace name: server labels: server: \"true\" spec: selector: matchLabels: server: \"true\" template: metadata: name: server labels: server: \"true\" spec: serviceAccountName: server containers: - name: my-webhook-container 1 image: <image_registry_username>/<image_path>:<tag> 2 imagePullPolicy: IfNotPresent command: - <container_commands> 3 ports: - containerPort: 8443 4 volumeMounts: - mountPath: /var/serving-cert name: serving-cert readinessProbe: httpGet: path: /healthz port: 8443 5 scheme: HTTPS volumes: - name: serving-cert secret: defaultMode: 420 secretName: server-serving-cert",
"oc apply -f webhook-daemonset.yaml",
"apiVersion: v1 kind: Secret metadata: namespace: my-webhook-namespace name: server-serving-cert type: kubernetes.io/tls data: tls.crt: <server_certificate> 1 tls.key: <server_key> 2",
"oc apply -f webhook-secret.yaml",
"apiVersion: v1 kind: List items: - apiVersion: v1 kind: ServiceAccount metadata: namespace: my-webhook-namespace name: server - apiVersion: v1 kind: Service metadata: namespace: my-webhook-namespace name: server annotations: service.beta.openshift.io/serving-cert-secret-name: server-serving-cert spec: selector: server: \"true\" ports: - port: 443 1 targetPort: 8443 2",
"oc apply -f webhook-service.yaml",
"apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: namespacereservations.online.openshift.io 1 spec: group: online.openshift.io 2 version: v1alpha1 3 scope: Cluster 4 names: plural: namespacereservations 5 singular: namespacereservation 6 kind: NamespaceReservation 7",
"oc apply -f webhook-crd.yaml",
"apiVersion: apiregistration.k8s.io/v1beta1 kind: APIService metadata: name: v1beta1.admission.online.openshift.io spec: caBundle: <ca_signing_certificate> 1 group: admission.online.openshift.io groupPriorityMinimum: 1000 versionPriority: 15 service: name: server namespace: my-webhook-namespace version: v1beta1",
"oc apply -f webhook-api-service.yaml",
"apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration metadata: name: namespacereservations.admission.online.openshift.io 1 webhooks: - name: namespacereservations.admission.online.openshift.io 2 clientConfig: service: 3 namespace: default name: kubernetes path: /apis/admission.online.openshift.io/v1beta1/namespacereservations 4 caBundle: <ca_signing_certificate> 5 rules: - operations: - CREATE apiGroups: - project.openshift.io apiVersions: - \"*\" resources: - projectrequests - operations: - CREATE apiGroups: - \"\" apiVersions: - \"*\" resources: - namespaces failurePolicy: Fail",
"oc apply -f webhook-config.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/architecture/index |
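A quick way to confirm that the webhook server deployed by the preceding manifests is healthy is to query the aggregated API service and the server pods. The following is a minimal sketch that assumes the names used in the example manifests (the my-webhook-namespace namespace, the server daemon set, and the v1beta1.admission.online.openshift.io APIService):

# Check that the aggregated API service reports Available=True
oc get apiservice v1beta1.admission.online.openshift.io

# Check that the webhook server pods created by the daemon set are running
oc get pods -n my-webhook-namespace -l server=true

# Inspect the server logs if the APIService does not become available
oc logs -n my-webhook-namespace daemonset/server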
Chapter 9. Repository Notifications | Chapter 9. Repository Notifications Quay.io supports adding notifications to a repository for various events that occur in the repository's lifecycle. 9.1. Creating notifications Use the following procedure to add notifications. Prerequisites You have created a repository. You have administrative privileges for the repository. Procedure Navigate to a repository on Quay.io. In the navigation pane, click Settings . In the Events and Notifications category, click Create Notification to add a new notification for a repository event. You are redirected to a Create repository notification page. On the Create repository notification page, select the drop-down menu to reveal a list of events. You can select a notification for the following types of events: Push to Repository Dockerfile Build Queued Dockerfile Build Started Dockerfile Build Successfully Completed Docker Build Cancelled Package Vulnerability Found After you have selected the event type, select the notification method. The following methods are supported: Quay Notification E-mail Webhook POST Flowdock Team Notification HipChat Room Notification Slack Room Notification Depending on the method that you choose, you must include additional information. For example, if you select E-mail , you are required to include an e-mail address and an optional notification title. After selecting an event and notification method, click Create Notification . 9.2. Repository events description The following sections detail repository events. 9.2.1. Repository Push A successful push of one or more images was made to the repository: 9.2.2. Dockerfile Build Queued The following example is a response from a Dockerfile Build that has been queued into the Build system. Note Responses can differ based on the use of optional attributes. 9.2.3. Dockerfile Build started The following example is a response from a Dockerfile Build that has been started by the Build system. Note Responses can differ based on the use of optional attributes. 9.2.4. Dockerfile Build successfully completed The following example is a response from a Dockerfile Build that has been successfully completed by the Build system. Note This event occurs simultaneously with a Repository Push event for the built image or images. 9.2.5. Dockerfile Build failed The following example is a response from a Dockerfile Build that has failed. 9.2.6. Dockerfile Build cancelled The following example is a response from a Dockerfile Build that has been cancelled. 9.2.7. Vulnerability detected The following example is a response from a Dockerfile Build that has detected a vulnerability in the repository. 9.3. Notification actions 9.3.1. Notifications added Notifications are added to the Events and Notifications section of the Repository Settings page. They are also added to the Notifications window, which can be found by clicking the bell icon in the navigation pane of Quay.io. Quay.io notifications can be set up to be sent to a User , Team , or the organization as a whole. 9.3.2. E-mail notifications E-mails describing the specified event are sent to the specified addresses. E-mail addresses must be verified on a per-repository basis. 9.3.3. Webhook POST notifications An HTTP POST call is made to the specified URL with the event's data. For more information about event data, see "Repository events description". When the URL is HTTPS, the call has an SSL client certificate set from Quay.io. Verification of this certificate proves that the call originated from Quay.io.
Responses with the status code in the 2xx range are considered successful. Responses with any other status code are considered failures and result in a retry of the webhook notification. 9.3.4. Flowdock notifications Posts a message to Flowdock. 9.3.5. Hipchat notifications Posts a message to HipChat. 9.3.6. Slack notifications Posts a message to Slack. | [
"{ \"name\": \"repository\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"homepage\": \"https://quay.io/repository/dgangaia/repository\", \"updated_tags\": [ \"latest\" ] }",
"{ \"build_id\": \"296ec063-5f86-4706-a469-f0a400bf9df2\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"repo\": \"test\", \"trigger_metadata\": { \"default_branch\": \"master\", \"commit\": \"b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"ref\": \"refs/heads/master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"date\": \"2019-03-06T12:48:24+11:00\", \"message\": \"adding 5\", \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional }, \"committer\": { \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" } } }, \"is_manual\": false, \"manual_user\": null, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/296ec063-5f86-4706-a469-f0a400bf9df2\" }",
"{ \"build_id\": \"a8cc247a-a662-4fee-8dcb-7d7e822b71ba\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"50bc599\", \"trigger_metadata\": { //Optional \"commit\": \"50bc5996d4587fd4b2d8edc4af652d4cec293c42\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/50bc5996d4587fd4b2d8edc4af652d4cec293c42\", \"date\": \"2019-03-06T14:10:14+11:00\", \"message\": \"test build\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/a8cc247a-a662-4fee-8dcb-7d7e822b71ba\" }",
"{ \"build_id\": \"296ec063-5f86-4706-a469-f0a400bf9df2\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"b7f7d2b\", \"image_id\": \"sha256:0339f178f26ae24930e9ad32751d6839015109eabdf1c25b3b0f2abf8934f6cb\", \"trigger_metadata\": { \"commit\": \"b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"date\": \"2019-03-06T12:48:24+11:00\", \"message\": \"adding 5\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/296ec063-5f86-4706-a469-f0a400bf9df2\", \"manifest_digests\": [ \"quay.io/dgangaia/test@sha256:2a7af5265344cc3704d5d47c4604b1efcbd227a7a6a6ff73d6e4e08a27fd7d99\", \"quay.io/dgangaia/test@sha256:569e7db1a867069835e8e97d50c96eccafde65f08ea3e0d5debaf16e2545d9d1\" ] }",
"{ \"build_id\": \"5346a21d-3434-4764-85be-5be1296f293c\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"docker_url\": \"quay.io/dgangaia/test\", \"error_message\": \"Could not find or parse Dockerfile: unknown instruction: GIT\", \"namespace\": \"dgangaia\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"6ae9a86\", \"trigger_metadata\": { //Optional \"commit\": \"6ae9a86930fc73dd07b02e4c5bf63ee60be180ad\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/6ae9a86930fc73dd07b02e4c5bf63ee60be180ad\", \"date\": \"2019-03-06T14:18:16+11:00\", \"message\": \"failed build test\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/5346a21d-3434-4764-85be-5be1296f293c\" }",
"{ \"build_id\": \"cbd534c5-f1c0-4816-b4e3-55446b851e70\", \"trigger_kind\": \"github\", \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"cbce83c\", \"trigger_metadata\": { \"commit\": \"cbce83c04bfb59734fc42a83aab738704ba7ec41\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { \"url\": \"https://github.com/dgangaia/test/commit/cbce83c04bfb59734fc42a83aab738704ba7ec41\", \"date\": \"2019-03-06T14:27:53+11:00\", \"message\": \"testing cancel build\", \"committer\": { \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" }, \"author\": { \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/cbd534c5-f1c0-4816-b4e3-55446b851e70\" }",
"{ \"repository\": \"dgangaia/repository\", \"namespace\": \"dgangaia\", \"name\": \"repository\", \"docker_url\": \"quay.io/dgangaia/repository\", \"homepage\": \"https://quay.io/repository/dgangaia/repository\", \"tags\": [\"latest\", \"othertag\"], \"vulnerability\": { \"id\": \"CVE-1234-5678\", \"description\": \"This is a bad vulnerability\", \"link\": \"http://url/to/vuln/info\", \"priority\": \"Critical\", \"has_fix\": true } }"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/about_quay_io/repository-notifications |
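Before pointing a Webhook POST notification at a production endpoint, it can help to replay one of the documented payloads against a test receiver to see exactly what the consuming service must handle. The following sketch uses the Repository Push payload from this chapter; the target URL is a placeholder:

# Replay the documented repository-push payload against a test endpoint (placeholder URL)
curl -X POST https://example.com/quay-webhook \
  -H 'Content-Type: application/json' \
  -d '{"name": "repository", "repository": "dgangaia/test", "namespace": "dgangaia", "docker_url": "quay.io/dgangaia/test", "homepage": "https://quay.io/repository/dgangaia/repository", "updated_tags": ["latest"]}'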
B. Package Updates | B. Package Updates Important The Red Hat Enterprise Linux 6 Technical Notes compilations for Red Hat Enterprise Linux 6.0, 6.1 and 6.2 have been republished. Each compilation still lists all advisories comprising its respective GA release, including all Fastrack advisories. To more accurately represent the advisories released between minor updates of Red Hat Enterprise Linux, however, some advisories released asynchronously between minor releases have been relocated. Previously, these asynchronously released advisories were published in the Technical Notes for the most recent Red Hat Enterprise Linux minor update. Asynchronous advisories released after the release of Red Hat Enterprise Linux 6.1 and before the release of Red Hat Enterprise Linux 6.2 were published in the Red Hat Enterprise Linux 6.2 Technical Notes, for example. Most of these asynchronous advisories were concerned with, or even specific to, the then extant Red Hat Enterprise Linux release, however. With these republished Technical Notes, such advisories are now incorporated into the Technical Notes for the Red Hat Enterprise Linux release they are associated with. Future Red Hat Enterprise Linux Technical Notes will follow this pattern. On first publication, a Red Hat Enterprise Linux X.y Technical Notes compilation will include the advisories comprising that release along with the Fastrack advisories for the release. Upon the GA of the succeeding Red Hat Enterprise Linux release, the Red Hat Enterprise Linux X.y Technical Notes compilation will be republished to include associated asynchronous advisories released since Red Hat Enterprise Linux X.y GA up until the GA of the successive release. B.1. apr B.1.1. RHSA-2011:0507 - Moderate: apr security update Updated apr packages that fix one security issue are now available for Red Hat Enterprise Linux 4, 5, and 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The Apache Portable Runtime (APR) is a portability library used by the Apache HTTP Server and other projects. It provides a free library of C data structures and routines. CVE-2011-0419 It was discovered that the apr_fnmatch() function used an unconstrained recursion when processing patterns with the '*' wildcard. An attacker could use this flaw to cause an application using this function, which also accepted untrusted input as a pattern for matching (such as an httpd server using the mod_autoindex module), to exhaust all stack memory or use an excessive amount of CPU time when performing matching. Red Hat would like to thank Maksymilian Arciemowicz for reporting this issue. All apr users should upgrade to these updated packages, which contain a backported patch to correct this issue. Applications using the apr library, such as httpd, must be restarted for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/appendix
8.5. Configuring HA Services | 8.5. Configuring HA Services Configuring HA (High Availability) services consists of configuring resources and assigning them to services. The following sections describe how to edit /etc/cluster/cluster.conf to add resources and services. Section 8.5.1, "Adding Cluster Resources" Section 8.5.2, "Adding a Cluster Service to the Cluster" Important There can be a wide range of configurations possible with High Availability resources and services. For a better understanding about resource parameters and resource behavior, see Appendix B, HA Resource Parameters and Appendix C, HA Resource Behavior . For optimal performance and to ensure that your configuration can be supported, contact an authorized Red Hat support representative. 8.5.1. Adding Cluster Resources You can configure two types of resources: Global - Resources that are available to any service in the cluster. These are configured in the resources section of the configuration file (within the rm element). Service-specific - Resources that are available to only one service. These are configured in each service section of the configuration file (within the rm element). This section describes how to add a global resource. For procedures about configuring service-specific resources, refer to Section 8.5.2, "Adding a Cluster Service to the Cluster" . To add a global cluster resource, follow the steps in this section. Open /etc/cluster/cluster.conf at any node in the cluster. Add a resources section within the rm element. For example: Populate it with resources according to the services you want to create. For example, here are resources that are to be used in an Apache service. They consist of a file system ( fs ) resource, an IP ( ip ) resource, and an Apache ( apache ) resource. Example 8.9, " cluster.conf File with Resources Added " shows an example of a cluster.conf file with the resources section added. Update the config_version attribute by incrementing its value (for example, changing from config_version="2" to config_version="3" ). Save /etc/cluster/cluster.conf . (Optional) Validate the file against the cluster schema ( cluster.rng ) by running the ccs_config_validate command. For example: Run the cman_tool version -r command to propagate the configuration to the rest of the cluster nodes. Verify that the updated configuration file has been propagated. Proceed to Section 8.5.2, "Adding a Cluster Service to the Cluster" . Example 8.9. cluster.conf File with Resources Added | [
"<rm> <resources> </resources> </rm>",
"<rm> <resources> <fs name=\"web_fs\" device=\"/dev/sdd2\" mountpoint=\"/var/www\" fstype=\"ext3\"/> <ip address=\"127.143.131.100\" monitor_link=\"yes\" sleeptime=\"10\"/> <apache config_file=\"conf/httpd.conf\" name=\"example_server\" server_root=\"/etc/httpd\" shutdown_wait=\"0\"/> </resources> </rm>",
"ccs_config_validate Configuration validates",
"<cluster name=\"mycluster\" config_version=\"3\"> <clusternodes> <clusternode name=\"node-01.example.com\" nodeid=\"1\"> <fence> <method name=\"APC\"> <device name=\"apc\" port=\"1\"/> </method> </fence> </clusternode> <clusternode name=\"node-02.example.com\" nodeid=\"2\"> <fence> <method name=\"APC\"> <device name=\"apc\" port=\"2\"/> </method> </fence> </clusternode> <clusternode name=\"node-03.example.com\" nodeid=\"3\"> <fence> <method name=\"APC\"> <device name=\"apc\" port=\"3\"/> </method> </fence> </clusternode> </clusternodes> <fencedevices> <fencedevice agent=\"fence_apc\" ipaddr=\"apc_ip_example\" login=\"login_example\" name=\"apc\" passwd=\"password_example\"/> </fencedevices> <rm> <failoverdomains> <failoverdomain name=\"example_pri\" nofailback=\"0\" ordered=\"1\" restricted=\"0\"> <failoverdomainnode name=\"node-01.example.com\" priority=\"1\"/> <failoverdomainnode name=\"node-02.example.com\" priority=\"2\"/> <failoverdomainnode name=\"node-03.example.com\" priority=\"3\"/> </failoverdomain> </failoverdomains> <resources> <fs name=\"web_fs\" device=\"/dev/sdd2\" mountpoint=\"/var/www\" fstype=\"ext3\"/> <ip address=\"127.143.131.100\" monitor_link=\"yes\" sleeptime=\"10\"/> <apache config_file=\"conf/httpd.conf\" name=\"example_server\" server_root=\"/etc/httpd\" shutdown_wait=\"0\"/> </resources> </rm> </cluster>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-ha-svc-cli-ca |
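After adding a service that references these global resources (see Section 8.5.2), the same validate-and-propagate cycle applies. The following sketch assumes the rgmanager tools are installed and uses a hypothetical service name, example_apache:

# Validate the edited configuration against the cluster schema
ccs_config_validate

# Propagate the new configuration version to the other cluster nodes
cman_tool version -r

# Check cluster and service status, then enable the new service (hypothetical name)
clustat
clusvcadm -e example_apache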
5.9. Devices | 5.9. Devices kernel component A Linux LIO FCoE target causes the bnx2fc driver to perform sequence level error recovery when the target is down. As a consequence, the FCoE session cannot be resumed after the Ethernet link is bounced, the bnx2fc kernel module cannot be unloaded, and the FCoE session cannot be removed when running the fcoeadm -d eth0 command. To avoid these problems, do not use the bnx2fc driver with a Linux FCoE target. kernel component When using a large block size (1 MB), the tape driver sometimes returns an EBUSY error. To work around this problem, use a smaller block size, that is, 256 KB. kernel component On some of the older Broadcom tg3 devices, the default Maximum Read Request Size (MRRS) value of 512 bytes is known to cause lower performance. This is because these devices perform direct memory access (DMA) requests serially. A 1500-byte Ethernet packet will be broken into 3 PCIe read requests when using a 512-byte MRRS. When using a higher MRRS value, the DMA transfer can be faster because fewer requests are needed. However, the MRRS value is meant to be tuned by system software and not by the driver. PCIe Base spec 3.0 section 7.8.4 contains an implementation note that illustrates how system software might tune the MRRS for all devices in the system. As a result, Broadcom modified the tg3 driver to remove the code that sets the MRRS to 4K bytes so that any value selected by system software (BIOS) will be preserved. kernel component The Brocade BFA Fibre Channel and FCoE driver does not currently support dynamic recognition of Logical Unit addition or removal using the sg3_utils utilities (for example, the sg_scan command) or similar functionality. Please consult Brocade directly for a Brocade equivalent of this functionality. kernel component iSCSI and FCoE boot support on Broadcom devices is not included in Red Hat Enterprise Linux 6.4. These two features, which are provided by the bnx2i and bnx2fc Broadcom drivers, remain a Technology Preview until further notice. kexec-tools component Starting with Red Hat Enterprise Linux 6.0 and later, kexec kdump supports dumping core to the Btrfs file system. However, note that because the findfs utility in busybox does not support Btrfs yet, UUID/LABEL resolving is not functional. Avoid using the UUID/LABEL syntax when dumping core to Btrfs file systems. trace-cmd component The trace-cmd service does not start on 64-bit PowerPC and IBM System z systems because the sys_enter and sys_exit events do not get enabled on the aforementioned systems. trace-cmd component trace-cmd 's subcommand, report , does not work on IBM System z systems. This is due to the fact that the CONFIG_FTRACE_SYSCALLS parameter is not set on IBM System z systems. libfprint component Red Hat Enterprise Linux 6 only has support for the first revision of the UPEK Touchstrip fingerprint reader (USB ID 147e:2016). Attempting to use a second revision device may cause the fingerprint reader daemon to crash. The following command returns the version of the device being used in an individual machine: kernel component The Emulex Fibre Channel/Fibre Channel-over-Ethernet (FCoE) driver in Red Hat Enterprise Linux 6 does not support DH-CHAP authentication. DH-CHAP authentication provides secure access between hosts and mass storage in Fibre-Channel and FCoE SANs in compliance with the FC-SP specification. Note, however, that the Emulex driver ( lpfc ) does support DH-CHAP authentication on Red Hat Enterprise Linux 5, from version 5.4.
Future Red Hat Enterprise Linux 6 releases may include DH-CHAP authentication. kernel component The recommended minimum HBA firmware revision for use with the mpt2sas driver is "Phase 5 firmware" (that is, with version number in the form 05.xx.xx.xx ). Note that following this recommendation is especially important on complex SAS configurations involving multiple SAS expanders. | [
"~]USD lsusb -v -d 147e:2016 | grep bcdDevice"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/devices_issues |
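For the tg3 MRRS note above, you can check which Maximum Read Request Size the platform firmware actually selected for a device without touching the driver. This is an illustrative sketch only; the PCI address is a placeholder:

# Show the MaxReadReq value currently programmed by system software (BIOS)
lspci -vv -s 0000:02:00.0 | grep MaxReadReq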
Chapter 1. Introduction | Chapter 1. Introduction 1.1. What is RHEL for SAP Solutions RHEL for SAP Solutions is a Red Hat subscription that consists of Red Hat Enterprise Linux and additional software repositories and services specifically designed for running SAP HANA and/or SAP ABAP Platform, including SAP S/4HANA, on Red Hat Enterprise Linux. For more information, refer to Overview of Red Hat Enterprise Linux for SAP Solutions Subscription . RHEL for SAP Solutions consists of the following repositories: 1.2. Overview of the installation steps Installing Red Hat Enterprise Linux (RHEL) 9 for SAP Solutions consists of the following steps: Install RHEL 9 using one of the standard Red Hat Enterprise Linux 9 installation ISO images. Install additional packages needed for running SAP software, either from another repository source, for example, a Red Hat Satellite system, or from the RHEL 9 for SAP Solutions image, which contains only a set of additional packages needed for SAP. Note There are also ISO images named Red Hat Enterprise Linux for SAP Solutions, but those only contain the additional software packages required to be installed on top of Red Hat Enterprise Linux. It is not possible to install RHEL for SAP Solutions from these ISO images. The recommended and easiest way is to install Red Hat Enterprise Linux 9, attach it to a repository source, and then use the RHEL System Roles for SAP for: installing additional software packages and configuring the system according to the requirements of the SAP software. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/installing_rhel_9_for_sap_solutions/con_overview_configuring-rhel-9-for-sap-hana2-installation |
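As a rough sketch of step 2, the additional SAP packages can be pulled from the E4S repositories once the system is registered. The repository IDs below are assumptions for the x86_64 platform, and the exact minor release to lock to depends on your SAP HANA certification; verify the IDs with subscription-manager repos --list:

# Lock the system to a specific minor release that is covered by E4S (example value)
subscription-manager release --set=9.2

# Enable the SAP-specific E4S repositories (assumed repository IDs for x86_64)
subscription-manager repos \
  --enable=rhel-9-for-x86_64-sap-solutions-e4s-rpms \
  --enable=rhel-9-for-x86_64-sap-netweaver-e4s-rpms

# Install the RHEL System Roles for SAP used for configuring the system
dnf install -y rhel-system-roles-sap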
Chapter 9. Using a VXLAN to create a virtual layer-2 domain for VMs | Chapter 9. Using a VXLAN to create a virtual layer-2 domain for VMs A virtual extensible LAN (VXLAN) is a networking protocol that tunnels layer-2 traffic over an IP network using the UDP protocol. For example, certain virtual machines (VMs), that are running on different hosts can communicate over a VXLAN tunnel. The hosts can be in different subnets or even in different data centers around the world. From the perspective of the VMs, other VMs in the same VXLAN are within the same layer-2 domain: In this example, RHEL-host-A and RHEL-host-B use a bridge, br0 , to connect the virtual network of a VM on each host with a VXLAN named vxlan10 . Due to this configuration, the VXLAN is invisible to the VMs, and the VMs do not require any special configuration. If you later connect more VMs to the same virtual network, the VMs are automatically members of the same virtual layer-2 domain. Important Just as normal layer-2 traffic, data in a VXLAN is not encrypted. For security reasons, use a VXLAN over a VPN or other types of encrypted connections. 9.1. Benefits of VXLANs A virtual extensible LAN (VXLAN) provides the following major benefits: VXLANs use a 24-bit ID. Therefore, you can create up to 16,777,216 isolated networks. For example, a virtual LAN (VLAN), supports only 4,096 isolated networks. VXLANs use the IP protocol. This enables you to route the traffic and virtually run systems in different networks and locations within the same layer-2 domain. Unlike most tunnel protocols, a VXLAN is not only a point-to-point network. A VXLAN can learn the IP addresses of the other endpoints either dynamically or use statically-configured forwarding entries. Certain network cards support UDP tunnel-related offload features. Additional resources /usr/share/doc/kernel-doc- <kernel_version> /Documentation/networking/vxlan.rst provided by the kernel-doc package 9.2. Configuring the Ethernet interface on the hosts To connect a RHEL VM host to the Ethernet, create a network connection profile, configure the IP settings, and activate the profile. Run this procedure on both RHEL hosts, and adjust the IP address configuration accordingly. Prerequisites The host is connected to the Ethernet. Procedure Add a new Ethernet connection profile to NetworkManager: Configure the IPv4 settings: Skip this step if the network uses DHCP. Activate the Example connection: Verification Display the status of the devices and connections: Ping a host in a remote network to verify the IP settings: Note that you cannot ping the other VM host before you have configured the network on that host as well. Additional resources nm-settings(5) man page on your system 9.3. Creating a network bridge with a VXLAN attached To make a virtual extensible LAN (VXLAN) invisible to virtual machines (VMs), create a bridge on a host, and attach the VXLAN to the bridge. Use NetworkManager to create both the bridge and the VXLAN. You do not add any traffic access point (TAP) devices of the VMs, typically named vnet* on the host, to the bridge. The libvirtd service adds them dynamically when the VMs start. Run this procedure on both RHEL hosts, and adjust the IP addresses accordingly. Procedure Create the bridge br0 : This command sets no IPv4 and IPv6 addresses on the bridge device, because this bridge works on layer 2. Create the VXLAN interface and attach it to br0 : This command uses the following settings: id 10 : Sets the VXLAN identifier. 
local 198.51.100.2 : Sets the source IP address of outgoing packets. remote 203.0.113.1 : Sets the unicast or multicast IP address to use in outgoing packets when the destination link layer address is not known in the VXLAN device forwarding database. master br0 : Sets this VXLAN connection to be created as a port in the br0 connection. ipv4.method disabled and ipv6.method disabled : Disables IPv4 and IPv6 on the bridge. By default, NetworkManager uses 8472 as the destination port. If the destination port is different, additionally, pass the destination-port <port_number> option to the command. Activate the br0 connection profile: Open port 8472 for incoming UDP connections in the local firewall: Verification Display the forwarding table: Additional resources nm-settings(5) man page on your system 9.4. Creating a virtual network in libvirt with an existing bridge To enable virtual machines (VM) to use the br0 bridge with the attached virtual extensible LAN (VXLAN), first add a virtual network to the libvirtd service that uses this bridge. Prerequisites You installed the libvirt package. You started and enabled the libvirtd service. You configured the br0 device with the VXLAN on RHEL. Procedure Create the ~/vxlan10-bridge.xml file with the following content: Use the ~/vxlan10-bridge.xml file to create a new virtual network in libvirt : Remove the ~/vxlan10-bridge.xml file: Start the vxlan10-bridge virtual network: Configure the vxlan10-bridge virtual network to start automatically when the libvirtd service starts: Verification Display the list of virtual networks: Additional resources virsh(1) man page on your system 9.5. Configuring virtual machines to use VXLAN To configure a VM to use a bridge device with an attached virtual extensible LAN (VXLAN) on the host, create a new VM that uses the vxlan10-bridge virtual network or update the settings of existing VMs to use this network. Perform this procedure on the RHEL hosts. Prerequisites You configured the vxlan10-bridge virtual network in libvirtd . Procedure To create a new VM and configure it to use the vxlan10-bridge network, pass the --network network: vxlan10-bridge option to the virt-install command when you create the VM: To change the network settings of an existing VM: Connect the VM's network interface to the vxlan10-bridge virtual network: Shut down the VM, and start it again: Verification Display the virtual network interfaces of the VM on the host: Display the interfaces attached to the vxlan10-bridge bridge: Note that the libvirtd service dynamically updates the bridge's configuration. When you start a VM which uses the vxlan10-bridge network, the corresponding vnet* device on the host appears as a port of the bridge. Use address resolution protocol (ARP) requests to verify whether VMs are in the same VXLAN: Start two or more VMs in the same VXLAN. Send an ARP request from one VM to the other one: If the command shows a reply, the VM is in the same layer-2 domain and, in this case in the same VXLAN. Install the iputils package to use the arping utility. Additional resources virt-install(1) and virt-xml(1) man pages on your system virsh(1) and arping(8) man pages on your system | [
"nmcli connection add con-name Example ifname enp1s0 type ethernet",
"nmcli connection modify Example ipv4.addresses 198.51.100.2/24 ipv4.method manual ipv4.gateway 198.51.100.254 ipv4.dns 198.51.100.200 ipv4.dns-search example.com",
"nmcli connection up Example",
"nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet connected Example",
"ping RHEL-host-B.example.com",
"nmcli connection add type bridge con-name br0 ifname br0 ipv4.method disabled ipv6.method disabled",
"nmcli connection add type vxlan slave-type bridge con-name br0-vxlan10 ifname vxlan10 id 10 local 198.51.100.2 remote 203.0.113.1 master br0",
"nmcli connection up br0",
"firewall-cmd --permanent --add-port=8472/udp firewall-cmd --reload",
"bridge fdb show dev vxlan10 2a:53:bd:d5:b3:0a master br0 permanent 00:00:00:00:00:00 dst 203.0.113.1 self permanent",
"<network> <name>vxlan10-bridge</name> <forward mode=\"bridge\" /> <bridge name=\"br0\" /> </network>",
"virsh net-define ~/vxlan10-bridge.xml",
"rm ~/vxlan10-bridge.xml",
"virsh net-start vxlan10-bridge",
"virsh net-autostart vxlan10-bridge",
"virsh net-list Name State Autostart Persistent ---------------------------------------------------- vxlan10-bridge active yes yes",
"virt-install ... --network network: vxlan10-bridge",
"virt-xml VM_name --edit --network network= vxlan10-bridge",
"virsh shutdown VM_name virsh start VM_name",
"virsh domiflist VM_name Interface Type Source Model MAC ------------------------------------------------------------------- vnet1 bridge vxlan10-bridge virtio 52:54:00:c5:98:1c",
"ip link show master vxlan10-bridge 18: vxlan10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default qlen 1000 link/ether 2a:53:bd:d5:b3:0a brd ff:ff:ff:ff:ff:ff 19: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN mode DEFAULT group default qlen 1000 link/ether 52:54:00:c5:98:1c brd ff:ff:ff:ff:ff:ff",
"arping -c 1 192.0.2.2 ARPING 192.0.2.2 from 192.0.2.1 enp1s0 Unicast reply from 192.0.2.2 [ 52:54:00:c5:98:1c ] 1.450ms Sent 1 probe(s) (0 broadcast(s)) Received 1 response(s) (0 request(s), 0 broadcast(s))"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/assembly_using-a-vxlan-to-create-a-virtual-layer-2-domain-for-vms_configuring-and-managing-networking |
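If the VMs cannot reach each other, it can help to confirm that encapsulated traffic actually leaves the host. The following sketch assumes the interface and default destination port used in this chapter (enp1s0 and UDP port 8472):

# Watch for VXLAN-encapsulated packets leaving the host
tcpdump -n -i enp1s0 udp port 8472

# Confirm that the VXLAN interface has learned the remote VTEP
bridge fdb show dev vxlan10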
20.4. Connecting to the Hypervisor with virsh Connect | 20.4. Connecting to the Hypervisor with virsh Connect The virsh connect [ hostname-or-URI ] [--readonly] command begins a local hypervisor session using virsh. After the first time you run this command, it will run automatically each time the virsh shell runs. The hypervisor connection URI specifies how to connect to the hypervisor. The most commonly used URIs are: qemu:///system - connects locally as the root user to the daemon supervising guest virtual machines on the KVM hypervisor. qemu:///session - connects locally as a user to the user's set of local guest machines using the KVM hypervisor. lxc:/// - connects to a local Linux container. The command can be run as follows, with the target guest being specified either by its machine name (hostname) or the URI of the hypervisor (the output of the virsh uri command), as shown: For example, to establish a session to connect to your set of guest virtual machines, with you as the local user: To initiate a read-only connection, append --readonly to the above command. For more information on URIs, see Remote URIs . If you are unsure of the URI, the virsh uri command will display it: | [
"virsh uri qemu:///session",
"virsh connect qemu:///session"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-generic_commands-connect |
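The same command also accepts remote URIs. The following sketch connects over SSH to a remote KVM host and lists its guests; the hostname is a placeholder, and the full URI syntax is described in the Remote URIs section referenced above:

# Connect to the system-level hypervisor on a remote host over SSH (placeholder hostname)
virsh connect qemu+ssh://root@remote-host.example.com/system

# Alternatively, run a single command against the remote URI
virsh -c qemu+ssh://root@remote-host.example.com/system list --all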
21.6. Sample Parameter File and CMS Configuration File | 21.6. Sample Parameter File and CMS Configuration File To change the parameter file, begin by extending the shipped generic.prm file. Example of generic.prm file: Example of redhat.conf file configuring a QETH network device (pointed to by CMSCONFFILE in generic.prm ): | [
"ro ramdisk_size=40000 cio_ignore=all,!condev CMSDASD=\"191\" CMSCONFFILE=\"redhat.conf\" vnc inst.repo=http://example.com/path/to/repository",
"NETTYPE=\"qeth\" SUBCHANNELS=\"0.0.0600,0.0.0601,0.0.0602\" PORTNAME=\"FOOBAR\" PORTNO=\"0\" LAYER2=\"1\" MACADDR=\"02:00:be:3a:01:f3\" HOSTNAME=\"foobar.systemz.example.com\" IPADDR=\"192.168.17.115\" NETMASK=\"255.255.255.0\" GATEWAY=\"192.168.17.254\" DNS=\"192.168.17.1\" SEARCHDNS=\"systemz.example.com:example.com\" DASD=\"200-203\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-parameter-configuration-files-samples-s390 |
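After the installed system boots, you can check that the devices named in the CMS configuration file came up as intended. The following sketch assumes the s390utils tools are installed and uses the device numbers from the example redhat.conf above:

# Verify the QETH interface defined by SUBCHANNELS, LAYER2, and PORTNO
lsqeth

# Verify that the DASD range 200-203 is online
lsdasd

# List the subchannels that were not hidden by cio_ignore
lscss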
Chapter 9. Authorization Modules | Chapter 9. Authorization Modules The following modules provide authorization services: Code Class DenyAll org.jboss.security.authorization.modules.AllDenyAuthorizationModule PermitAll org.jboss.security.authorization.modules.AllPermitAuthorizationModule Delegating org.jboss.security.authorization.modules.DelegatingAuthorizationModule Web org.jboss.security.authorization.modules.web.WebAuthorizationModule JACC org.jboss.security.authorization.modules.JACCAuthorizationModule XACML org.jboss.security.authorization.modules.XACMLAuthorizationModule AbstractAuthorizationModule This is the base authorization module which has to be overridden and provides a facility for delegating to other authorization modules. This base authorization module also provides a delegateMap property to the overriding class, which allows for delegation modules to be declared for specific components. This enables more specialized classes to handle the authorization for each layer, for example web , ejb , etc, since the information used to authorize a user may vary between the resources being accessed. For instance, an authorization module may be based on permissions, yet have different permission types for the web and ejb resources. By default, the authorization module would be forced to deal with all possible resource and permission types, but configuring the delegateMap option allows the module to delegate to specific classes for different resource types. The delegateMap option takes a comma-separated list of modules, each of which is prefixed by the component it relates to, for example <module-option name="delegateMap">web=xxx.yyy.MyWebDelegate,ejb=xxx.yyy.MyEJBDelegate</module-option> . Important When configuring the delegateMap option, every delegate must implement the authorize(Resource) method and have it call the invokeDelegate(Resource) method in same way the provided authorization modules do. Failure to do so will result in the delegate not getting called. AllDenyAuthorizationModule This is a simple authorization module that always denies an authorization request. No configuration options are available. AllPermitAuthorizationModule This is a simple authorization module that always permits an authorization request. No configuration options are available. DelegatingAuthorizationModule This is the default authorization module that delegates decision making to the configured delegates. This module also supports the delegateMap option. WebAuthorizationModule This is the default web authorization module with the default Tomcat authorization logic, permit all. JACCAuthorizationModule This module enforces Jakarta Authorization semantics using two delegates, WebJACCPolicyModuleDelegate for web container authorization requests and EJBJACCPolicyModuleDelegate for Jakarta Enterprise Beans container requests. This module also supports the delegateMap option. XACMLAuthorizationModule This module enforces XACML authorization using two delegates for web and Jakarta Enterprise Beans containers, WebXACMLPolicyModuleDelegate and EJBXACMLPolicyModuleDelegate . It creates a PDP object based on registered policies and evaluates web or Jakarta Enterprise Beans requests against it. This module also supports the delegateMap option. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/login_module_reference/authorization_modules |
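As an illustration of how one of these modules might be wired into a legacy security domain, the following management CLI sketch adds a Delegating policy module with a delegateMap option. The security domain name and delegate classes are placeholders, and the exact CLI paths should be verified against your JBoss EAP version:

# Hypothetical sketch: add a Delegating authorization module with per-layer delegates
./bin/jboss-cli.sh --connect '/subsystem=security/security-domain=my-domain/authorization=classic:add'
./bin/jboss-cli.sh --connect '/subsystem=security/security-domain=my-domain/authorization=classic/policy-module=Delegating:add(code=Delegating, flag=required, module-options={delegateMap="web=xxx.yyy.MyWebDelegate,ejb=xxx.yyy.MyEJBDelegate"})'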
Chapter 17. Bridging brokers | Chapter 17. Bridging brokers Bridges provide a method to connect two brokers, forwarding messages from one to the other. The following bridges are available: Core An example is provided that demonstrates a core bridge deployed on one broker, which consumes messages from a local queue and forwards them to an address on a second broker. See the core-bridge example that is located in the <install_dir> /examples/features/standard/ directory of your broker installation. Mirror See Chapter 16, Configuring a multi-site, fault-tolerant messaging system using broker connections Sender and receiver See Section 17.1, "Sender and receiver configurations for broker connections" Peer See Section 17.2, "Peer configurations for broker connections" Note The broker.xml element for Core bridges is bridge . The other bridging techniques use the <broker-connection> element. 17.1. Sender and receiver configurations for broker connections It is possible to connect a broker to another broker by creating a sender or receiver broker connection element in the <broker-connections> section of broker.xml . For a sender , the broker creates a message consumer on a queue that sends messages to another broker. For a receiver , the broker creates a message producer on an address that receives messages from another broker. Both elements function as a message bridge. However, there is no additional overhead required to process messages. Senders and receivers behave just like any other consumer or producer in a broker. Specific queues can be configured by senders or receivers. Wildcard expressions can be used to match senders and receivers to specific addresses or sets of addresses. When configuring a sender or receiver, the following properties can be set: address-match : Match the sender or receiver to a specific address or set of addresses, using a wildcard expression. queue-name : Configure the sender or receiver for a specific queue. Using address expressions: <broker-connections> <amqp-connection uri="tcp://HOST:PORT" name="other-server"> <sender address-match="queues.#"/> <!-- notice the local queues for remotequeues.# need to be created on this broker --> <receiver address-match="remotequeues.#"/> </amqp-connection> </broker-connections> <addresses> <address name="remotequeues.A"> <anycast> <queue name="remoteQueueA"/> </anycast> </address> <address name="queues.B"> <anycast> <queue name="localQueueB"/> </anycast> </address> </addresses> Using queue names: <broker-connections> <amqp-connection uri="tcp://HOST:PORT" name="other-server"> <receiver queue-name="remoteQueueA"/> <sender queue-name="localQueueB"/> </amqp-connection> </broker-connections> <addresses> <address name="remotequeues.A"> <anycast> <queue name="remoteQueueA"/> </anycast> </address> <address name="queues.B"> <anycast> <queue name="localQueueB"/> </anycast> </address> </addresses> Note Receivers can only be matched to a local queue that already exists. Therefore, if receivers are being used, ensure that queues are pre-created locally. Otherwise, the broker cannot match the remote queues and addresses. Note Do not create a sender and a receiver with the same destination because this creates an infinite loop of sends and receives. 17.2. Peer configurations for broker connections The broker can be configured as a peer which connects to a AMQ Interconnect instance and instructs it that the broker will act as a store-and-forward queue for a given AMQP waypoint address configured on that router. 
In this scenario, clients connect to a router to send and receive messages using a waypoint address, and the router routes these messages to or from the queue on the broker. This peer configuration creates a sender and receiver pair for each destination matched in the broker connections configuration on the broker. These pairs include configurations that enable the router to collaborate with the broker. This feature avoids the requirement for the router to initiate a connection and create auto-links. For more information about possible router configurations, see Using the AMQ Interconnect router . With a peer configuration, the same properties are present as when there are senders and receivers. For example, a configuration where queues with names beginning queue . act as storage for the matching router waypoint address would be: <broker-connections> <amqp-connection uri="tcp://HOST:PORT" name="router"> <peer address-match="queues.#"/> </amqp-connection> </broker-connections> <addresses> <address name="queues.A"> <anycast> <queue name="queues.A"/> </anycast> </address> <address name="queues.B"> <anycast> <queue name="queues.B"/> </anycast> </address> </addresses> There must be a matching address waypoint configuration on the router. This instructs it to treat the particular router addresses the broker attaches to as waypoints. For example, see the following prefix-based router address configuration: For more information on this option, see Using the AMQ Interconnect router . Note Do not use the peer option to connect directly to another broker. If you use this option to connect to another broker, all messages become immediately ready to consume, creating an infinite echo of sends and receives. | [
"<broker-connections> <amqp-connection uri=\"tcp://HOST:PORT\" name=\"other-server\"> <sender address-match=\"queues.#\"/> <!-- notice the local queues for remotequeues.# need to be created on this broker --> <receiver address-match=\"remotequeues.#\"/> </amqp-connection> </broker-connections> <addresses> <address name=\"remotequeues.A\"> <anycast> <queue name=\"remoteQueueA\"/> </anycast> </address> <address name=\"queues.B\"> <anycast> <queue name=\"localQueueB\"/> </anycast> </address> </addresses>",
"<broker-connections> <amqp-connection uri=\"tcp://HOST:PORT\" name=\"other-server\"> <receiver queue-name=\"remoteQueueA\"/> <sender queue-name=\"localQueueB\"/> </amqp-connection> </broker-connections> <addresses> <address name=\"remotequeues.A\"> <anycast> <queue name=\"remoteQueueA\"/> </anycast> </address> <address name=\"queues.B\"> <anycast> <queue name=\"localQueueB\"/> </anycast> </address> </addresses>",
"<broker-connections> <amqp-connection uri=\"tcp://HOST:PORT\" name=\"router\"> <peer address-match=\"queues.#\"/> </amqp-connection> </broker-connections> <addresses> <address name=\"queues.A\"> <anycast> <queue name=\"queues.A\"/> </anycast> </address> <address name=\"queues.B\"> <anycast> <queue name=\"queues.B\"/> </anycast> </address> </addresses>",
"address { prefix: queue waypoint: yes }"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/configuring_amq_broker/bridging-brokers-configuring |
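Because a receiver can only match a local queue that already exists, you may need to pre-create those queues before the broker connection becomes useful. The following sketch uses the artemis CLI with the queue name from the example; the broker URL and credentials are placeholders:

# Pre-create the local anycast queue that the receiver for remotequeues.A will match
./bin/artemis queue create --name remoteQueueA --address remotequeues.A \
  --anycast --durable --auto-create-address \
  --url tcp://localhost:61616 --user admin --password admin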
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/getting_started_with_security/making-open-source-more-inclusive |
Schedule and quota APIs | Schedule and quota APIs OpenShift Container Platform 4.14 Reference guide for schedule and quota APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/schedule_and_quota_apis/index |
function::proc_mem_size | function::proc_mem_size Name function::proc_mem_size - Total program virtual memory size in pages Synopsis Arguments None Description Returns the total virtual memory size in pages of the current process, or zero when there is no current process or the number of pages couldn't be retrieved. | [
"proc_mem_size:long()"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-proc-mem-size |
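A minimal way to see this function in action is a one-line SystemTap script that prints the virtual memory size of whatever process triggers a probe. The probe point below is only an example:

# Print the virtual memory size (in pages) of processes calling brk()
stap -e 'probe syscall.brk { printf("%s(%d): %d pages\n", execname(), pid(), proc_mem_size()) }'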
Chapter 6. Installer-provisioned postinstallation configuration | Chapter 6. Installer-provisioned postinstallation configuration After successfully deploying an installer-provisioned cluster, consider the following postinstallation procedures. 6.1. Optional: Configuring NTP for disconnected clusters OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. Use the following procedure to configure NTP servers on the control plane nodes and configure worker nodes as NTP clients of the control plane nodes after a successful deployment. OpenShift Container Platform nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server. Procedure Install Butane on your installation host by using the following command: USD sudo dnf -y install butane Create a Butane config, 99-master-chrony-conf-override.bu , including the contents of the chrony.conf file for the control plane nodes. Note See "Creating machine configs with Butane" for information about Butane. Butane config example variant: openshift version: 4.14.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml , containing the configuration to be delivered to the control plane nodes: USD butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml Create a Butane config, 99-worker-chrony-conf-override.bu , including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes. Butane config example variant: openshift version: 4.14.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. 
server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml , containing the configuration to be delivered to the worker nodes: USD butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml Apply the 99-master-chrony-conf-override.yaml policy to the control plane nodes. USD oc apply -f 99-master-chrony-conf-override.yaml Example output machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created Apply the 99-worker-chrony-conf-override.yaml policy to the worker nodes. USD oc apply -f 99-worker-chrony-conf-override.yaml Example output machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created Check the status of the applied NTP settings. USD oc describe machineconfigpool 6.2. Enabling a provisioning network after installation The assisted installer and installer-provisioned installation for bare metal clusters provide the ability to deploy a cluster without a provisioning network. This capability is for scenarios such as proof-of-concept clusters or deploying exclusively with Redfish virtual media when each node's baseboard management controller is routable via the baremetal network. You can enable a provisioning network after installation using the Cluster Baremetal Operator (CBO). Prerequisites A dedicated physical network must exist, connected to all worker and control plane nodes. You must isolate the native, untagged physical network. The network cannot have a DHCP server when the provisioningNetwork configuration setting is set to Managed . You can omit the provisioningInterface setting in OpenShift Container Platform 4.10 to use the bootMACAddress configuration setting. Procedure When setting the provisioningInterface setting, first identify the provisioning interface name for the cluster nodes. For example, eth0 or eno1 . Enable the Preboot eXecution Environment (PXE) on the provisioning network interface of the cluster nodes. Retrieve the current state of the provisioning network and save it to a provisioning custom resource (CR) file: USD oc get provisioning -o yaml > enable-provisioning-nw.yaml Modify the provisioning CR file: USD vim ~/enable-provisioning-nw.yaml Scroll down to the provisioningNetwork configuration setting and change it from Disabled to Managed . Then, add the provisioningIP , provisioningNetworkCIDR , provisioningDHCPRange , provisioningInterface , and watchAllNameSpaces configuration settings after the provisioningNetwork setting. Provide appropriate values for each setting. apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: 1 provisioningIP: 2 provisioningNetworkCIDR: 3 provisioningDHCPRange: 4 provisioningInterface: 5 watchAllNameSpaces: 6 1 The provisioningNetwork is one of Managed , Unmanaged , or Disabled . When set to Managed , Metal3 manages the provisioning network and the CBO deploys the Metal3 pod with a configured DHCP server. 
When set to Unmanaged , the system administrator configures the DHCP server manually. 2 The provisioningIP is the static IP address that the DHCP server and ironic use to provision the network. This static IP address must be within the provisioning subnet, and outside of the DHCP range. If you configure this setting, it must have a valid IP address even if the provisioning network is Disabled . The static IP address is bound to the metal3 pod. If the metal3 pod fails and moves to another server, the static IP address also moves to the new server. 3 The Classless Inter-Domain Routing (CIDR) address. If you configure this setting, it must have a valid CIDR address even if the provisioning network is Disabled . For example: 192.168.0.1/24 . 4 The DHCP range. This setting is only applicable to a Managed provisioning network. Omit this configuration setting if the provisioning network is Disabled . For example: 192.168.0.64, 192.168.0.253 . 5 The NIC name for the provisioning interface on cluster nodes. The provisioningInterface setting is only applicable to Managed and Unmanaged provisioning networks. Omit the provisioningInterface configuration setting if the provisioning network is Disabled . Omit the provisioningInterface configuration setting to use the bootMACAddress configuration setting instead. 6 Set this setting to true if you want metal3 to watch namespaces other than the default openshift-machine-api namespace. The default value is false . Save the changes to the provisioning CR file. Apply the provisioning CR file to the cluster: USD oc apply -f enable-provisioning-nw.yaml 6.3. Services for an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 6.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 6.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 6.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for external load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. 
Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information: For the front-end IP address, you can use the same IP address for the Ingress Controller's load balancer and the API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. Otherwise, if the Ingress Controller moves to an undefined node, a connection outage can occur. 6.3.1. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, which runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system from a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address and ports 80 and 443 are reachable by all users of your system from a location external to your OpenShift Container Platform cluster. The front-end IP address and ports 80 and 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.
The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... 
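Before verifying the endpoints, you can optionally confirm that the load balancer accepted the new configuration. The following commands are a minimal sketch that assumes HAProxy is installed as a systemd service on a RHEL-based host and reads the default /etc/haproxy/haproxy.cfg file; adjust the path and service name for your environment.
# Validate the configuration file syntax before applying it
haproxy -c -f /etc/haproxy/haproxy.cfg
# Reload the running service so the new frontends and backends take effect
systemctl reload haproxy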
Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. 
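One way to confirm that the records have propagated is to query them directly before running the verification commands. The following dig queries are an illustrative sketch that assumes the API and application records shown above; both should return the front-end IP address of the external load balancer, assuming the apps record covers the console route (for example, as a wildcard).
# Query the API record
dig +short api.<cluster_name>.<base_domain>
# Query an application hostname that resolves through the apps record
dig +short console-openshift-console.apps.<cluster_name>.<base_domain>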
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private | [
"sudo dnf -y install butane",
"variant: openshift version: 4.14.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan",
"butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml",
"variant: openshift version: 4.14.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony",
"butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml",
"oc apply -f 99-master-chrony-conf-override.yaml",
"machineconfig.machineconfiguration.openshift.io/99-master-chrony-conf-override created",
"oc apply -f 99-worker-chrony-conf-override.yaml",
"machineconfig.machineconfiguration.openshift.io/99-worker-chrony-conf-override created",
"oc describe machineconfigpool",
"oc get provisioning -o yaml > enable-provisioning-nw.yaml",
"vim ~/enable-provisioning-nw.yaml",
"apiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: 1 provisioningIP: 2 provisioningNetworkCIDR: 3 provisioningDHCPRange: 4 provisioningInterface: 5 watchAllNameSpaces: 6",
"oc apply -f enable-provisioning-nw.yaml",
"Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10",
"Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10",
"# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache",
"curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End",
"curl https://api.<cluster_name>.<base_domain>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure",
"HTTP/1.1 200 OK Content-Length: 0",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private",
"curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-post-installation-configuration |
Chapter 6. Configuring the systems and running tests using Cockpit | Chapter 6. Configuring the systems and running tests using Cockpit To complete the certification process, you must configure cockpit, prepare the host under test (HUT) and test server, run the tests, and retrieve the test results. 6.1. Setting up the Cockpit server Cockpit is a RHEL tool that lets you change the configuration of your systems as well as monitor their resources from a user-friendly web-based interface. Note You must set up Cockpit on a new system, which is separate from the host under test and test server. Ensure that the Cockpit has access to both the host under test and the test server. For more information on installing and configuring Cockpit, see Getting Started using the RHEL web console on RHEL 8, Getting Started using the RHEL web console on RHEL 9 and Introducing Cockpit . Prerequisites The Cockpit server has RHEL version 8 or 9 installed. You have installed the Cockpit plugin on your system. You have enabled the Cockpit service. Procedure Log in to the system where you installed Cockpit. Install the Cockpit RPM provided by the Red Hat Certification team. You must run Cockpit on port 9090. 6.2. Adding the host under test and the test server to Cockpit Adding the host under test (HUT) and test server to Cockpit lets the two systems communicate by using passwordless SSH. Repeat this procedure for adding both the systems one by one. Prerequisites You have the IP address or hostname of the HUT and the test server. Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser to launch the Cockpit web application. Enter the username and password, and then click Login . Click the down-arrow on the logged-in cockpit user name-> Add new host . The dialog box displays. In the Host field, enter the IP address or hostname of the system. In the User name field, enter the name you want to assign to this system. Optional: Select the predefined color or select a new color of your choice for the host added. Click Add . Click Accept key and connect to let Cockpit communicate with the system through passwordless SSH. Enter the Password . Select the Authorize SSH Key checkbox. Click Log in . Verification On the left panel, click Tools -> Red Hat Certification and verify that the system you just added displays under the Hosts section on the right. 6.3. Getting authorization on the Red Hat SSO network Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. On the Cockpit homepage, click Authorize , to establish connectivity with the Red Hat system. The Log in to your Red Hat account page displays. Enter your credentials and click . The Grant access to rhcert-cwe page displays. Click Grant access . A confirmation message displays a successful device login. You are now connected to the Cockpit web application. 6.4. Downloading test plans in Cockpit from Red Hat certification portal For Non-authorized or limited access users: To download the test plan, see Downloading the test plan from Red Hat Certification portal . For authorized users: Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Test Plans tab. A list of Recent Certification Support Cases will appear. 
Click Download Test Plan . A message displays confirming the successful addition of the test plan. The downloaded test plan will be listed under the File Name of the Test Plan Files section. 6.5. Using the test plan to prepare the host under test for testing Provisioning the host under test performs a number of operations, such as setting up passwordless SSH communication with the cockpit, installing the required packages on your system based on the certification type, and creating a final test plan to run, which is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. For instance, required hardware packages are installed if the test plan is designed for certifying a hardware product. Prerequisites You have downloaded the test plan provided by Red Hat . Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Hosts tab, and then click the host under test on which you want to run the tests. Click Provision . A dialog box appears. Click Upload, and then select the new test plan .xml file. Then, click . A successful upload message is displayed. Optionally, if you want to reuse the previously uploaded test plan, then select it again to reupload. Note During the certification process, if you receive a redesigned test plan for the ongoing product certification, then you can upload it following the step. However, you must run rhcert-clean all in the Terminal tab before proceeding. In the Role field, select Host under test and click Submit . By default, the file is uploaded to path, /var/rhcert/plans/<testplanfile.xml> . 6.6. Using the test plan to prepare the test server for testing Running the Provision Host command enables and starts the rhcertd service, which configures services specified in the test suite on the test server, such as iperf for network testing, and an nfs mount point used in kdump testing. Prerequisites You have downloaded the test plan provided by Red Hat . Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools -> Red Hat Certification in the left panel. Click the Hosts tab, and then click the host under test on which you want to run the tests. Click Provision . A dialog box appears. Click Upload, and then select the new test plan .xml file. Then, click . A successful upload message is displayed. Optionally, if you want to reuse the previously uploaded test plan, then select it again to reupload. Note During the certification process, if you receive a redesigned test plan for the ongoing product certification, then you can upload it following the step. However, you must run rhcert-clean all in the Terminal tab before proceeding. In the Role field, select Test server and click Submit . By default, the file is uploaded to the /var/rhcert/plans/<testplanfile.xml> path. 6.7. Running the certification tests using Cockpit Prerequisites You have prepared the host under test . You have prepared the test server . Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and click Login . Select Tools Red Hat Certification in the left panel. 
Click the Hosts tab, and then click the host on which you want to run the tests. Click the Terminal tab and select Run. A list of recommended tests based on the uploaded test plan is displayed. The final test plan to run is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. When prompted, choose whether to run each test by typing yes or no . You can also run particular tests from the list by typing select . 6.8. Reviewing and downloading the test results file Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools -> Red Hat Certification in the left panel. Click the Result Files tab to view the test results generated. Optional: Click Preview to view the results of each test. Click Download beside the result files. By default, the result file is saved as /var/rhcert/save/hostname-date-time.xml . 6.9. Submitting the test results from Cockpit to the Red Hat Certification Portal Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools -> Red Hat Certification in the left panel. Click the Result Files tab and select the case number from the displayed list. For authorized users, click Submit . A message displays confirming the successful upload of the test result file. For non-authorized users, see Uploading the results file of the executed test plan to Red Hat Certification portal . The test result file of the executed test plan will be uploaded to the Red Hat Certification portal. 6.10. Uploading the test results file to Red Hat Certification Tool Use the Red Hat Certification Tool to submit the test results file of the executed test plan to the Red Hat Certification team. Prerequisites You have downloaded the test results file from Cockpit or HUT. Procedure Log in to Red Hat Certification Tool . On the homepage, enter the product case number in the search bar. Select the case number from the list that is displayed. On the Summary tab, under the Files section, click Upload . Next steps Red Hat reviews the results file you submitted and suggests the next steps. For more information, visit Red Hat Certification Tool . | [
"yum install redhat-certification-cockpit"
] | https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_hardware_certification_test_suite_user_guide/assembly_configuring-the-hosts-and-running-tests-by-using-Cockpit_hw-test-suite-setting-test-environment |
Chapter 5. Installing zone aware sample application | Chapter 5. Installing zone aware sample application Deploy a zone aware sample application to validate whether an OpenShift Data Foundation Metro-DR setup is configured correctly. Important With latency between the data zones, one can expect to see performance degradation compared to an OpenShift cluster with low latency between nodes and zones (for example, all nodes in the same location). How much the performance degrades depends on the latency between the zones and on the application behavior using the storage (such as heavy write traffic). Ensure that you test the critical applications with the Metro-DR cluster configuration to ensure sufficient application performance for the required service levels. 5.1. Install Zone Aware Sample Application A ReadWriteMany (RWX) Persistent Volume Claim (PVC) is created using the ocs-storagecluster-cephfs storage class. Multiple pods use the newly created RWX PVC at the same time. The application used is called File Uploader. The following steps demonstrate how an application is spread across topology zones so that it is still available in the event of a site outage: Note This demonstration is possible since this application shares the same RWX volume for storing files. It works for persistent data access as well because Red Hat OpenShift Data Foundation is configured as a Metro-DR stretched cluster with zone awareness and high availability. Create a new project. Deploy the example PHP application called file-uploader. Example Output: View the build log and wait until the application is deployed. Example Output: The command prompt returns out of the tail mode once you see Push successful . Note The new-app command deploys the application directly from the git repository and does not use the OpenShift template, hence the OpenShift route resource is not created by default. You need to create the route manually. Scaling the application Scale the application to four replicas and expose its services to make the application zone aware and available. You should have four file-uploader pods in a few minutes. Repeat the above command until there are four file-uploader pods in the Running status. Create a PVC and attach it to the application. This command: Creates a PVC. Updates the application deployment to include a volume definition. Updates the application deployment to attach a volume mount into the specified mount-path. Creates a new deployment with the four application pods. Check the result of adding the volume. Example Output: Notice the ACCESS MODE is set to RWX. All four file-uploader pods are using the same RWX volume. Without this access mode, OpenShift does not attempt to attach multiple pods to the same Persistent Volume (PV) reliably. If you attempt to scale up the deployments that are using a ReadWriteOnce (RWO) PV, the pods may get colocated on the same node. 5.2. Modify Deployment to be Zone Aware Currently, the file-uploader Deployment is not zone aware and can schedule all the pods in the same zone. In this case, if there is a site outage, the application becomes unavailable. For more information, see Controlling pod placement by using pod topology spread constraints . Add the pod placement rule in the application deployment configuration to make the application zone aware. Run the following command, and review the output: Example Output: Edit the deployment to use the topology zone labels.
Add the following new lines between the Start and End markers (shown in the output in the previous step): Example output: Scale down the deployment to zero pods and then back to four pods. This is needed because the deployment changed in terms of pod placement. Scaling down to zero pods Example output: Scaling up to four pods Example output: Verify that the four pods are spread across the four nodes in datacenter1 and datacenter2 zones. Example output: Search for the zone labels used. Example output: Use the file-uploader web application in your browser to upload new files. Find the route that is created. Example Output: Point your browser to the web application using the route from the previous step. The web application lists all the uploaded files and offers the ability to upload new ones as well as download the existing data. Right now, there is nothing. Select an arbitrary file from your local machine and upload it to the application. Click Choose file to select an arbitrary file. Click Upload . Figure 5.1. A simple PHP-based file upload tool Click List uploaded files to see the list of all currently uploaded files. Note The OpenShift Container Platform image registry, ingress routing, and monitoring services are not zone aware. | [
"oc new-project my-shared-storage",
"oc new-app openshift/php:7.3-ubi8~https://github.com/christianh814/openshift-php-upload-demo --name=file-uploader",
"Found image 4f2dcc0 (9 days old) in image stream \"openshift/php\" under tag \"7.2-ubi8\" for \"openshift/php:7.2- ubi8\" Apache 2.4 with PHP 7.2 ----------------------- PHP 7.2 available as container is a base platform for building and running various PHP 7.2 applications and frameworks. PHP is an HTML-embedded scripting language. PHP attempts to make it easy for developers to write dynamically generated web pages. PHP also offers built-in database integration for several commercial and non-commercial database management systems, so writing a database-enabled webpage with PHP is fairly simple. The most common use of PHP coding is probably as a replacement for CGI scripts. Tags: builder, php, php72, php-72 * A source build using source code from https://github.com/christianh814/openshift-php-upload-demo will be cr eated * The resulting image will be pushed to image stream tag \"file-uploader:latest\" * Use 'oc start-build' to trigger a new build --> Creating resources imagestream.image.openshift.io \"file-uploader\" created buildconfig.build.openshift.io \"file-uploader\" created deployment.apps \"file-uploader\" created service \"file-uploader\" created --> Success Build scheduled, use 'oc logs -f buildconfig/file-uploader' to track its progress. Application is not exposed. You can expose services to the outside world by executing one or more of the commands below: 'oc expose service/file-uploader' Run 'oc status' to view your app.",
"oc logs -f bc/file-uploader -n my-shared-storage",
"Cloning \"https://github.com/christianh814/openshift-php-upload-demo\" [...] Generating dockerfile with builder image image-registry.openshift-image-regis try.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610c 0e05b593844b41d5494ea STEP 1: FROM image-registry.openshift-image-registry.svc:5000/openshift/php@s ha256:d97466f33999951739a76bce922ab17088885db610c0e05b593844b41d5494ea STEP 2: LABEL \"io.openshift.build.commit.author\"=\"Christian Hernandez <christ [email protected]>\" \"io.openshift.build.commit.date\"=\"Sun Oct 1 1 7:15:09 2017 -0700\" \"io.openshift.build.commit.id\"=\"288eda3dff43b02f7f7 b6b6b6f93396ffdf34cb2\" \"io.openshift.build.commit.ref\"=\"master\" \" io.openshift.build.commit.message\"=\"trying to modularize\" \"io.openshift .build.source-location\"=\"https://github.com/christianh814/openshift-php-uploa d-demo\" \"io.openshift.build.image\"=\"image-registry.openshift-image-regi stry.svc:5000/openshift/php@sha256:d97466f33999951739a76bce922ab17088885db610 c0e05b593844b41d5494ea\" STEP 3: ENV OPENSHIFT_BUILD_NAME=\"file-uploader-1\" OPENSHIFT_BUILD_NAMESP ACE=\"my-shared-storage\" OPENSHIFT_BUILD_SOURCE=\"https://github.com/christ ianh814/openshift-php-upload-demo\" OPENSHIFT_BUILD_COMMIT=\"288eda3dff43b0 2f7f7b6b6b6f93396ffdf34cb2\" STEP 4: USER root STEP 5: COPY upload/src /tmp/src STEP 6: RUN chown -R 1001:0 /tmp/src STEP 7: USER 1001 STEP 8: RUN /usr/libexec/s2i/assemble ---> Installing application source => sourcing 20-copy-config.sh ---> 17:24:39 Processing additional arbitrary httpd configuration provide d by s2i => sourcing 00-documentroot.conf => sourcing 50-mpm-tuning.conf => sourcing 40-ssl-certs.sh STEP 9: CMD /usr/libexec/s2i/run STEP 10: COMMIT temp.builder.openshift.io/my-shared-storage/file-uploader-1:3 b83e447 Getting image source signatures [...]",
"oc expose svc/file-uploader -n my-shared-storage",
"oc scale --replicas=4 deploy/file-uploader -n my-shared-storage",
"oc get pods -o wide -n my-shared-storage",
"oc set volume deploy/file-uploader --add --name=my-shared-storage -t pvc --claim-mode=ReadWriteMany --claim-size=10Gi --claim-name=my-shared-storage --claim-class=ocs-storagecluster-cephfs --mount-path=/opt/app-root/src/uploaded -n my-shared-storage",
"oc get pvc -n my-shared-storage",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE my-shared-storage Bound pvc-5402cc8a-e874-4d7e-af76-1eb05bd2e7c7 10Gi RWX ocs-storagecluster-cephfs 52s",
"oc get deployment file-uploader -o yaml -n my-shared-storage | less",
"[...] spec: progressDeadlineSeconds: 600 replicas: 4 revisionHistoryLimit: 10 selector: matchLabels: deployment: file-uploader strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: annotations: openshift.io/generated-by: OpenShiftNewApp creationTimestamp: null labels: deployment: file-uploader spec: # <-- Start inserted lines after here containers: # <-- End inserted lines before here - image: image-registry.openshift-image-registry.svc:5000/my-shared-storage/file-uploader@sha256:a458ea62f990e431ad7d5f84c89e2fa27bdebdd5e29c5418c70c56eb81f0a26b imagePullPolicy: IfNotPresent name: file-uploader [...]",
"oc edit deployment file-uploader -n my-shared-storage",
"[...] spec: topologySpreadConstraints: - labelSelector: matchLabels: deployment: file-uploader maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: deployment: file-uploader maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: ScheduleAnyway nodeSelector: node-role.kubernetes.io/worker: \"\" containers: [...]",
"deployment.apps/file-uploader edited",
"oc scale deployment file-uploader --replicas=0 -n my-shared-storage",
"deployment.apps/file-uploader scaled",
"oc scale deployment file-uploader --replicas=4 -n my-shared-storage",
"deployment.apps/file-uploader scaled",
"oc get pods -o wide -n my-shared-storage | egrep '^file-uploader'| grep -v build | awk '{print USD7}' | sort | uniq -c",
"1 perf1-mz8bt-worker-d2hdm 1 perf1-mz8bt-worker-k68rv 1 perf1-mz8bt-worker-ntkp8 1 perf1-mz8bt-worker-qpwsr",
"oc get nodes -L topology.kubernetes.io/zone | grep datacenter | grep -v master",
"perf1-mz8bt-worker-d2hdm Ready worker 35d v1.20.0+5fbfd19 datacenter1 perf1-mz8bt-worker-k68rv Ready worker 35d v1.20.0+5fbfd19 datacenter1 perf1-mz8bt-worker-ntkp8 Ready worker 35d v1.20.0+5fbfd19 datacenter2 perf1-mz8bt-worker-qpwsr Ready worker 35d v1.20.0+5fbfd19 datacenter2",
"oc get route file-uploader -n my-shared-storage -o jsonpath --template=\"http://{.spec.host}{'\\n'}\"",
"http://file-uploader-my-shared-storage.apps.cluster-ocs4-abdf.ocs4-abdf.sandbox744.opentlc.com"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/configuring_openshift_data_foundation_for_metro-dr_stretch_cluster/installing_zone_aware_sample_application |
Chapter 5. Customizing the Ceph Storage cluster | Chapter 5. Customizing the Ceph Storage cluster Director deploys containerized Red Hat Ceph Storage using a default configuration. You can customize Ceph Storage by overriding the default settings. Prerequistes To deploy containerized Ceph Storage you must include the /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml file during overcloud deployment. This environment file defines the following resources: CephAnsibleDisksConfig - This resource maps the Ceph Storage node disk layout. For more information, see Section 5.3, "Mapping the Ceph Storage node disk layout" . CephConfigOverrides - This resource applies all other custom settings to your Ceph Storage cluster. Use these resources to override any defaults that the director sets for containerized Ceph Storage. Procedure Enable the Red Hat Ceph Storage 4 Tools repository: Install the ceph-ansible package on the undercloud: To customize your Ceph Storage cluster, define custom parameters in a new environment file, for example, /home/stack/templates/ceph-config.yaml . You can apply Ceph Storage cluster settings with the following syntax in the parameter_defaults section of your environment file: Note You can apply the CephConfigOverrides parameter to the [global] section of the ceph.conf file, as well as any other section, such as [osd] , [mon] , and [client] . If you specify a section, the key:value data goes into the specified section. If you do not specify a section, the data goes into the [global] section by default. For information about Ceph Storage configuration, customization, and supported parameters, see Red Hat Ceph Storage Configuration Guide . Replace KEY and VALUE with the Ceph cluster settings that you want to apply. For example, in the global section, max_open_files is the KEY and 131072 is the corresponding VALUE : This configuration results in the following settings defined in the configuration file of your Ceph cluster: 5.1. Setting ceph-ansible group variables The ceph-ansible tool is a playbook used to install and manage Ceph Storage clusters. The ceph-ansible tool has a group_vars directory that defines configuration options and the default settings for those options. Use the group_vars directory to set Ceph Storage parameters. For information about the group_vars directory, see Installing a Red Hat Ceph Storage cluster in the Installation Guide . To change the variable defaults in director, use the CephAnsibleExtraConfig parameter to pass the new values in heat environment files. For example, to set the ceph-ansible group variable journal_size to 40960, create an environment file with the following journal_size definition: Important Change ceph-ansible group variables with the override parameters; do not edit group variables directly in the /usr/share/ceph-ansible directory on the undercloud. 5.2. Ceph containers for Red Hat OpenStack Platform with Ceph Storage A Ceph container is required to configure Red Hat OpenStack Platform (RHOSP) to use Ceph, even with an external Ceph cluster. To be compatible with Red Hat Enterprise Linux 8, RHOSP 16.0 requires Red Hat Ceph Storage 4. The Ceph Storage 4 container is hosted at registry.redhat.io, a registry which requires authentication. You can use the heat environment parameter ContainerImageRegistryCredentials to authenticate at registry.redhat.io , as described in Container image preparation parameters . 5.3. 
Mapping the Ceph Storage node disk layout When you deploy containerized Ceph Storage, you must map the disk layout and specify dedicated block devices for the Ceph OSD service. You can perform this mapping in the environment file that you created earlier to define your custom Ceph parameters: /home/stack/templates/ceph-config.yaml . Use the CephAnsibleDisksConfig resource in parameter_defaults to map your disk layout. This resource uses the following variables: Variable Required? Default value (if unset) Description osd_scenario Yes lvm NOTE: The default value is lvm . The lvm value allows ceph-ansible to use ceph-volume to configure OSDs and BlueStore WAL devices. devices Yes NONE. Variable must be set. A list of block devices that you want to use for OSDs on the node. dedicated_devices Yes (only if osd_scenario is non-collocated ) devices A list of block devices that maps each entry in the devices parameter to a dedicated journaling block device. You can use this variable only when osd_scenario=non-collocated . dmcrypt No false Sets whether data stored on OSDs is encrypted ( true ) or unencrypted ( false ). osd_objectstore No bluestore NOTE: The default value is bluestore . Sets the storage back end used by Ceph. 5.3.1. Using BlueStore To specify the block devices that you want to use as Ceph OSDs, use a variation of the following snippet: Because /dev/nvme0n1 is in a higher performing device class, the example parameter defaults produce three OSDs that run on /dev/sdb , /dev/sdc , and /dev/sdd . The three OSDs use /dev/nvme0n1 as a BlueStore WAL device. The ceph-volume tool does this by using the batch subcommand. The same setup is duplicated for each Ceph storage node and assumes uniform hardware. If the BlueStore WAL data resides on the same disks as the OSDs, then change the parameter defaults: 5.3.2. Referring to devices with persistent names In some nodes, disk paths, such as /dev/sdb and /dev/sdc , may not point to the same block device during reboots. If this is the case with your CephStorage nodes, specify each disk with the /dev/disk/by-path/ symlink to ensure that the block device mapping is consistent throughout deployments: Because you must set the list of OSD devices prior to overcloud deployment, it may not be possible to identify and set the PCI path of disk devices. In this case, gather the /dev/disk/by-path/symlink data for block devices during introspection. In the following example, run the first command to download the introspection data from the undercloud Object Storage service (swift) for the server b08-h03-r620-hci and saves the data in a file called b08-h03-r620-hci.json . Run the second command to grep for "by-path". The output of this command contains the unique /dev/disk/by-path values that you can use to identify disks. For more information about naming conventions for storage devices, see Overview of persistent naming attributes in the Managing storage devices guide. For details about each journaling scenario and disk mapping for containerized Ceph Storage, see the OSD Scenarios section of the project documentation for ceph-ansible . 5.4. Assigning custom attributes to different Ceph pools By default, Ceph pools created with director have the same number of placement groups ( pg_num and pgp_num ) and sizes. You can use either method in Chapter 5, Customizing the Ceph Storage cluster to override these settings globally; that is, doing so applies the same values for all pools. You can also apply different attributes to each Ceph pool. 
To do so, use the CephPools parameter: Replace POOL with the name of the pool that you want to configure and the pg_num setting to indicate the number of placement groups. This overrides the default pg_num for the specified pool. If you use the CephPools parameter, you must also specify the application type. The application type for Compute, Block Storage, and Image Storage should be rbd , as shown in the examples, but depending on what the pool is used for, you might need to specify a different application type. For example, the application type for the gnocchi metrics pool is openstack_gnocchi . For more information, see Enable Application in the Storage Strategies Guide . If you do not use the CephPools parameter, director sets the appropriate application type automatically, but only for the default pool list. You can also create new custom pools through the CephPools parameter. For example, to add a pool called custompool : This creates a new custom pool in addition to the default pools. Tip For typical pool configurations of common Ceph use cases, see the Ceph Placement Groups (PGs) per Pool Calculator . This calculator is normally used to generate the commands for manually configuring your Ceph pools. In this deployment, the director will configure the pools based on your specifications. Warning Red Hat Ceph Storage 3 (Luminous) introduced a hard limit on the maximum number of PGs an OSD can have, which is 200 by default. Do not override this parameter beyond 200. If there is a problem because the Ceph PG number exceeds the maximum, adjust the pg_num per pool to address the problem, not the mon_max_pg_per_osd . 5.5. Mapping the disk layout to non-homogeneous Ceph Storage nodes By default, all nodes of a role that host Ceph OSDs (indicated by the OS::TripleO::Services::CephOSD service in roles_data.yaml ), for example CephStorage or ComputeHCI nodes, use the global devices and dedicated_devices lists set in Section 5.3, "Mapping the Ceph Storage node disk layout" . This assumes that all of these servers have homogeneous hardware. If a subset of these servers do not have homogeneous hardware, then director needs to be aware that each of these servers has different devices and dedicated_devices lists. This is known as a node-specific disk configuration . To pass a node-specific disk configuration to director, you must pass a heat environment file, such as node-spec-overrides.yaml , to the openstack overcloud deploy command and the file content must identify each server by a machine unique UUID and a list of local variables to override the global variables. You can extract the machine unique UUID for each individual server or from the Ironic database. To locate the UUID for an individual server, log in to the server and enter the following command: To extract the UUID from the Ironic database, enter the following command on the undercloud: Warning If the undercloud.conf does not have inspection_extras = true before undercloud installation or upgrade and introspection, then the machine unique UUID is not in the Ironic database. Important The machine unique UUID is not the Ironic UUID. A valid node-spec-overrides.yaml file might look like the following: All lines after the first two lines must be valid JSON. An easy way to verify that the JSON is valid is to use the jq command: Remove the first two lines ( parameter_defaults: and NodeDataLookup: ) from the file temporarily. Enter cat node-spec-overrides.yaml | jq . 
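For example, using the single-node entry shown above with the first two lines removed, the check might look like the following; this is an illustrative sketch, and jq pretty-prints the object when the JSON is valid or reports a parse error and a non-zero exit status when it is not.
# Pretty-print the embedded JSON to confirm it parses
cat node-spec-overrides.yaml | jq .
{
  "32E87B4C-C4A7-418E-865B-191684A6883B": {
    "devices": [
      "/dev/sdc"
    ]
  }
}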
As the node-spec-overrides.yaml file grows, jq might also be used to ensure that the embedded JSON is valid. For example, because the devices and dedicated_devices lists must be the same length, use the following command to verify that they are the same length before you start the deployment. In the above example, the node-spec-c05-h17-h21-h25-6048r.yaml file has three servers in rack c05 in which slots h17, h21, and h25 are missing disks. A more complicated example is included at the end of this section. After the JSON has been validated, add back the two lines that make it a valid environment YAML file ( parameter_defaults: and NodeDataLookup: ) and include it with -e in the deployment. In the example below, the updated heat environment file uses NodeDataLookup for Ceph deployment. All of the servers had a devices list with 35 disks except one, which had a disk missing. This environment file overrides the default devices list for only that single node and gives it the list of 34 disks it must use instead of the global list. 5.6. Increasing the restart delay for large Ceph clusters During deployment, Ceph services, such as OSDs and Monitors, are restarted and the deployment does not continue until the service is running again. Ansible waits 15 seconds (the delay) and checks 5 times for the service to start (the retries). If the service does not restart, the deployment stops so the operator can intervene. Depending on the size of the Ceph cluster, you may need to increase the retry or delay values. The exact names of these parameters and their defaults are as follows: Procedure Update the CephAnsibleExtraConfig parameter to change the default delay and retry values: This example makes the cluster check 30 times and wait 40 seconds between each check for the Ceph OSDs, and check 20 times and wait 10 seconds between each check for the Ceph MONs. To incorporate the changes, pass the updated yaml file with -e using openstack overcloud deploy . 5.7. Overriding Ansible environment variables The Red Hat OpenStack Platform Workflow service (mistral) uses Ansible to configure Ceph Storage, but you can customize the Ansible environment by using Ansible environment variables. Procedure To override an ANSIBLE_* environment variable, use the CephAnsibleEnvironmentVariables heat template parameter. This example configuration increases the number of forks and SSH retries: For more information about Ansible environment variables, see Ansible Configuration Settings . For more information about how to customize your Ceph Storage cluster, see Customizing the Ceph Storage cluster . | [
"sudo subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms",
"sudo dnf install ceph-ansible",
"parameter_defaults: CephConfigOverrides: section: KEY:VALUE",
"parameter_defaults: CephConfigOverrides: global: max_open_files: 131072 osd: osd_scrub_during_recovery: false",
"[global] max_open_files = 131072 [osd] osd_scrub_during_recovery = false",
"parameter_defaults: CephAnsibleExtraConfig: journal_size: 40960",
"parameter_defaults: CephAnsibleDisksConfig: devices: - /dev/sdb - /dev/sdc - /dev/sdd - /dev/nvme0n1 osd_scenario: lvm osd_objectstore: bluestore",
"parameter_defaults: CephAnsibleDisksConfig: devices: - /dev/sdb - /dev/sdc - /dev/sdd osd_scenario: lvm osd_objectstore: bluestore",
"parameter_defaults: CephAnsibleDisksConfig: devices: - /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:10:0 - /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:11:0 dedicated_devices - /dev/nvme0n1 - /dev/nvme0n1",
"(undercloud) [stack@b08-h02-r620 ironic]USD openstack baremetal introspection data save b08-h03-r620-hci | jq . > b08-h03-r620-hci.json (undercloud) [stack@b08-h02-r620 ironic]USD grep by-path b08-h03-r620-hci.json \"by_path\": \"/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0\", \"by_path\": \"/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:1:0\", \"by_path\": \"/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:3:0\", \"by_path\": \"/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:4:0\", \"by_path\": \"/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:5:0\", \"by_path\": \"/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:6:0\", \"by_path\": \"/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:7:0\", \"by_path\": \"/dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0\",",
"parameter_defaults: CephPools: - name: POOL pg_num: 128 application: rbd",
"parameter_defaults: CephPools: - name: custompool pg_num: 128 application: rbd",
"dmidecode -s system-uuid",
"openstack baremetal introspection data save NODE-ID | jq .extra.system.product.uuid",
"parameter_defaults: NodeDataLookup: {\"32E87B4C-C4A7-418E-865B-191684A6883B\": {\"devices\": [\"/dev/sdc\"]}}",
"(undercloud) [stack@b08-h02-r620 tht]USD cat node-spec-c05-h17-h21-h25-6048r.yaml | jq '.[] | .devices | length' 33 30 33 (undercloud) [stack@b08-h02-r620 tht]USD cat node-spec-c05-h17-h21-h25-6048r.yaml | jq '.[] | .dedicated_devices | length' 33 30 33 (undercloud) [stack@b08-h02-r620 tht]USD",
"parameter_defaults: # c05-h01-6048r is missing scsi-0:2:35:0 (00000000-0000-0000-0000-0CC47A6EFD0C) NodeDataLookup: { \"00000000-0000-0000-0000-0CC47A6EFD0C\": { \"devices\": [ \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:1:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:32:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:2:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:3:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:4:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:5:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:6:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:33:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:7:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:8:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:34:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:9:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:10:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:11:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:12:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:13:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:14:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:15:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:16:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:17:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:18:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:19:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:20:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:21:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:22:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:23:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:24:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:25:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:26:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:27:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:28:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:29:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:30:0\", \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:31:0\" ], \"dedicated_devices\": [ \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:81:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", 
\"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\", \"/dev/disk/by-path/pci-0000:84:00.0-nvme-1\" ] } }",
"health_mon_check_retries: 5 health_mon_check_delay: 15 health_osd_check_retries: 5 health_osd_check_delay: 15",
"parameter_defaults: CephAnsibleExtraConfig: health_osd_check_delay: 40 health_osd_check_retries: 30 health_mon_check_delay: 20 health_mon_check_retries: 10",
"parameter_defaults: CephAnsibleEnvironmentVariables: ANSIBLE_SSH_RETRIES: '6' DEFAULT_FORKS: '35'"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/deploying_an_overcloud_with_containerized_red_hat_ceph/configuring_ceph_storage_cluster_settings |
Chapter 10. Hardware Enablement | Chapter 10. Hardware Enablement genwqe-tools rebased to version 4.0.20 on IBM POWER The genwqe-tools packages have been rebased to version 4.0.20 for IBM POWER architectures. This version provides a number of bug fixes and enhancements over the version, most notably: CompressBound has been fixed Debugging tools have been added The genwqe_cksum tool has been fixed Missing manual pages in the spec file have been fixed New compiler warnings have been fixed Z_STREAM_END detection circumvention has been improved (BZ#1521050) Memory Mode for Optane DC Persistent Memory technology is fully supported Intel(R) Optane DC Persistent Memory storage devices provide data center-class persistent memory technology, which can significantly increase transaction throughput. To use the Memory Mode technology, your system does not require any special drivers or specific certification. Memory Mode is transparent to the operating system. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/new_features_hardware_enablement |
12.5. Creating Custom Notifications for the CA | It might be possible to create custom notification functions to handle other PKI operations, such as token enrollments, by editing existing email notification plug-ins for the Certificate System CA. Before attempting to create or use custom notification plug-ins, contact Red Hat support services. | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/creating-custom-notifications
Part III. Technology Previews | Part III. Technology Previews This part provides an overview of Technology Previews introduced or updated in Red Hat Enterprise Linux 7.3. For information on Red Hat scope of support for Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/technology-previews |
Chapter 1. Introducing .NET 8.0 | Chapter 1. Introducing .NET 8.0 .NET is a general-purpose development platform featuring automatic memory management and modern programming languages. Using .NET, you can build high-quality applications efficiently. .NET is available on Red Hat Enterprise Linux (RHEL) and OpenShift Container Platform through certified containers. .NET offers the following features: The ability to follow a microservices-based approach, where some components are built with .NET and others with Java, but all can run on a common, supported platform on RHEL and OpenShift Container Platform. The capacity to more easily develop new .NET workloads on Microsoft Windows. You can deploy and run your applications on either RHEL or Windows Server. A heterogeneous data center, where the underlying infrastructure is capable of running .NET applications without having to rely solely on Windows Server. .NET 8.0 is supported on RHEL 8.9 and later, RHEL 9.3 and later, and supported OpenShift Container Platform versions. | null | https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_rhel_8/introducing-dotnet_getting-started-with-dotnet-on-rhel-8 |
Chapter 8. Synchronizing content between Satellite Servers | Chapter 8. Synchronizing content between Satellite Servers In a Satellite setup with multiple Satellite Servers, you can use Inter-Satellite Synchronization (ISS) to synchronize content from one upstream server to one or more downstream servers. There are two possible ISS configurations of Satellite, depending on how you deployed your infrastructure. Configure your Satellite for ISS as appropriate for your scenario. For more information, see Inter-Satellite Synchronization scenarios in Installing Satellite Server in a disconnected network environment . To change the Pulp export path, see Hammer content export fails with "Path '/the/path' is not an allowed export path" in the Red Hat Knowledgebase . 8.1. Content synchronization by using export and import There are multiple approaches for synchronizing content by using the export and import workflow: You employ the upstream Satellite Server as a content store, which means that you sync the whole Library rather than content view versions. This approach offers the simplest export/import workflow. In such case, you can manage the content view versions downstream. For more information, see Section 8.1.1, "Using an upstream Satellite Server as a content store" . You use the upstream Satellite Server to sync content view versions. This approach offers more control over what content is synced between Satellite Servers. For more information, see Section 8.1.2, "Using an upstream Satellite Server to synchronize content view versions" . You sync a single repository. This can be useful if you use the content-view syncing approach, but you want to sync an additional repository without adding it to an existing content view. For more information, see Section 8.1.3, "Synchronizing a single repository" . Note Synchronizing content by using export and import requires the same major, minor, and patch version of Satellite on both the downstream and upstream Satellite Servers. When you are unable to match upstream and downstream Satellite versions, you can use: Syncable exports and imports. Inter-Satellite Synchronization (ISS) with your upstream Satellite connected to the Internet and your downstream Satellite connected to the upstream Satellite. 8.1.1. Using an upstream Satellite Server as a content store In this scenario, you use the upstream Satellite Server as a content store for updates rather than to manage content. You use the downstream Satellite Server to manage content for all infrastructure behind the isolated network. You export the Library content from the upstream Satellite Server and import it into the downstream Satellite Server. On the upstream Satellite Server Ensure that repositories are using the Immediate download policy in one of the following ways: For existing repositories using On Demand , change their download policy on the repository details page to Immediate . For new repositories, ensure that the Default Red Hat Repository download policy setting is set to Immediate before enabling Red Hat repositories, and that the Default download policy is set to Immediate for custom repositories. For more information, see Section 4.9, "Download policies overview" . Enable the content that you want to synchronize. For more information, see Section 4.6, "Enabling Red Hat repositories" . If you want to sync custom content, first create a custom product and then synchronize repositories . 
Synchronize the enabled content: On the first export, perform a complete Library export so that all the synchronized content is exported. This generates content archives that you can later import into one or more downstream Satellite Servers. For more information on performing a complete Library export, see Section 8.3, "Exporting the Library environment" . Export all future updates on the upstream Satellite Server incrementally. This generates leaner content archives that contain only a recent set of updates. For example, if you enable and synchronize a new repository, the exported content archive contains content only from the newly enabled repository. For more information on performing an incremental Library export, see Section 8.6, "Exporting the Library environment incrementally" . On the downstream Satellite Server Bring the content exported from the upstream Satellite Server over to the hard disk. Place it inside a directory under /var/lib/pulp/imports . Import the content to an organization using the procedure outlined in Section 8.15, "Importing into the Library environment" . You can then manage content using content views or lifecycle environments as you require. 8.1.2. Using an upstream Satellite Server to synchronize content view versions In this scenario, you use the upstream Satellite Server not only as a content store, but also to synchronize content for all infrastructure behind the isolated network. You curate updates coming from the CDN into content views and lifecycle environments. Once you promote content to a designated lifecycle environment, you can export the content from the upstream Satellite Server and import it into the downstream Satellite Server. On the upstream Satellite Server Ensure that repositories are using the Immediate download policy in one of the following ways: For existing repositories using On Demand , change their download policy on the repository details page to Immediate . For new repositories, ensure that the Default Red Hat Repository download policy setting is set to Immediate before enabling Red Hat repositories, and that the Default download policy is set to Immediate for custom repositories. For more information, see Section 4.9, "Download policies overview" . Enable the content that you want to synchronize. For more information, see Section 4.6, "Enabling Red Hat repositories" . If you want to sync custom content, first create a custom product and then synchronize repositories . Synchronize the enabled content: For the first export, perform a complete version export on the content view version that you want to export. For more information see, Section 8.7, "Exporting a content view version" . This generates content archives that you can import into one or more downstream Satellite Servers. Export all future updates in the connected Satellite Servers incrementally. This generates leaner content archives that contain changes only from the recent set of updates. For example, if your content view has a new repository, this exported content archive contains only the latest changes. For more information, see Section 8.9, "Exporting a content view version incrementally" . When you have new content, republish the content views that include this content before exporting the increment. For more information, see Chapter 7, Managing content views . This creates a new content view version with the appropriate content to export. On the downstream Satellite Server Bring the content exported from the upstream Satellite Server over to the hard disk. 
Place it inside a directory under /var/lib/pulp/imports . Import the content to the organization that you want. For more information, see Section 8.17, "Importing a content view version" . This will create a content view version from the exported content archives and then import content appropriately. 8.1.3. Synchronizing a single repository In this scenario, you export and import a single repository. On the upstream Satellite Server Ensure that the repository is using the Immediate download policy in one of the following ways: For existing repositories using On Demand , change their download policy on the repository details page to Immediate . For new repositories, ensure that the Default Red Hat Repository download policy setting is set to Immediate before enabling Red Hat repositories, and that the Default download policy is set to Immediate for custom repositories. For more information, see Section 4.9, "Download policies overview" . Enable the content that you want to synchronize. For more information, see Section 4.6, "Enabling Red Hat repositories" . If you want to sync custom content, first create a custom product and then synchronize product repositories . Synchronize the enabled content: On the first export, perform a complete repository export so that all the synchronized content is exported. This generates content archives that you can later import into one or more downstream Satellite Servers. For more information on performing a complete repository export, see Section 8.10, "Exporting a repository" . Export all future updates on the upstream Satellite Server incrementally. This generates leaner content archives that contain only a recent set of updates. For more information on performing an incremental repository export, see Section 8.12, "Exporting a repository incrementally" . On the downstream Satellite Server Bring the content exported from the upstream Satellite Server over to the hard disk. Place it inside a directory under /var/lib/pulp/imports . Import the content to an organization. See Section 8.19, "Importing a repository" . You can then manage content using content views or lifecycle environments as you require. 8.2. Synchronizing a custom repository When using Inter-Satellite Synchronization Network Sync, Red Hat repositories are configured automatically, but custom repositories are not. Use this procedure to synchronize content from a custom repository on a connected Satellite Server to a disconnected Satellite Server through Inter-Satellite Synchronization (ISS) Network Sync. Follow the procedure for the connected Satellite Server before completing the procedure for the disconnected Satellite Server. Connected Satellite Server In the Satellite web UI, navigate to Content > Products . Click on the custom product. Click on the custom repository. Copy the Published At: URL. Continue with the procedure on disconnected Satellite Server. Disconnected Satellite Server Download the katello-server-ca.crt file from the connected Satellite Server: Create an SSL Content Credential with the contents of katello-server-ca.crt . For more information on creating an SSL Content Credential, see Section 4.3, "Importing custom SSL certificates" . In the Satellite web UI, navigate to Content > Products . Create your custom product with the following: Upstream URL : Paste the link that you copied earlier. SSL CA Cert : Select the SSL certificate that was transferred from your connected Satellite Server. 
For more information on creating a custom product, see Section 4.4, "Creating a custom product" . After completing these steps, the custom repository is properly configured on the disconnected Satellite Server. 8.3. Exporting the Library environment You can export contents of all Yum repositories in the Library environment of an organization to an archive file from Satellite Server and use this archive file to create the same repositories in another Satellite Server or in another Satellite Server organization. The exported archive file contains the following data: A JSON file containing content view version metadata. An archive file containing all the repositories from the Library environment of the organization. Satellite Server exports only RPM, Kickstart files, and Docker content included in the Library environment. Prerequisites Ensure that the export directory has free storage space to accommodate the export. Ensure that the /var/lib/pulp/exports directory has free storage space equivalent to the size of the repositories being exported for temporary files created during the export process. Ensure that you set download policy to Immediate for all repositories within the Library lifecycle environment you export. For more information, see Section 4.9, "Download policies overview" . Ensure that you synchronize products that you export to the required date. Procedure Use the organization name or ID to export. Verify that the archive containing the exported version of a content view is located in the export directory: You need all three files, the tar.gz , the toc.json , and the metadata.json file to be able to import. A new content view Export-Library is created in the organization. This content view contains all the repositories belonging to this organization. A new version of this content view is published and exported automatically. Export with chunking In many cases the exported archive content may be several gigabytes in size. If you want to split it into smaller sizes or chunks. You can use the --chunk-size-gb flag directly in the export command to handle this. In the following example, you can see how to specify --chunk-size-gb=2 to split the archives in 2 GB chunks. 8.4. Exporting the library environment in a syncable format You can export contents of all yum repositories, Kickstart repositories and file repositories in the Library environment of an organization to a syncable format that you can use to create your custom CDN and synchronize the content from the custom CDN over HTTP/HTTPS. You can then serve the generated content on a local web server and synchronize it on the importing Satellite Server or in another Satellite Server organization. You can use the generated content to create the same repository in another Satellite Server or in another Satellite Server organization by using content import. On import of the exported archive, a regular content view is created or updated on your importing Satellite Server. For more information, see Section 8.17, "Importing a content view version" . You can export the following content in the syncable format from Satellite Server: Yum repositories Kickstart repositories File repositories You cannot export Ansible, Deb, or Docker content. The export contains directories with the packages, listing files, and metadata of the repository in Yum format that can be used to synchronize in the importing Satellite Server. 
Prerequisites Ensure that you set the download policy to Immediate for all repositories within the Library lifecycle environment you export. For more information, see Section 4.9, "Download policies overview" . Ensure that you synchronize products you export to the required date. Ensure that the user exporting the content has the Content Exporter role. Procedure Use the organization name or ID to export: Optional: Verify that the exported content is located in the export directory: 8.5. Importing syncable exports Procedure Use the organization name or ID to import syncable exports: Note Syncable exports must be located in one of your ALLOWED_IMPORT_PATHS as specified in /etc/pulp/settings.py . By default, this includes /var/lib/pulp/imports . 8.6. Exporting the Library environment incrementally Exporting Library content can be a very expensive operation in terms of system resources. Organizations that have multiple Red Hat Enterprise Linux trees can occupy several gigabytes of space on Satellite Server. In such cases, you can create an incremental export which contains only pieces of content that have changed since the last export. Incremental exports typically result in smaller archive files than the full exports. The example below shows incremental export of all repositories in the organization's Library. Procedure Create an incremental export: If you want to create a syncable export, add --format=syncable . By default, Satellite creates an importable export. steps Optional: View the exported data: 8.7. Exporting a content view version You can export a version of a content view to an archive file from Satellite Server and use this archive file to create the same content view version on another Satellite Server or on another Satellite Server organization. Satellite exports composite content views as normal content views. The composite nature is not retained. On importing the exported archive, a regular content view is created or updated on your downstream Satellite Server. The exported archive file contains the following data: A JSON file containing content view version metadata An archive file containing all the repositories included into the content view version You can only export Yum repositories, Kickstart files, and Docker content added to a version of a content view. Satellite does not export the following content: Content view definitions and metadata, such as package filters. Prerequisites To export a content view, ensure that Satellite Server where you want to export meets the following conditions: Ensure that the export directory has free storage space to accommodate the export. Ensure that the /var/lib/pulp/exports directory has free storage space equivalent to the size of the repositories being exported for temporary files created during the export process. Ensure that you set download policy to Immediate for all repositories within the content view you export. For more information, see Section 4.9, "Download policies overview" . Ensure that you synchronize products that you export to the required date. Ensure that the user exporting the content has the Content Exporter role. To export a content view version List versions of the content view that are available for export: Export a content view version Get the version number of desired version. The following example targets version 1.0 for export. 
Verify that the archive containing the exported version of a content view is located in the export directory: You require all three files, for example, the tar.gz archive file, the toc.json and metadata.json to import the content successfully. Export with chunking In many cases, the exported archive content can be several gigabytes in size. You might want to split it into smaller sizes or chunks. You can use the --chunk-size-gb option with the hammer content-export command to handle this. The following example uses --chunk-size-gb=2 to split the archives into 2 GB chunks. 8.8. Exporting a content view version in a syncable format You can export a version of a content view to a syncable format that you can use to create your custom CDN. After you have exported the content view, you can do either of the following: Synchronize the content from your custom CDN over HTTP/HTTPS. Import the content using hammer content-import . Note that this requires both the Export and Import servers to run Satellite 6.16. You can then serve the generated content using a local web server on the importing Satellite Server or in another Satellite Server organization. You cannot directly import Syncable Format exports. Instead, on the importing Satellite Server you must: Copy the generated content to an HTTP/HTTPS web server that is accessible to importing Satellite Server. Update your CDN configuration to Custom CDN . Set the CDN URL to point to the web server. Optional: Set an SSL/TLS CA Credential if the web server requires it. Enable the repository. Synchronize the repository. You can export the following content in a syncable format from Satellite Server: Yum repositories Kickstart repositories File repositories You cannot export Ansible, Deb, or Docker content. The export contains directories with the packages, listing files, and metadata of the repository in Yum format that can be used to synchronize in the importing Satellite Server. Prerequisites Ensure that you set the download policy to Immediate for all repositories within the content view you export. For more information, see Section 4.9, "Download policies overview" . Ensure that you synchronize products you export to the required date. Ensure that the user exporting the content has the Content Exporter role. To export a content view version List versions of the content view that are available for export: Procedure Get the version number of the desired version. The following example targets version 1.0 for export: Optional: Verify that the exported content is located in the export directory: 8.9. Exporting a content view version incrementally Exporting complete content view versions can be a very expensive operation in terms of system resources. Content view versions that have multiple Red Hat Enterprise Linux trees can occupy several gigabytes of space on Satellite Server. In such cases, you can create an incremental export which contains only pieces of content that have changed since the last export. Incremental exports typically result in smaller archive files than the full exports. Procedure Create an incremental export: If you want to create a syncable export, add --format=syncable . By default, Satellite creates an importable export. steps Optional: View the exported content view: You can import your exported content view version into Satellite Server. For more information, see Section 8.17, "Importing a content view version" . 8.10.
Exporting a repository You can export the content of a repository in the Library environment of an organization from Satellite Server. You can use this archive file to create the same repository in another Satellite Server or in another Satellite Server organization. You can export the following content from Satellite Server: Ansible repositories Kickstart repositories Yum repositories File repositories Docker content The export contains the following data: Two JSON files containing repository metadata. One or more archive files containing the contents of the repository from the Library environment of the organization. You need all the files, tar.gz , toc.json and metadata.json , to be able to import. Prerequisites Ensure that the export directory has enough free storage space to accommodate the export. Ensure that the /var/lib/pulp/exports directory has enough free storage space equivalent to the size of all repositories that you want to export. Ensure that you set download policy to Immediate for the repository within the Library lifecycle environment you export. For more information, see Section 4.9, "Download policies overview" . Ensure that you synchronize products that you export to the required date. Procedure Export a repository: Note The size of the exported archive depends on the number and size of the packages within the repository. If you want to split the exported archive into chunks, export your repository using the --chunk-size-gb argument to limit the size by an integer value in gigabytes, for example --chunk-size-gb=2 . Optional: Verify that the exported archive is located in the export directory: 8.11. Exporting a repository in a syncable format You can export the content of a repository in the Library environment of an organization to a syncable format that you can use to create your custom CDN and synchronize the content from the custom CDN over HTTP/HTTPS. You can then serve the generated content using a local web server on the importing Satellite Server or in another Satellite Server organization. You cannot directly import Syncable Format exports. Instead, on the importing Satellite Server you must: Copy the generated content to an HTTP/HTTPS web server that is accessible to importing Satellite Server. Update your CDN configuration to Custom CDN . Set the CDN URL to point to the web server. Optional: Set an SSL/TLS CA Credential if the web server requires it. Enable the repository. Synchronize the repository. You can export the following content in a syncable format from Satellite Server: Yum repositories Kickstart repositories File repositories You cannot export Ansible, Deb, or Docker content. The export contains directories with the packages, listing files, and metadata of the repository in Yum format that can be used to synchronize in the importing Satellite Server. Prerequisites Ensure that you set the download policy to Immediate for the repository within the Library lifecycle environment you export. For more information, see Section 4.9, "Download policies overview" . Procedure Export a repository using the repository name or ID: Optional: Verify that the exported content is located in the export directory: 8.12. Exporting a repository incrementally Exporting a repository can be a very expensive operation in terms of system resources. A typical Red Hat Enterprise Linux tree may occupy several gigabytes of space on Satellite Server. In such cases, you can use Incremental Export to export only pieces of content that changed since the last export.
Incremental exports typically result in smaller archive files than the full exports. The example below shows incremental export of a repository in the Library lifecycle environment. Procedure Create an incremental export: Optional: View the exported data: 8.13. Exporting a repository incrementally in a syncable format Exporting a repository can be a very expensive operation in terms of system resources. A typical Red Hat Enterprise Linux tree may occupy several gigabytes of space on Satellite Server. In such cases, you can use Incremental Export to export only pieces of content that changed since the export. Incremental exports typically result in smaller archive files than full exports. The procedure below shows an incremental export of a repository in the Library lifecycle environment. Procedure Create an incremental export: Optional: View the exported data: 8.14. Keeping track of your exports Satellite keeps records of all exports. Each time you export content on the upstream Satellite Server, the export is recorded and maintained for future querying. You can use the records to organize and manage your exports, which is useful especially when exporting incrementally. When exporting content from the upstream Satellite Server for several downstream Satellite Servers, you can also keep track of content exported for specific servers. This helps you track which content was exported and to where. Use the --destination-server argument during export to indicate the target server. This option is available for all content-export operations. Tracking destinations of Library exports Specify the destination server when exporting the Library: Tracking destinations of content view exports Specify the destination server when exporting a content view version: Querying export records List content exports using the following command: 8.15. Importing into the Library environment You can import exported Library content into the Library lifecycle environment of an organization on another Satellite Server. For more information about exporting contents from the Library environment, see Section 8.3, "Exporting the Library environment" . Prerequisites The exported files must be in a directory under /var/lib/pulp/imports . If there are any Red Hat repositories in the exported content, the importing organization's manifest must contain subscriptions for the products contained within the export. The user importing the content must have the Content Importer Role. Procedure Copy the exported files to a subdirectory of /var/lib/pulp/imports on Satellite Server where you want to import. Set the ownership of the import directory and its contents to pulp:pulp . Verify that the ownership is set correctly: Identify the Organization that you wish to import into. To import the Library content to Satellite Server, enter the following command: Note you must enter the full path /var/lib/pulp/imports/ My_Exported_Library_Dir . Relative paths do not work. To verify that you imported the Library content, check the contents of the product and repositories. A new content view called Import-Library is created in the target organization. This content view is used to facilitate the Library content import. By default, this content view is not shown in the Satellite web UI. Import-Library is not meant to be assigned directly to hosts. Instead, assign your hosts to Default Organization View or another content view as you would normally. The importing Satellite Server extracts the /var/lib/pulp/imports directory to /var/lib/pulp/ . 
You can empty the /var/lib/pulp/imports directory after a successful import. 8.16. Importing into the Library environment from a web server You can import exported Library content directly from a web server into the Library lifecycle environment of an organization on another Satellite Server. For more information about exporting contents from the Library environment, see Section 8.3, "Exporting the Library environment" . Prerequisites The exported files must be in a syncable format. The exported files must be accessible through HTTP/HTTPS. If there are any Red Hat repositories in the exported content, the importing organization's manifest must contain subscriptions for the products contained within the export. The user importing the content view version must have the Content Importer role. Procedure Identify the Organization that you wish to import into. To import the Library content to Satellite Server, enter the following command: A new content view called Import-Library is created in the target organization. This content view is used to facilitate the Library content import. By default, this content view is not shown in the Satellite web UI. Import-Library is not meant to be assigned directly to hosts. Instead, assign your hosts to Default Organization View or another content view. 8.17. Importing a content view version You can import an exported content view version to create a version with the same content in an organization on another Satellite Server. For more information about exporting a content view version, see Section 8.7, "Exporting a content view version" . When you import a content view version, it has the same major and minor version numbers and contains the same repositories with the same packages and errata. Custom repositories, products and content views are automatically created if they do not exist in the importing organization. Prerequisites The exported files must be in a directory under /var/lib/pulp/imports . If there are any Red Hat repositories in the exported content, the importing organization's manifest must contain subscriptions for the products contained within the export. The user importing the content view version must have the Content Importer Role. Procedure Copy the exported files to a subdirectory of /var/lib/pulp/imports on Satellite Server where you want to import. Set the ownership of the import directory and its contents to pulp:pulp . Verify that the ownership is set correctly: To import the content view version to Satellite Server, enter the following command: Note that you must enter the full path /var/lib/pulp/imports/ My_Exported_Version_Dir . Relative paths do not work. To verify that you imported the content view version successfully, list content view versions for your organization: The importing Satellite Server extracts the /var/lib/pulp/imports directory to /var/lib/pulp/ . You can empty the /var/lib/pulp/imports directory after a successful import. 8.18. Importing a content view version from a web server You can import an exported content view version directly from a web server to create a version with the same content in an organization on another Satellite Server. For more information about exporting a content view version, see Section 8.7, "Exporting a content view version" . When you import a content view version, it has the same major and minor version numbers and contains the same repositories with the same packages and errata. 
Custom repositories, products, and content views are automatically created if they do not exist in the importing organization. Prerequisites The exported files must be in a syncable format. The exported files must be accessible through HTTP/HTTPS. If there are any Red Hat repositories in the exported content, the importing organization's manifest must contain subscriptions for the products contained within the export. The user importing the content view version must have the Content Importer role. Procedure Import the content view version into Satellite Server: 8.19. Importing a repository You can import an exported repository into an organization on another Satellite Server. For more information about exporting content of a repository, see Section 8.10, "Exporting a repository" . Prerequisites The export files must be in a directory under /var/lib/pulp/imports . If the export contains any Red Hat repositories, the manifest of the importing organization must contain subscriptions for the products contained within the export. The user importing the content must have the Content Importer Role. Procedure Copy the exported files to a subdirectory of /var/lib/pulp/imports on Satellite Server where you want to import. Set the ownership of the import directory and its contents to pulp:pulp . Verify that the ownership is set correctly: Identify the Organization that you wish to import into. To import the repository content to Satellite Server, enter the following command: Note that you must enter the full path /var/lib/pulp/imports/ My_Exported_Repo_Dir . Relative paths do not work. To verify that you imported the repository, check the contents of the product and repository. The importing Satellite Server extracts the /var/lib/pulp/imports directory to /var/lib/pulp/ . You can empty the /var/lib/pulp/imports directory after a successful import. 8.20. Importing a repository from a web server You can import an exported repository directly from a web server into an organization on another Satellite Server. For more information about exporting the content of a repository, see Section 8.10, "Exporting a repository" . Prerequisites The exported files must be in a syncable format. The exported files must be accessible through HTTP/HTTPS. If the export contains any Red Hat repositories, the manifest of the importing organization must contain subscriptions for the products contained within the export. The user importing the content view version must have the Content Importer Role. Procedure Select the organization into which you want to import. To import the repository to Satellite Server, enter the following command: 8.21. Exporting and importing content using Hammer CLI cheat sheet Table 8.1. 
Export
Intent: Fully export an Organization's Library
Command: hammer content-export complete library --organization=" My_Organization "
Intent: Incrementally export an Organization's Library (assuming you have exported something previously)
Command: hammer content-export incremental library --organization=" My_Organization "
Intent: Fully export a content view version
Command: hammer content-export complete version --content-view=" My_Content_View " --version=1.0 --organization=" My_Organization "
Intent: Export a content view version promoted to the Dev Environment
Command: hammer content-export complete version --content-view=" My_Content_View " --organization=" My_Organization " --lifecycle-environment="Dev"
Intent: Export a content view in smaller chunks (2-GB slabs)
Command: hammer content-export complete version --content-view=" My_Content_View " --version=1.0 --organization=" My_Organization " --chunk-size-gb=2
Intent: Incrementally export a content view version (assuming you have exported something previously)
Command: hammer content-export incremental version --content-view=" My_Content_View " --version=2.0 --organization=" My_Organization "
Intent: Fully export a Repository
Command: hammer content-export complete repository --product=" My_Product " --name=" My_Repository " --organization=" My_Organization "
Intent: Incrementally export a Repository (assuming you have exported something previously)
Command: hammer content-export incremental repository --product=" My_Product " --name=" My_Repository " --organization=" My_Organization "
Intent: List exports
Command: hammer content-export list --content-view=" My_Content_View " --organization=" My_Organization "
Table 8.2. Import
Intent: Import into an Organization's Library
Command: hammer content-import library --organization=" My_Organization " --path="/var/lib/pulp/imports/ My_Exported_Library_Dir "
Intent: Import to a content view version
Command: hammer content-import version --organization=" My_Organization " --path="/var/lib/pulp/imports/ My_Exported_Version_Dir "
Intent: Import a Repository
Command: hammer content-import repository --organization=" My_Organization " --path="/var/lib/pulp/imports/ My_Exported_Repo_Dir " | [
"curl http://satellite.example.com/pub/katello-server-ca.crt",
"hammer content-export complete library --organization=\" My_Organization \"",
"ls -lh /var/lib/pulp/exports/ My_Organization /Export-Library/1.0/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 03:35 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 03:35 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 1 pulp pulp 443 Mar 2 03:35 metadata.json",
"hammer content-export complete library --chunk-size-gb=2 --organization=\" My_Organization \" Generated /var/lib/pulp/exports/ My_Organization /Export-Library/2.0/2021-03-02T04-01-25-00-00/metadata.json ls -lh /var/lib/pulp/exports/ My_Organization /Export-Library/2.0/2021-03-02T04-01-25-00-00/",
"hammer content-export complete library --organization=\" My_Organization \" --format=syncable",
"du -sh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /1.0/2021-03-02T03-35-24-00-00",
"hammer content-import library --organization=\" My_Organization \" --path=\" My_Path_To_Syncable_Export \"",
"hammer content-export incremental library --organization=\" My_Organization \"",
"find /var/lib/pulp/exports/ My_Organization /Export-Library/",
"hammer content-view version list --content-view=\" My_Content_View \" --organization=\" My_Organization \" ---|----------|---------|-------------|----------------------- ID | NAME | VERSION | DESCRIPTION | LIFECYCLE ENVIRONMENTS ---|----------|---------|-------------|----------------------- 5 | view 3.0 | 3.0 | | Library 4 | view 2.0 | 2.0 | | 3 | view 1.0 | 1.0 | | ---|----------|---------|-------------|----------------------",
"hammer content-export complete version --content-view=\" Content_View_Name \" --version=1.0 --organization=\" My_Organization \"",
"ls -lh /var/lib/pulp/exports/ My_Organization / Content_View_Name /1.0/2021-02-25T18-59-26-00-00/",
"hammer content-export complete version --chunk-size-gb=2 --content-view=\" Content_View_Name \" --organization=\" My_Organization \" --version=1.0 ls -lh /var/lib/pulp/exports/ My_Organization /view/1.0/2021-02-25T21-15-22-00-00/",
"hammer content-view version list --content-view=\" My_Content_View \" --organization=\" My_Organization \"",
"hammer content-export complete version --content-view=\" Content_View_Name \" --version=1.0 --organization=\" My_Organization \" --format=syncable",
"ls -lh /var/lib/pulp/exports/ My_Organization / My_Content_View_Name /1.0/2021-02-25T18-59-26-00-00/",
"hammer content-export incremental version --content-view=\" My_Content_View \" --organization=\" My_Organization \" --version=\" My_Content_View_Version \"",
"find /var/lib/pulp/exports/ My_Organization / My_Exported_Content_View / My_Content_View_Version /",
"hammer content-export complete repository --name=\" My_Repository \" --product=\" My_Product \" --organization=\" My_Organization \"",
"ls -lh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /1.0/2022-09-02T03-35-24-00-00/",
"hammer content-export complete repository --organization=\" My_Organization \" --product=\" My_Product \" --name=\" My_Repository \" --format=syncable",
"du -sh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /1.0/2021-03-02T03-35-24-00-00",
"hammer content-export incremental repository --name=\" My_Repository \" --organization=\" My_Organization \" --product=\" My_Product \"",
"ls -lh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /3.0/2021-03-02T03-35-24-00-00/ total 172K -rw-r--r--. 1 pulp pulp 20M Mar 2 04:22 export-436882d8-de5a-48e9-a30a-17169318f908-20210302_0422.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:22 export-436882d8-de5a-48e9-a30a-17169318f908-20210302_0422-toc.json -rw-r--r--. 1 root root 492 Mar 2 04:22 metadata.json",
"hammer content-export incremental repository --format=syncable --name=\" My_Repository \" --organization=\" My_Organization \" --product=\" My_Product \"",
"find /var/lib/pulp/exports/Default_Organization/ My_Product /2.0/2023-03-09T10-55-48-05-00/ -name \"*.rpm\"",
"hammer content-export complete library --destination-server= My_Downstream_Server_1 --organization=\" My_Organization \" --version=1.0",
"hammer content-export complete version --content-view=\" Content_View_Name \" --destination-server= My_Downstream_Server_1 --organization=\" My_Organization \" --version=1.0",
"hammer content-export list --organization=\" My_Organization \"",
"chown -R pulp:pulp /var/lib/pulp/imports/2021-03-02T03-35-24-00-00",
"ls -lh /var/lib/pulp/imports/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 1 pulp pulp 443 Mar 2 04:29 metadata.json",
"hammer content-import library --organization=\" My_Organization \" --path=/var/lib/pulp/imports/2021-03-02T03-35-24-00-00",
"hammer content-import library --organization=\" My_Organization \" --path=http:// server.example.com /pub/exports/2021-02-25T21-15-22-00-00/",
"chown -R pulp:pulp /var/lib/pulp/imports/2021-02-25T21-15-22-00-00/",
"ls -lh /var/lib/pulp/imports/2021-02-25T21-15-22-00-00/",
"hammer content-import version --organization= My_Organization --path=/var/lib/pulp/imports/2021-02-25T21-15-22-00-00/",
"hammer content-view version list --organization-id= My_Organization_ID",
"hammer content-import version --organization= My_Organization --path=http:// server.example.com /pub/exports/2021-02-25T21-15-22-00-00/",
"chown -R pulp:pulp /var/lib/pulp/imports/2021-03-02T03-35-24-00-00",
"ls -lh /var/lib/pulp/imports/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 1 pulp pulp 443 Mar 2 04:29 metadata.json",
"hammer content-import repository --organization=\" My_Organization \" --path=/var/lib/pulp/imports/ 2021-03-02T03-35-24-00-00",
"hammer content-import repository --organization=\" My_Organization \" --path=http:// server.example.com /pub/exports/2021-02-25T21-15-22-00-00/"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_content/Synchronizing_Content_Between_Servers_content-management |
Chapter 5. Enabling Windows container workloads | Before adding Windows workloads to your cluster, you must install the Windows Machine Config Operator (WMCO), which is available in the OpenShift Container Platform OperatorHub. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. Note Dual NIC is not supported on WMCO-managed Windows instances. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have installed your cluster using installer-provisioned infrastructure, or using user-provisioned infrastructure with the platform: none field set in your install-config.yaml file. You have configured hybrid networking with OVN-Kubernetes for your cluster. For more information, see Configuring hybrid networking . You are running an OpenShift Container Platform cluster version 4.6.8 or later. Note Windows instances deployed by the WMCO are configured with the containerd container runtime. Because WMCO installs and manages the runtime, it is recommended that you do not manually install containerd on nodes. Additional resources For the comprehensive prerequisites for the Windows Machine Config Operator, see Windows Machine Config Operator prerequisites . 5.1. Installing the Windows Machine Config Operator You can install the Windows Machine Config Operator using either the web console or OpenShift CLI ( oc ). Note Due to a limitation within the Windows operating system, clusterNetwork CIDR addresses of class E, such as 240.0.0.0 , are not compatible with Windows nodes. 5.1.1. Installing the Windows Machine Config Operator using the web console You can use the OpenShift Container Platform web console to install the Windows Machine Config Operator (WMCO). Note Dual NIC is not supported on WMCO-managed Windows instances. Procedure From the Administrator perspective in the OpenShift Container Platform web console, navigate to the Operators OperatorHub page. Use the Filter by keyword box to search for Windows Machine Config Operator in the catalog. Click the Windows Machine Config Operator tile. Review the information about the Operator and click Install . On the Install Operator page: Select the stable channel as the Update Channel . The stable channel enables the latest stable release of the WMCO to be installed. The Installation Mode is preconfigured because the WMCO must be available in a single namespace only. Choose the Installed Namespace for the WMCO. The default Operator recommended namespace is openshift-windows-machine-config-operator . Click the Enable Operator recommended cluster monitoring on the Namespace checkbox to enable cluster monitoring for the WMCO. Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . The WMCO is now listed on the Installed Operators page. Note The WMCO is installed automatically into the namespace you defined, like openshift-windows-machine-config-operator . Verify that the Status shows Succeeded to confirm successful installation of the WMCO. 5.1.2. Installing the Windows Machine Config Operator using the CLI You can use the OpenShift CLI ( oc ) to install the Windows Machine Config Operator (WMCO). 
Note Dual NIC is not supported on WMCO-managed Windows instances. Procedure Create a namespace for the WMCO. Create a Namespace object YAML file for the WMCO. For example, wmco-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: "true" 2 1 It is recommended to deploy the WMCO in the openshift-windows-machine-config-operator namespace. 2 This label is required for enabling cluster monitoring for the WMCO. Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f wmco-namespace.yaml Create the Operator group for the WMCO. Create an OperatorGroup object YAML file. For example, wmco-og.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator Create the Operator group: USD oc create -f <file-name>.yaml For example: USD oc create -f wmco-og.yaml Subscribe the namespace to the WMCO. Create a Subscription object YAML file. For example, wmco-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: "stable" 1 installPlanApproval: "Automatic" 2 name: "windows-machine-config-operator" source: "redhat-operators" 3 sourceNamespace: "openshift-marketplace" 4 1 Specify stable as the channel. 2 Set an approval strategy. You can set Automatic or Manual . 3 Specify the redhat-operators catalog source, which contains the windows-machine-config-operator package manifests. If your OpenShift Container Platform is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator LifeCycle Manager (OLM). 4 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f wmco-sub.yaml The WMCO is now installed to the openshift-windows-machine-config-operator . Verify the WMCO installation: USD oc get csv -n openshift-windows-machine-config-operator Example output NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded 5.2. Configuring a secret for the Windows Machine Config Operator To run the Windows Machine Config Operator (WMCO), you must create a secret in the WMCO namespace containing a private key. This is required to allow the WMCO to communicate with the Windows virtual machine (VM). Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You created a PEM-encoded file containing an RSA key. Procedure Define the secret required to access the Windows VMs: USD oc create secret generic cloud-private-key --from-file=private-key.pem=USD{HOME}/.ssh/<key> \ -n openshift-windows-machine-config-operator 1 1 You must create the private key in the WMCO namespace, like openshift-windows-machine-config-operator . It is recommended to use a different private key than the one used when installing the cluster. 5.3. Using Windows containers in a proxy-enabled cluster The Windows Machine Config Operator (WMCO) can consume and use a cluster-wide egress proxy configuration when making external requests outside the cluster's internal network. 
This allows you to add Windows nodes and run workloads in a proxy-enabled cluster, allowing your Windows nodes to pull images from registries that are secured behind your proxy server or to make requests to off-cluster services and services that use a custom public key infrastructure. Note The cluster-wide proxy affects system components only, not user workloads. In proxy-enabled clusters, the WMCO is aware of the NO_PROXY , HTTP_PROXY , and HTTPS_PROXY values that are set for the cluster. The WMCO periodically checks whether the proxy environment variables have changed. If there is a discrepancy, the WMCO reconciles and updates the proxy environment variables on the Windows instances. Windows workloads created on Windows nodes in proxy-enabled clusters do not inherit proxy settings from the node by default, the same as with Linux nodes. Also, by default PowerShell sessions do not inherit proxy settings on Windows nodes in proxy-enabled clusters. Additional resources Configuring the cluster-wide proxy . 5.4. Rebooting a node gracefully The Windows Machine Config Operator (WMCO) minimizes node reboots whenever possible. However, certain operations and updates require a reboot to ensure that changes are applied correctly and securely. To safely reboot your Windows nodes, use the graceful reboot process. For information on gracefully rebooting a standard OpenShift Container Platform node, see "Rebooting a node gracefully" in the Nodes documentation. Before rebooting a node, it is recommended to backup etcd data to avoid any data loss on the node. Note For single-node OpenShift clusters that require users to perform the oc login command rather than having the certificates in kubeconfig file to manage the cluster, the oc adm commands might not be available after cordoning and draining the node. This is because the openshift-oauth-apiserver pod is not running due to the cordon. You can use SSH to access the nodes as indicated in the following procedure. In a single-node OpenShift cluster, pods cannot be rescheduled when cordoning and draining. However, doing so gives the pods, especially your workload pods, time to properly stop and release associated resources. Procedure To perform a graceful restart of a node: Mark the node as unschedulable: USD oc adm cordon <node1> Drain the node to remove all the running pods: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force You might receive errors that pods associated with custom pod disruption budgets (PDB) cannot be evicted. Example error error when evicting pods/"rails-postgresql-example-1-72v2w" -n "rails" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. In this case, run the drain command again, adding the disable-eviction flag, which bypasses the PDB checks: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction SSH into the Windows node and enter PowerShell by running the following command: C:\> powershell Restart the node by running the following command: C:\> Restart-Computer -Force Windows nodes on Amazon Web Services (AWS) do not return to READY state after a graceful reboot due to an inconsistency with the EC2 instance metadata routes and the Host Network Service (HNS) networks. 
After the reboot, SSH into any Windows node on AWS and add the route by running the following command in a shell prompt: C:\> route add 169.254.169.254 mask 255.255.255.0 <gateway_ip> where: 169.254.169.254 Specifies the address of the EC2 instance metadata endpoint. 255.255.255.255 Specifies the network mask of the EC2 instance metadata endpoint. <gateway_ip> Specifies the corresponding IP address of the gateway in the Windows instance, which you can find by running the following command: C:\> ipconfig | findstr /C:"Default Gateway" After the reboot is complete, mark the node as schedulable by running the following command: USD oc adm uncordon <node1> Verify that the node is ready: USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8 Additional resources Rebooting a OpenShift Container Platform node gracefully Backing up etcd data 5.5. Additional resources Generating a key pair for cluster node SSH access Adding Operators to a cluster | [
"apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc create -f <file-name>.yaml",
"oc create -f wmco-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator",
"oc create -f <file-name>.yaml",
"oc create -f wmco-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: \"stable\" 1 installPlanApproval: \"Automatic\" 2 name: \"windows-machine-config-operator\" source: \"redhat-operators\" 3 sourceNamespace: \"openshift-marketplace\" 4",
"oc create -f <file-name>.yaml",
"oc create -f wmco-sub.yaml",
"oc get csv -n openshift-windows-machine-config-operator",
"NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded",
"oc create secret generic cloud-private-key --from-file=private-key.pem=USD{HOME}/.ssh/<key> -n openshift-windows-machine-config-operator 1",
"oc adm cordon <node1>",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force",
"error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.",
"oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction",
"C:\\> powershell",
"C:\\> Restart-Computer -Force",
"C:\\> route add 169.254.169.254 mask 255.255.255.0 <gateway_ip>",
"C:\\> ipconfig | findstr /C:\"Default Gateway\"",
"oc adm uncordon <node1>",
"oc get node <node1>",
"NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/windows_container_support_for_openshift/enabling-windows-container-workloads |
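The graceful reboot procedure in section 5.4 can be wrapped in a small script so that the cordon, drain, and uncordon steps are always applied in the same order. The following is a minimal sketch under stated assumptions: oc is already logged in with sufficient privileges, the reboot itself is still performed manually over SSH as described above, and the script name and the 15-minute timeout are illustrative rather than prescribed.

#!/usr/bin/env bash
# Sketch: wrap the documented graceful-reboot flow for a single Windows node.
set -euo pipefail

NODE="${1:?usage: graceful-reboot.sh <node-name>}"

# Mark the node as unschedulable and evict its pods.
oc adm cordon "${NODE}"
if ! oc adm drain "${NODE}" --ignore-daemonsets --delete-emptydir-data --force; then
  # Fall back to bypassing PodDisruptionBudget checks, as described in the procedure.
  oc adm drain "${NODE}" --ignore-daemonsets --delete-emptydir-data --force --disable-eviction
fi

echo "Drain complete. Reboot ${NODE} over SSH (powershell, then Restart-Computer -Force) and press Enter."
read -r

# Wait for the kubelet to report Ready again before allowing new workloads.
oc wait --for=condition=Ready "node/${NODE}" --timeout=15m
oc adm uncordon "${NODE}"
oc get node "${NODE}"

On AWS, remember to re-add the 169.254.169.254 metadata route described above before the node is expected to report Ready.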
Chapter 3. Using shared system certificates | Chapter 3. Using shared system certificates The shared system certificates storage enables NSS, GnuTLS, OpenSSL, and Java to share a default source for retrieving system certificate anchors and block-list information. By default, the truststore contains the Mozilla CA list, including positive and negative trust. The system allows updating the core Mozilla CA list or choosing another certificate list. 3.1. The system-wide truststore In RHEL, the consolidated system-wide truststore is located in the /etc/pki/ca-trust/ and /usr/share/pki/ca-trust-source/ directories. The trust settings in /usr/share/pki/ca-trust-source/ are processed with lower priority than settings in /etc/pki/ca-trust/ . Certificate files are treated differently depending on the subdirectory they are installed to. For example, trust anchors belong to the /usr/share/pki/ca-trust-source/anchors/ or /etc/pki/ca-trust/source/anchors/ directory. Note In a hierarchical cryptographic system, a trust anchor is an authoritative entity that other parties consider trustworthy. In the X.509 architecture, a root certificate is a trust anchor from which a chain of trust is derived. To enable chain validation, the trusting party must have access to the trust anchor first. Additional resources update-ca-trust(8) and trust(1) man pages on your system 3.2. Adding new certificates To make applications on your system acknowledge a new source of trust, add the corresponding certificate to the system-wide store, and use the update-ca-trust command. Prerequisites The ca-certificates package is present on the system. Procedure To add a certificate in the simple PEM or DER file formats to the list of CAs trusted on the system, copy the certificate file to the /usr/share/pki/ca-trust-source/anchors/ or /etc/pki/ca-trust/source/anchors/ directory, for example: To update the system-wide trust store configuration, use the update-ca-trust command: Note Even though the Firefox browser can use an added certificate without a prior execution of update-ca-trust , enter the update-ca-trust command after every CA change. Also note that browsers, such as Firefox, Chromium, and GNOME Web, cache files, and you might have to clear your browser's cache or restart your browser to load the current system certificate configuration. Additional resources update-ca-trust(8) and trust(1) man pages on your system 3.3. Managing trusted system certificates The trust command provides a convenient way for managing certificates in the shared system-wide truststore. To list, extract, add, remove, or change trust anchors, use the trust command. To see the built-in help for this command, enter it without any arguments or with the --help directive: USD trust usage: trust command <args>... Common trust commands are: list List trust or certificates extract Extract certificates and trust extract-compat Extract trust compatibility bundles anchor Add, remove, change trust anchors dump Dump trust objects in internal format See 'trust <command> --help' for more information To list all system trust anchors and certificates, use the trust list command: USD trust list pkcs11:id=%d2%87%b4%e3%df%37%27%93%55%f6%56%ea%81%e5%36%cc%8c%1e%3f%bd;type=cert type: certificate label: ACCVRAIZ1 trust: anchor category: authority pkcs11:id=%a6%b3%e1%2b%2b%49%b6%d7%73%a1%aa%94%f5%01%e7%73%65%4c%ac%50;type=cert type: certificate label: ACEDICOM Root trust: anchor category: authority ...
To store a trust anchor into the system-wide truststore, use the trust anchor sub-command and specify a path to a certificate. Replace <path.to/certificate.crt> with the path to your certificate and its file name: To remove a certificate, use either a path to a certificate or an ID of a certificate: # trust anchor --remove <path.to/certificate.crt> # trust anchor --remove "pkcs11:id= <%AA%BB%CC%DD%EE> ;type=cert" Additional resources All sub-commands of the trust command offer detailed built-in help, for example: USD trust list --help usage: trust list --filter=<what> --filter=<what> filter of what to export ca-anchors certificate anchors ... --purpose=<usage> limit to certificates usable for the purpose server-auth for authenticating servers ... A combined sketch of adding and verifying a new trust anchor appears after the command listing for this chapter. Additional resources update-ca-trust(8) and trust(1) man pages on your system | [
"cp ~/certificate-trust-examples/Cert-trust-test-ca.pem /usr/share/pki/ca-trust-source/anchors/",
"update-ca-trust extract",
"trust usage: trust command <args> Common trust commands are: list List trust or certificates extract Extract certificates and trust extract-compat Extract trust compatibility bundles anchor Add, remove, change trust anchors dump Dump trust objects in internal format See 'trust <command> --help' for more information",
"trust list pkcs11:id=%d2%87%b4%e3%df%37%27%93%55%f6%56%ea%81%e5%36%cc%8c%1e%3f%bd;type=cert type: certificate label: ACCVRAIZ1 trust: anchor category: authority pkcs11:id=%a6%b3%e1%2b%2b%49%b6%d7%73%a1%aa%94%f5%01%e7%73%65%4c%ac%50;type=cert type: certificate label: ACEDICOM Root trust: anchor category: authority",
"trust anchor <path.to/certificate.crt>",
"trust anchor --remove <path.to/certificate.crt> trust anchor --remove \"pkcs11:id= <%AA%BB%CC%DD%EE> ;type=cert\"",
"trust list --help usage: trust list --filter=<what> --filter=<what> filter of what to export ca-anchors certificate anchors --purpose=<usage> limit to certificates usable for the purpose server-auth for authenticating servers"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/securing_networks/using-shared-system-certificates_securing-networks |
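The steps in sections 3.2 and 3.3 can be combined into a short installation-and-verification routine. The following is a minimal sketch under stated assumptions: the file name example-ca.crt and the label Example CA are placeholders for your own certificate, and root privileges are assumed for writing to the anchors directory and regenerating the truststore.

#!/usr/bin/env bash
# Sketch: install a private CA certificate system-wide and confirm that the
# shared truststore picked it up. File name and label are placeholders.
set -euo pipefail

CERT=example-ca.crt    # hypothetical PEM- or DER-encoded CA certificate

# 1. Copy the certificate into a trust-anchor source directory.
cp "${CERT}" /etc/pki/ca-trust/source/anchors/

# 2. Regenerate the consolidated truststore used by NSS, GnuTLS, OpenSSL, and Java.
update-ca-trust extract

# 3. Confirm that the new anchor is listed.
trust list --filter=ca-anchors | grep -i "Example CA" \
  && echo "anchor installed" \
  || echo "anchor not found"

Because browsers cache the previous configuration, clear the browser cache or restart the browser after the update, as noted in section 3.2.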
function::set_kernel_string_n | function::set_kernel_string_n Name function::set_kernel_string_n - Writes a string of given length to kernel memory Synopsis Arguments addr The kernel address to write the string to n The maximum length of the string val The string which is to be written Description Writes the given string up to a maximum given length to a given kernel memory address. Reports an error on string copy fault. Requires the use of guru mode (-g). | [
"set_kernel_string_n(addr:long,n:long,val:string)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-set-kernel-string-n |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.432/making-open-source-more-inclusive |
Chapter 93. Credit card fraud dispute use case | Chapter 93. Credit card fraud dispute use case The financial industry uses pragmatic AI for decisioning in several areas. One area is credit card charge disputes. When a customer identifies an incorrect or unrecognized charge on a credit card bill, the customer can dispute the charge. Human intervention in credit card fraud detection is required in some cases, but the majority of reported credit card fraud can be completely or partially resolved with pragmatic AI. Machine learning frameworks such as TensorFlow™ and R™ produce predictive models. You can save these predictive models in an open standard such as PMML so that you can use the model with Red Hat Decision Manager or other products that support the PMML standard. 93.1. Using a PMML model with a DMN model to resolve credit card transaction disputes This example shows you how to use Red Hat Decision Manager to create a DMN model that uses a PMML model to resolve credit card transaction disputes. When a customer disputes a credit card transaction, the system decides whether or not to process the transaction automatically. Prerequisites Red Hat Decision Manager is available and the following JAR file has been added to the ~/kie-server.war/WEB-INF/lib and ~/business-central.war/WEB-INF/lib directories in your Red Hat Decision Manager installation: kie-dmn-jpmml-7.67.0.Final-redhat-00024.jar This file is available in the Red Hat Decision Manager 7.13 Maven Repository distribution available from the Software Downloads page in the Red Hat Customer Portal (login required). The group ID, artifact ID, and version (GAV) identifier of this file is org.kie:kie-dmn-jpmml:7.67.0.Final-redhat-00024 . For more information, see the "Including PMML models within a DMN file in Business Central" section of Designing a decision service using DMN models . JPMML Evaluator 1.5.1 JAR file JPMML Evaluator Extensions 1.5.1 JAR file These files are required to enable JPMML evaluation in KIE Server and Business Central. Important Red Hat supports integration with the Java Evaluator API for PMML (JPMML) for PMML execution in Red Hat Decision Manager. However, Red Hat does not support the JPMML libraries directly. If you include JPMML libraries in your Red Hat Decision Manager distribution, see the Openscoring.io licensing terms for JPMML. Procedure Create the dtree_risk_predictor.pmml file with the contents of the XML example in Section 93.2, "Credit card transaction dispute exercise PMML file" . In Business Central, create the Credit Card Dispute project: Navigate to Menu Design Projects . Click Add Project . In the Name box, enter Credit Card Dispute and click Add . In the Assets window of the Credit Card Dispute project, import the dtree_risk_predictor.pmml file into the com package: Click Import Asset . In the Create new Import Asset dialog, enter dtree_risk_predictor in the Name box, select com from the Package menu, select the dtree_risk_predictor.pmml file, and click OK . The content of the dtree_risk_predictor.pmml file appears in the Overview window. Create the Dispute Transaction Check DMN model in the com package: To return to the project window, click Credit Card Dispute in the breadcrumb trail. Click Add Asset . Click DMN in the asset library. In the Create new DMN dialog, enter Dispute Transaction Check in the Name box, select com from the Package menu, and click OK . The DMN editor opens with the Dispute Transaction Check DMN model. Create the tTransaction custom data type: Click the Data Types tab.
Click Add a custom Data Type . In the Name box, enter tTransaction . Select Structure from the Type menu. To create the data type, click the check mark. The tTransaction custom data type appears with one variable row. In the Name field of the variable row, enter transaction_amount , select Number from the Type menu, and then click the check mark. To add a new variable row, click the plus symbol on the transaction_amount row. A new row appears. In the Name field, enter cardholder_identifier , select Number from the Type menu, and then click the check mark. Add the Risk Predictor dtree_risk_predictor.pmml model: In the Included Models window of the DMN editor, click Include Model . In the Include Model dialog, select dtree_risk_predictor.pmml from the Models menu. Enter Risk Predictor in the Provide a unique name box and click OK . Create the Risk Predictor Business Knowledge Model (BKM) node with the Risk Predictor and DecisionTreeClassifier model: In the Model window of the DMN editor, drag a BKM node to the DMN editor palette. Rename the node Risk Predictor . Click the edit icon located below the trash can icon on the left side of the node. Click F in the Risk Predictor box and select PMML from the Select Function Kind menu. The F changes to P . Double-click the First select PMML document box and select Risk Predictor . Double-click the Second select PMML model box and select DecisionTreeClassifier . To return to the DMN editor palette, click Back to Dispute Transaction Check . Create the Transaction input data node with the data type tTransaction : In the Model window of the DMN editor, drag an input data node to the DMN editor palette. Rename the node Transaction . Select the node, then click the properties pencil icon in the upper-right corner of the window. In the Properties panel, select Information Item Data type tTransaction , then close the panel. Create the Transaction Dispute Risk decision node and add the Transaction node for data input and the Risk Predictor node for the function: In the Model window of the DMN editor, drag a decision data node to the DMN editor palette. Rename the node Transaction Dispute Risk . Select the Risk Predictor node and drag the arrow from the top right of the node to the Transaction Dispute Risk node. Select the Transaction node and drag the arrow from the bottom right of the node to the Transaction Dispute Risk node. In the Transaction Dispute Risk node, create the Risk Predictor invocation function: Select the Transaction Dispute Risk node and click the edit icon on the left side of the node. Click Select expression and select Invocation from the menu. Enter Risk Predictor in the Enter function box. Click P1 . In the Edit Parameter dialog, enter amount in the Name box, select number from the Data Type menu, and press the Enter key. Click Select expression and select Literal expression from the menu. Enter Transaction.transaction_amount in the box next to amount . Right-click on 1 and select Insert below . The Edit Parameter dialog opens. Enter holder_index in the Name box, select number from the Data Type menu, and press the Enter key. Click Select expression on row 2 and select Literal expression from the menu. Enter Transaction.cardholder_identifier in the box next to holder_index . Create the Risk Threshold input data node with the data type number : In the Model window of the DMN editor, drag an input data node to the DMN editor palette. Rename the node Risk Threshold .
Select the node, then click the properties pencil icon in the upper-right corner of the window. In the Properties panel, select Information Item Data type number , then close the panel. Create the Can be automatically processed? decision node that takes as inputs the Transaction Dispute Risk and the Risk Threshold nodes: Drag a decision node to the DMN editor palette and rename it Can be automatically processed? . Select the node, then click the edit icon on the upper-left side of the node. Click Select expression and then select Literal expression from the menu. Enter Transaction Dispute Risk.predicted_dispute_risk < Risk Threshold in the box. Select the Transaction Dispute Risk node and drag the arrow in the top left of the node to the Can be automatically processed? node. Select the Risk Threshold node and drag the arrow from the bottom left of the node to the Can be automatically processed? node. Save the model and build the project: In the DMN editor, click Save . If necessary, correct any errors that appear. To return to the project window, click Credit Card Dispute in the breadcrumb trail. Click Build . The project should successfully build. Add and run a test scenario: Click Add Asset . Select Test Scenario . In the Create new Test Scenario dialog, enter the name Test Dispute Transaction Check , select com from the Package menu, and select DMN . Select Dispute Transaction Check.dmn from the Choose a DMN asset menu and click OK . The test template builds. Enter the following values and click Save : Note Do not add a value to the Transaction Dispute Risk column. This value is determined by the test scenario. Table 93.1. Test scenario parameters
Description | Risk Threshold | cardholder_identifier | transaction_amount | Can be automatically processed?
Risk threshold 5, automatically processed | 5 | 1234 | 1000 | true
Risk threshold 4, amount = 1000, not processed | 4 | 1234 | 1000 | false
Risk threshold 4, amount = 180, automatically processed | 4 | 1234 | 180 | true
Risk threshold 1, amount = 1, not processed | 1 | 1234 | 1 | false
To run the test, click the Play button to the right of Validate . The results appear in the Test Report panel on the right of the screen. 93.2. Credit card transaction dispute exercise PMML file Use the following XML content to create the dtree_risk_predictor.pmml file in the Section 93.1, "Using a PMML model with a DMN model to resolve credit card transaction disputes" exercise.
<?xml version="1.0" encoding="UTF-8"?> <PMML xmlns="http://www.dmg.org/PMML-4_2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="4.2" xsi:schemaLocation="http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd"> <Header copyright="Copyright (c) 2018 Software AG" description="Default Description"> <Application name="Nyoka" version="4.3.0" /> <Timestamp>2020-10-09 14:27:26.622723</Timestamp> </Header> <DataDictionary numberOfFields="3"> <DataField name="amount" optype="continuous" dataType="double" /> <DataField name="holder_index" optype="continuous" dataType="double" /> <DataField name="dispute_risk" optype="categorical" dataType="integer"> <Value value="1" /> <Value value="2" /> <Value value="3" /> <Value value="4" /> <Value value="5" /> </DataField> </DataDictionary> <TreeModel modelName="DecisionTreeClassifier" functionName="classification" missingValuePenalty="1.0"> <MiningSchema> <MiningField name="amount" usageType="active" optype="continuous" /> <MiningField name="holder_index" usageType="active" optype="continuous" /> <MiningField name="dispute_risk" usageType="target" optype="categorical" /> </MiningSchema> <Output> <OutputField name="probability_1" optype="continuous" dataType="double" feature="probability" value="1" /> <OutputField name="probability_2" optype="continuous" dataType="double" feature="probability" value="2" /> <OutputField name="probability_3" optype="continuous" dataType="double" feature="probability" value="3" /> <OutputField name="probability_4" optype="continuous" dataType="double" feature="probability" value="4" /> <OutputField name="probability_5" optype="continuous" dataType="double" feature="probability" value="5" /> <OutputField name="predicted_dispute_risk" optype="categorical" dataType="integer" feature="predictedValue" /> </Output> <Node id="0" recordCount="600.0"> <True /> <Node id="1" recordCount="200.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="99.94000244140625" /> <Node id="2" score="2" recordCount="55.0"> <SimplePredicate field="holder_index" operator="lessOrEqual" value="0.5" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="55.0" confidence="1.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="3" score="1" recordCount="145.0"> <SimplePredicate field="holder_index" operator="greaterThan" value="0.5" /> <ScoreDistribution value="1" recordCount="145.0" confidence="1.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> </Node> <Node id="4" recordCount="400.0"> <SimplePredicate field="amount" operator="greaterThan" value="99.94000244140625" /> <Node id="5" recordCount="105.0"> <SimplePredicate field="holder_index" operator="lessOrEqual" value="0.5" /> <Node id="6" score="3" recordCount="54.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="150.4550018310547" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="54.0" confidence="1.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> 
<ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="7" recordCount="51.0"> <SimplePredicate field="amount" operator="greaterThan" value="150.4550018310547" /> <Node id="8" recordCount="40.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="200.00499725341797" /> <Node id="9" recordCount="36.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="195.4949951171875" /> <Node id="10" recordCount="2.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="152.2050018310547" /> <Node id="11" score="4" recordCount="1.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="151.31500244140625" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="1.0" confidence="1.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="12" score="3" recordCount="1.0"> <SimplePredicate field="amount" operator="greaterThan" value="151.31500244140625" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="1.0" confidence="1.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> </Node> <Node id="13" recordCount="34.0"> <SimplePredicate field="amount" operator="greaterThan" value="152.2050018310547" /> <Node id="14" recordCount="20.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="176.5050048828125" /> <Node id="15" recordCount="19.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="176.06500244140625" /> <Node id="16" score="4" recordCount="9.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="166.6449966430664" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="9.0" confidence="1.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="17" recordCount="10.0"> <SimplePredicate field="amount" operator="greaterThan" value="166.6449966430664" /> <Node id="18" score="3" recordCount="1.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="167.97999572753906" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="1.0" confidence="1.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="19" score="4" recordCount="9.0"> <SimplePredicate field="amount" operator="greaterThan" value="167.97999572753906" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="9.0" confidence="1.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> </Node> </Node> <Node id="20" score="3" recordCount="1.0"> <SimplePredicate field="amount" operator="greaterThan" value="176.06500244140625" /> <ScoreDistribution value="1" 
recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="1.0" confidence="1.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> </Node> <Node id="21" score="4" recordCount="14.0"> <SimplePredicate field="amount" operator="greaterThan" value="176.5050048828125" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="14.0" confidence="1.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> </Node> </Node> <Node id="22" recordCount="4.0"> <SimplePredicate field="amount" operator="greaterThan" value="195.4949951171875" /> <Node id="23" score="3" recordCount="1.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="195.76499938964844" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="1.0" confidence="1.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="24" recordCount="3.0"> <SimplePredicate field="amount" operator="greaterThan" value="195.76499938964844" /> <Node id="25" score="4" recordCount="1.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="196.74500274658203" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="1.0" confidence="1.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="26" recordCount="2.0"> <SimplePredicate field="amount" operator="greaterThan" value="196.74500274658203" /> <Node id="27" score="3" recordCount="1.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="197.5800018310547" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="1.0" confidence="1.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="28" score="4" recordCount="1.0"> <SimplePredicate field="amount" operator="greaterThan" value="197.5800018310547" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="1.0" confidence="1.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> </Node> </Node> </Node> </Node> <Node id="29" score="5" recordCount="11.0"> <SimplePredicate field="amount" operator="greaterThan" value="200.00499725341797" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="11.0" confidence="1.0" /> </Node> 
</Node> </Node> <Node id="30" recordCount="295.0"> <SimplePredicate field="holder_index" operator="greaterThan" value="0.5" /> <Node id="31" score="2" recordCount="170.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="150.93499755859375" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="170.0" confidence="1.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="32" recordCount="125.0"> <SimplePredicate field="amount" operator="greaterThan" value="150.93499755859375" /> <Node id="33" recordCount="80.0"> <SimplePredicate field="holder_index" operator="lessOrEqual" value="2.5" /> <Node id="34" recordCount="66.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="199.13500213623047" /> <Node id="35" score="3" recordCount="10.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="155.56999969482422" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="10.0" confidence="1.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="36" recordCount="56.0"> <SimplePredicate field="amount" operator="greaterThan" value="155.56999969482422" /> <Node id="37" score="2" recordCount="1.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="155.9000015258789" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="1.0" confidence="1.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="38" recordCount="55.0"> <SimplePredicate field="amount" operator="greaterThan" value="155.9000015258789" /> <Node id="39" recordCount="31.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="176.3699951171875" /> <Node id="40" recordCount="30.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="175.72000122070312" /> <Node id="41" recordCount="19.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="168.06999969482422" /> <Node id="42" recordCount="6.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="158.125" /> <Node id="43" score="3" recordCount="5.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="157.85499572753906" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="5.0" confidence="1.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="44" score="2" recordCount="1.0"> <SimplePredicate field="amount" operator="greaterThan" value="157.85499572753906" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="1.0" confidence="1.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> </Node> <Node 
id="45" score="3" recordCount="13.0"> <SimplePredicate field="amount" operator="greaterThan" value="158.125" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="13.0" confidence="1.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> </Node> <Node id="46" recordCount="11.0"> <SimplePredicate field="amount" operator="greaterThan" value="168.06999969482422" /> <Node id="47" score="2" recordCount="1.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="168.69499969482422" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="1.0" confidence="1.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="48" recordCount="10.0"> <SimplePredicate field="amount" operator="greaterThan" value="168.69499969482422" /> <Node id="49" recordCount="4.0"> <SimplePredicate field="holder_index" operator="lessOrEqual" value="1.5" /> <Node id="50" score="2" recordCount="1.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="172.0250015258789" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="1.0" confidence="1.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="51" score="3" recordCount="3.0"> <SimplePredicate field="amount" operator="greaterThan" value="172.0250015258789" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="3.0" confidence="1.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> </Node> <Node id="52" score="3" recordCount="6.0"> <SimplePredicate field="holder_index" operator="greaterThan" value="1.5" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="6.0" confidence="1.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> </Node> </Node> </Node> <Node id="53" score="2" recordCount="1.0"> <SimplePredicate field="amount" operator="greaterThan" value="175.72000122070312" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="1.0" confidence="1.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> </Node> <Node id="54" recordCount="24.0"> <SimplePredicate field="amount" operator="greaterThan" value="176.3699951171875" /> <Node id="55" score="3" recordCount="16.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="192.0999984741211" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" 
confidence="0.0" /> <ScoreDistribution value="3" recordCount="16.0" confidence="1.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="56" recordCount="8.0"> <SimplePredicate field="amount" operator="greaterThan" value="192.0999984741211" /> <Node id="57" score="2" recordCount="1.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="192.75499725341797" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="1.0" confidence="1.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="58" score="3" recordCount="7.0"> <SimplePredicate field="amount" operator="greaterThan" value="192.75499725341797" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="7.0" confidence="1.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> </Node> </Node> </Node> </Node> </Node> <Node id="59" recordCount="14.0"> <SimplePredicate field="amount" operator="greaterThan" value="199.13500213623047" /> <Node id="60" score="5" recordCount="10.0"> <SimplePredicate field="holder_index" operator="lessOrEqual" value="1.5" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="10.0" confidence="1.0" /> </Node> <Node id="61" score="4" recordCount="4.0"> <SimplePredicate field="holder_index" operator="greaterThan" value="1.5" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="4.0" confidence="1.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> </Node> </Node> <Node id="62" recordCount="45.0"> <SimplePredicate field="holder_index" operator="greaterThan" value="2.5" /> <Node id="63" score="2" recordCount="37.0"> <SimplePredicate field="amount" operator="lessOrEqual" value="199.13999938964844" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="37.0" confidence="1.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> <Node id="64" score="4" recordCount="8.0"> <SimplePredicate field="amount" operator="greaterThan" value="199.13999938964844" /> <ScoreDistribution value="1" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="2" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="3" recordCount="0.0" confidence="0.0" /> <ScoreDistribution value="4" recordCount="8.0" confidence="1.0" /> <ScoreDistribution value="5" recordCount="0.0" confidence="0.0" /> </Node> </Node> </Node> </Node> </Node> </Node> </TreeModel> </PMML> | [
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <PMML xmlns=\"http://www.dmg.org/PMML-4_2\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" version=\"4.2\" xsi:schemaLocation=\"http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd\"> <Header copyright=\"Copyright (c) 2018 Software AG\" description=\"Default Description\"> <Application name=\"Nyoka\" version=\"4.3.0\" /> <Timestamp>2020-10-09 14:27:26.622723</Timestamp> </Header> <DataDictionary numberOfFields=\"3\"> <DataField name=\"amount\" optype=\"continuous\" dataType=\"double\" /> <DataField name=\"holder_index\" optype=\"continuous\" dataType=\"double\" /> <DataField name=\"dispute_risk\" optype=\"categorical\" dataType=\"integer\"> <Value value=\"1\" /> <Value value=\"2\" /> <Value value=\"3\" /> <Value value=\"4\" /> <Value value=\"5\" /> </DataField> </DataDictionary> <TreeModel modelName=\"DecisionTreeClassifier\" functionName=\"classification\" missingValuePenalty=\"1.0\"> <MiningSchema> <MiningField name=\"amount\" usageType=\"active\" optype=\"continuous\" /> <MiningField name=\"holder_index\" usageType=\"active\" optype=\"continuous\" /> <MiningField name=\"dispute_risk\" usageType=\"target\" optype=\"categorical\" /> </MiningSchema> <Output> <OutputField name=\"probability_1\" optype=\"continuous\" dataType=\"double\" feature=\"probability\" value=\"1\" /> <OutputField name=\"probability_2\" optype=\"continuous\" dataType=\"double\" feature=\"probability\" value=\"2\" /> <OutputField name=\"probability_3\" optype=\"continuous\" dataType=\"double\" feature=\"probability\" value=\"3\" /> <OutputField name=\"probability_4\" optype=\"continuous\" dataType=\"double\" feature=\"probability\" value=\"4\" /> <OutputField name=\"probability_5\" optype=\"continuous\" dataType=\"double\" feature=\"probability\" value=\"5\" /> <OutputField name=\"predicted_dispute_risk\" optype=\"categorical\" dataType=\"integer\" feature=\"predictedValue\" /> </Output> <Node id=\"0\" recordCount=\"600.0\"> <True /> <Node id=\"1\" recordCount=\"200.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"99.94000244140625\" /> <Node id=\"2\" score=\"2\" recordCount=\"55.0\"> <SimplePredicate field=\"holder_index\" operator=\"lessOrEqual\" value=\"0.5\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"55.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"3\" score=\"1\" recordCount=\"145.0\"> <SimplePredicate field=\"holder_index\" operator=\"greaterThan\" value=\"0.5\" /> <ScoreDistribution value=\"1\" recordCount=\"145.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> </Node> <Node id=\"4\" recordCount=\"400.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"99.94000244140625\" /> <Node id=\"5\" recordCount=\"105.0\"> <SimplePredicate field=\"holder_index\" operator=\"lessOrEqual\" value=\"0.5\" /> <Node id=\"6\" score=\"3\" recordCount=\"54.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"150.4550018310547\" /> 
<ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"54.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"7\" recordCount=\"51.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"150.4550018310547\" /> <Node id=\"8\" recordCount=\"40.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"200.00499725341797\" /> <Node id=\"9\" recordCount=\"36.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"195.4949951171875\" /> <Node id=\"10\" recordCount=\"2.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"152.2050018310547\" /> <Node id=\"11\" score=\"4\" recordCount=\"1.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"151.31500244140625\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"1.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"12\" score=\"3\" recordCount=\"1.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"151.31500244140625\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"1.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> </Node> <Node id=\"13\" recordCount=\"34.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"152.2050018310547\" /> <Node id=\"14\" recordCount=\"20.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"176.5050048828125\" /> <Node id=\"15\" recordCount=\"19.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"176.06500244140625\" /> <Node id=\"16\" score=\"4\" recordCount=\"9.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"166.6449966430664\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"9.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"17\" recordCount=\"10.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"166.6449966430664\" /> <Node id=\"18\" score=\"3\" recordCount=\"1.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"167.97999572753906\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"1.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"19\" score=\"4\" recordCount=\"9.0\"> <SimplePredicate field=\"amount\" 
operator=\"greaterThan\" value=\"167.97999572753906\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"9.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> </Node> </Node> <Node id=\"20\" score=\"3\" recordCount=\"1.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"176.06500244140625\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"1.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> </Node> <Node id=\"21\" score=\"4\" recordCount=\"14.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"176.5050048828125\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"14.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> </Node> </Node> <Node id=\"22\" recordCount=\"4.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"195.4949951171875\" /> <Node id=\"23\" score=\"3\" recordCount=\"1.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"195.76499938964844\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"1.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"24\" recordCount=\"3.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"195.76499938964844\" /> <Node id=\"25\" score=\"4\" recordCount=\"1.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"196.74500274658203\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"1.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"26\" recordCount=\"2.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"196.74500274658203\" /> <Node id=\"27\" score=\"3\" recordCount=\"1.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"197.5800018310547\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"1.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"28\" score=\"4\" recordCount=\"1.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"197.5800018310547\" 
/> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"1.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> </Node> </Node> </Node> </Node> <Node id=\"29\" score=\"5\" recordCount=\"11.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"200.00499725341797\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"11.0\" confidence=\"1.0\" /> </Node> </Node> </Node> <Node id=\"30\" recordCount=\"295.0\"> <SimplePredicate field=\"holder_index\" operator=\"greaterThan\" value=\"0.5\" /> <Node id=\"31\" score=\"2\" recordCount=\"170.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"150.93499755859375\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"170.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"32\" recordCount=\"125.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"150.93499755859375\" /> <Node id=\"33\" recordCount=\"80.0\"> <SimplePredicate field=\"holder_index\" operator=\"lessOrEqual\" value=\"2.5\" /> <Node id=\"34\" recordCount=\"66.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"199.13500213623047\" /> <Node id=\"35\" score=\"3\" recordCount=\"10.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"155.56999969482422\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"10.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"36\" recordCount=\"56.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"155.56999969482422\" /> <Node id=\"37\" score=\"2\" recordCount=\"1.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"155.9000015258789\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"1.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"38\" recordCount=\"55.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"155.9000015258789\" /> <Node id=\"39\" recordCount=\"31.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"176.3699951171875\" /> <Node id=\"40\" recordCount=\"30.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"175.72000122070312\" /> <Node id=\"41\" recordCount=\"19.0\"> <SimplePredicate 
field=\"amount\" operator=\"lessOrEqual\" value=\"168.06999969482422\" /> <Node id=\"42\" recordCount=\"6.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"158.125\" /> <Node id=\"43\" score=\"3\" recordCount=\"5.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"157.85499572753906\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"5.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"44\" score=\"2\" recordCount=\"1.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"157.85499572753906\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"1.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> </Node> <Node id=\"45\" score=\"3\" recordCount=\"13.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"158.125\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"13.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> </Node> <Node id=\"46\" recordCount=\"11.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"168.06999969482422\" /> <Node id=\"47\" score=\"2\" recordCount=\"1.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"168.69499969482422\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"1.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"48\" recordCount=\"10.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"168.69499969482422\" /> <Node id=\"49\" recordCount=\"4.0\"> <SimplePredicate field=\"holder_index\" operator=\"lessOrEqual\" value=\"1.5\" /> <Node id=\"50\" score=\"2\" recordCount=\"1.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"172.0250015258789\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"1.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"51\" score=\"3\" recordCount=\"3.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"172.0250015258789\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"3.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" 
confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> </Node> <Node id=\"52\" score=\"3\" recordCount=\"6.0\"> <SimplePredicate field=\"holder_index\" operator=\"greaterThan\" value=\"1.5\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"6.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> </Node> </Node> </Node> <Node id=\"53\" score=\"2\" recordCount=\"1.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"175.72000122070312\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"1.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> </Node> <Node id=\"54\" recordCount=\"24.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"176.3699951171875\" /> <Node id=\"55\" score=\"3\" recordCount=\"16.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"192.0999984741211\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"16.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"56\" recordCount=\"8.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"192.0999984741211\" /> <Node id=\"57\" score=\"2\" recordCount=\"1.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"192.75499725341797\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"1.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"58\" score=\"3\" recordCount=\"7.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"192.75499725341797\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"7.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> </Node> </Node> </Node> </Node> </Node> <Node id=\"59\" recordCount=\"14.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"199.13500213623047\" /> <Node id=\"60\" score=\"5\" recordCount=\"10.0\"> <SimplePredicate field=\"holder_index\" operator=\"lessOrEqual\" value=\"1.5\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> 
<ScoreDistribution value=\"5\" recordCount=\"10.0\" confidence=\"1.0\" /> </Node> <Node id=\"61\" score=\"4\" recordCount=\"4.0\"> <SimplePredicate field=\"holder_index\" operator=\"greaterThan\" value=\"1.5\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"4.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> </Node> </Node> <Node id=\"62\" recordCount=\"45.0\"> <SimplePredicate field=\"holder_index\" operator=\"greaterThan\" value=\"2.5\" /> <Node id=\"63\" score=\"2\" recordCount=\"37.0\"> <SimplePredicate field=\"amount\" operator=\"lessOrEqual\" value=\"199.13999938964844\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"37.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> <Node id=\"64\" score=\"4\" recordCount=\"8.0\"> <SimplePredicate field=\"amount\" operator=\"greaterThan\" value=\"199.13999938964844\" /> <ScoreDistribution value=\"1\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"2\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"3\" recordCount=\"0.0\" confidence=\"0.0\" /> <ScoreDistribution value=\"4\" recordCount=\"8.0\" confidence=\"1.0\" /> <ScoreDistribution value=\"5\" recordCount=\"0.0\" confidence=\"0.0\" /> </Node> </Node> </Node> </Node> </Node> </Node> </TreeModel> </PMML>"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/ai-credit-card-con_artificial-intelligence |
Chapter 26. Configure JGroups | Chapter 26. Configure JGroups JGroups is the underlying group communication library used to connect Red Hat JBoss Data Grid instances. For a full list of JGroups protocols supported in JBoss Data Grid, see Section A.1, "Supported JGroups Protocols" . 26.1. Configure Red Hat JBoss Data Grid Interface Binding (Remote Client-Server Mode) 26.1.1. Interfaces Red Hat JBoss Data Grid allows users to specify an interface type rather than a specific (unknown) IP address. link-local : Uses a 169.x.x.x or 254.x.x.x address. This suits the traffic within one box. site-local : Uses a private IP address, for example 192.168.x.x . This prevents extra bandwidth charges from GoGrid and similar providers. global : Picks a public IP address. This should be avoided for replication traffic. non-loopback : Uses the first address found on an active interface that is not a 127.x.x.x address. 26.1.2. Binding Sockets Socket bindings provide a named combination of interface and port. Sockets can be bound to the interface either individually or using a socket binding group. 26.1.2.1. Binding a Single Socket Example The following example shows the use of a JGroups interface socket binding to bind an individual socket using the socket-binding element. Example 26.1. Socket Binding 26.1.2.2. Binding a Group of Sockets Example The following example shows the use of JGroups interface socket bindings to bind a group, using the socket-binding-group element: Example 26.2. Bind a Group The two sample socket bindings in the example are bound to the same default-interface ( global ), therefore the interface attribute does not need to be specified. 26.1.3. Configure JGroups Socket Binding Each JGroups stack, configured in the JGroups subsystem, uses a specific socket binding. Set up the socket binding as follows: Example 26.3. JGroups UDP Socket Binding Configuration The following example uses UDP to automatically detect additional nodes on the network: Example 26.4. JGroups TCP Socket Binding Configuration The following example uses TCP to establish direct communication between two cluster nodes. In the example below, node1 is located at 192.168.1.2:7600 and node2 is located at 192.168.1.3:7600. The port in use is defined by the jgroups-tcp socket binding. The decision between UDP and TCP must be made in each environment. By default JGroups uses UDP, as it allows for dynamic detection of clustered members and scales better in larger clusters due to a smaller network footprint. In addition, when using UDP only one packet per cluster is required, as multicast packets are received by all subscribers to the multicast address; however, in environments where multicast traffic is prohibited, or if UDP traffic cannot reach the remote cluster nodes, such as when cluster members are on separate VLANs, TCP traffic can be used to create a cluster. Important When using UDP as the JGroups transport, the socket binding has to specify the regular (unicast) port, multicast address, and multicast port. | [
"<interfaces> <interface name=\"link-local\"> <link-local-address/> </interface> <!-- Additional configuration elements here --> </interfaces>",
"<interfaces> <interface name=\"site-local\"> <site-local-address/> </interface> <!-- Additional configuration elements here --> </interfaces>",
"<interfaces> <interface name=\"global\"> <any-address/> </interface> <!-- Additional configuration elements here --> </interfaces>",
"<interfaces> <interface name=\"non-loopback\"> <not> <loopback /> </not> </interface> </interfaces>",
"<socket-binding name=\"jgroups-udp\" <!-- Additional configuration elements here --> interface=\"site-local\"/>",
"<socket-binding-group name=\"ha-sockets\" default-interface=\"global\"> <!-- Additional configuration elements here --> <socket-binding name=\"jgroups-tcp\" port=\"7600\"/> <socket-binding name=\"jgroups-tcp-fd\" port=\"57600\"/> <!-- Additional configuration elements here --> </socket-binding-group>",
"<subsystem xmlns=\"urn:jboss:domain:jgroups:1.2\" default-stack=\"udp\"> <stack name=\"udp\"> <transport type=\"UDP\" socket-binding=\"jgroups-udp\"> <!-- Additional configuration elements here --> </transport> <!-- rest of protocols --> </stack> </subsystem>",
"<subsystem xmlns=\"urn:infinispan:server:jgroups:6.1\" default-stack=\"tcp\"> <stack name=\"tcp\"> <transport type=\"TCP\" socket-binding=\"jgroups-tcp\"/> <protocol type=\"TCPPING\"> <property name=\"initial_hosts\">192.168.1.2[7600],192.168.1.3[7600]</property> <property name=\"num_initial_members\">2</property> <property name=\"port_range\">0</property> <property name=\"timeout\">2000</property> </protocol> <protocol type=\"MERGE3\"/> <protocol type=\"FD_SOCK\" socket-binding=\"jgroups-tcp-fd\"/> <protocol type=\"FD_ALL\"/> <protocol type=\"VERIFY_SUSPECT\"/> <protocol type=\"pbcast.NAKACK2\"> <property name=\"use_mcast_xmit\">false</property> </protocol> <protocol type=\"UNICAST3\"/> <protocol type=\"pbcast.STABLE\"/> <protocol type=\"pbcast.GMS\"/> <protocol type=\"MFC\"/> <protocol type=\"FRAG2\"/> </stack> </subsystem>"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-Configure_JGroups |
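A minimal sketch of a UDP socket binding that includes the regular (unicast) port, multicast address, and multicast port required by the Important note above. The specific port and multicast values shown are illustrative assumptions, not defaults mandated by JBoss Data Grid:

<socket-binding-group name="ha-sockets" default-interface="global">
    <!-- unicast port plus multicast address/port for the UDP transport -->
    <socket-binding name="jgroups-udp" port="55200" multicast-address="239.255.100.100" multicast-port="45688"/>
    <!-- failure-detection socket used by FD_SOCK -->
    <socket-binding name="jgroups-udp-fd" port="54200"/>
</socket-binding-group>

The jgroups-udp binding is then referenced from the UDP transport in the JGroups subsystem in the same way as in Example 26.3.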
Chapter 1. Red Hat OpenStack Services on OpenShift overview | Chapter 1. Red Hat OpenStack Services on OpenShift overview Red Hat OpenStack Services on OpenShift (RHOSO) provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud on top of Red Hat Enterprise Linux. It is a scalable, fault-tolerant platform for the development of cloud-enabled workloads. The RHOSO control plane is hosted and managed as a workload on a Red Hat OpenShift Container Platform (RHOCP) cluster. The RHOSO data plane consists of external Red Hat Enterprise Linux (RHEL) nodes, managed with Red Hat Ansible Automation Platform, that host RHOSO workloads. The data plane nodes can be Compute nodes, Storage nodes, Networker nodes, or other types of nodes. The RHOSO IaaS cloud is implemented by a collection of interacting services that control its computing, storage, and networking resources. You can manage the cloud with a web-based interface to control, provision, and automate RHOSO resources. Additionally, an extensive API controls the RHOSO infrastructure and this API is also available to end users of the cloud. Note RHOSO only supports RHOCP master and worker nodes with processors based on a 64-bit x86 hardware architecture. 1.1. RHOSO services and Operators The Red Hat OpenStack Services on OpenShift (RHOSO) IaaS services are implemented as a collection of Operators running on a Red Hat OpenShift Container Platform (RHOCP) cluster. These Operators manage the compute, storage, networking, and other services for your RHOSO cloud. Important You use the Red Hat OpenShift Container Platform (RHOCP) OperatorHub to obtain all Operators. The OpenStack Operator ( openstack-operator ) installs all the service Operators detailed in the Services table, and is the interface that you use to manage those Operators. The OpenStack Operator also installs and manages the following Operators: openstack-baremetal-operator Used by the OpenStack Operator during the bare-metal node provisioning process. For more information on the functionality of each service, see the service-specific documentation on the Red Hat OpenStack Services on OpenShift 18.0 documentation portal. Table 1.1. Services Service Operator Default Description Bare Metal Provisioning (ironic) ironic-operator Disabled Supports physical machines for a variety of hardware vendors with hardware-specific drivers. Bare Metal Provisioning integrates with the Compute service to provision physical machines in the same way that virtual machines are provisioned, and provides a solution for the bare-metal-to-trusted-project use case. Block Storage (cinder) cinder-operator Enabled Provides and manages persistent block storage volumes for virtual machine instances. Compute (nova) nova-operator Enabled Provides management of the provisioning of compute resources, such as Virtual Machines, through the libvirt driver or physical servers through the ironic driver. Dashboard (horizon) horizon-operator Disabled Provides a browser-based GUI dashboard for creating and managing cloud resources and user access. The Dashboard service provides Project, Admin, and Settings dashboards by default. You can configure the dashboard to interface with other products such as billing, monitoring, and additional management tools. DNS (designate) designate-operator Enabled Provides DNS-as-a-Service (DNSaaS) that manages DNS records and zones in the cloud. 
You can deploy BIND instances to contain DNS records, or you can integrate the DNS service into an existing BIND infrastructure. Can also be integrated with the RHOSO Networking service (neutron) to automatically create records for virtual machine instances, network ports, and floating IPs. Identity (keystone) keystone-operator Enabled Provides user authentication and authorization to all RHOSO services and for managing users, projects, and roles. Supports multiple authentication mechanisms, including username and password credentials, token-based systems, and AWS-style log-ins. Image (glance) glance-operator Enabled Registry service for storing resources such as virtual machine images and volume snapshots. Cloud users can add new images or take a snapshot of an existing instance for immediate storage. You can use the snapshots for backup or as templates for new instances. Key Management (barbican) barbican-operator Enabled Provides secure storage, provisioning and management of secrets such as passwords, encryption keys, and X.509 Certificates. This includes keying material such as Symmetric Keys, Asymmetric Keys, Certificates, and raw binary data. Load-balancing (octavia) octavia-operator Disabled Provides Load Balancing-as-a-Service (LBaaS) for the cloud that supports multiple provider drivers. The reference provider driver (Amphora provider driver) is an open-source, scalable, and highly available load balancing provider. It accomplishes its delivery of load balancing services by managing a fleet of virtual machines, collectively known as amphorae, which it creates on demand. MariaDB mariadb-operator Enabled Provides methods to deploy and manage MariaDB Galera clusters. Memcached infra-operator Enabled Provides methods for managing infrastructure. Networking (neutron) neutron-operator Enabled Provides Networking-as-a-Service (NaaS) through software-defined networking (SDN) in virtual compute environments. Handles the creation and management of a virtual networking infrastructure in the cloud, which includes networks, subnets, and routers. Object Storage (swift) swift-operator Enabled Provides efficient and durable storage of large amounts of data, including static entities such as videos, images, email messages, files, or instance images. Objects are stored as binaries on the underlying file system with metadata stored in the extended attributes of each file. OVN ovn-operator Enabled Provides methods to deploy and manage OVNs. Orchestration (heat) heat-operator Disabled Template-based orchestration engine that supports automatic creation of resource stacks. Provides templates to create and manage cloud resources such as storage, networking, instances, or applications. You can use the templates to create stacks, which are collections of resources. Placement (placement) placement-operator Enabled Provides methods to install and manage an OpenStack Placement installation. Telemetry (ceilometer, prometheus) telemetry-operator Enabled Provides user-level usage data for RHOSO clouds. You can use the data for customer billing, system monitoring, or alerts. Telemetry can collect data from notifications sent by existing RHOSO components such as Compute usage events, or by polling RHOSO infrastructure resources such as libvirt. RabbitMQ rabbitmq-cluster-operator Enabled Provides methods to deploy and manage RabbitMQ clusters. 
Shared File Systems (manila) manila-operator Disabled Provisions shared file systems that can be used by multiple virtual machine instances, bare-metal nodes, or containers. 1.2. Features of a RHOSO environment The basic architecture of a Red Hat OpenStack Services on OpenShift (RHOSO) environment includes the following features: Container-native application delivery RHOSO is delivered by using a container-native approach that spans the Red Hat OpenShift Container Platform (RHOCP) and RHEL platforms to deliver a container-native RHOSO deployment. RHOCP-hosted services RHOCP hosts infrastructure services and RHOSO controller services by using RHOCP Operators to provide lifecycle management. Ansible-managed RHEL-hosted services RHOSO workloads run on RHEL nodes that are managed by the OpenStack Operator. The OpenStack Operator runs Ansible jobs to configure the RHEL data plane nodes, such as the Compute nodes. RHOCP manages provisioning, DNS, and configuration management. Installer-provisioned infrastructure The RHOSO installer enables installer-provisioned infrastructure that uses RHOSO bare-metal machine management to provision the Compute nodes for the RHOSO cloud. User-provisioned infrastructure If you have your own machine ingest and provisioning workflow, you can use the RHOSO pre-provisioned model to add your pre-provisioned hardware into your RHOSO environment, while receiving the benefits of a container-native workflow. Hosted RHOSO client RHOSO provides a host openstackclient pod that is preconfigured with administrator access to the deployed RHOSO environment. 1.3. RHOSO 18.0 known limitations The following list details the limitations of Red Hat OpenStack Services on OpenShift (RHOSO). Known limitations are features that are not supported in RHOSO. Compute service (nova): Off-path Network Backends are not supported in RHOSO 18.0. For more information, see Integration With Off-path Network Backends . Customizing policies are not supported. If you require custom policies, contact Red Hat for a support exception. The following packages are not supported in RHOSO: nova-serialproxy nova-spicehtml5proxy File injection of personality files to inject user data into virtual machine instances. As a workaround, users can pass data to their instances by using the --user-data option to run a script during instance boot, or set instance metadata by using the --property option when launching an instance. For more information, see Creating a customized instance . Persistent memory for instances (vPMEM). You can create persistent memory namespaces only on Compute nodes that have NVDIMM hardware. Red Hat has removed support for persistent memory from RHOSP 17.0 and later in response to the announcement by the Intel Corporation on July 28, 2022 that they are discontinuing investment in their Intel(R) OptaneTM business. For more information, see Intel(R) OptaneTM Business Update: What Does This Mean for Warranty and Support . QEMU emulation of non-native architectures. LVM is not supported as an image back end. The ploop image format is not supported. NFS versions earlier than 4. Image service (glance): RHOSO supports only one architecture, x86_64. There is no valid use case that requires this to be set for an RHOSO cloud, so all hosts will be x86_64. NFS versions earlier than 4. Block Storage service (cinder): Cinder replication. LVM driver. NFS versions earlier than 4. 
If you require support for any of these features, contact the Red Hat Customer Experience and Engagement team to discuss a support exception, if applicable, or other options. 1.4. Supported topologies for a RHOSO environment Red Hat OpenStack Services on OpenShift (RHOSO) supports a compact control plane topology and a dedicated nodes control plane topology. In a compact topology, the RHOSO control plane and the Red Hat OpenShift Container Platform (RHOCP) control plane share the same physical nodes. In a dedicated nodes topology, the RHOCP control plane runs on one set of physical nodes and the RHOSO control plane runs on another set of physical nodes. 1.4.1. Compact topology The compact RHOSO topology is the default, and consists of the following components: OpenShift compact cluster A Red Hat OpenShift cluster that hosts both the RHOSO and the RHOCP control planes. The RHOSO control plane consists of the OpenStack controller services pods that consist of services such as the Compute service (nova), the Networking service (neutron), and so on. The OpenShift control plane hosts the pods that run the following services required for RHOCP: OpenShift services, Kubernetes services, networking components, Cluster Version Operator, and etcd. For more information, see Introduction to OpenShift Container Platform in the RHOCP Architecture guide RHOSO data plane The RHOSO data plane consists of OpenStack Compute nodes. Nodes dedicated to storage are optional. Figure 1.1. Compact RHOSO topology 1.4.2. Dedicated nodes topology The dedicated nodes RHOSO topology differs from the compact topology in that there is a separate node cluster for the RHOSO control plane and a separate node cluster for the OpenShift control plane. Figure 1.2. Dedicated nodes RHOSO topology | null | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/planning_your_deployment/assembly_red-hat-openstack-services-on-openshift-overview |
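As a rough way to see the Operators and controller services described in this overview on a running deployment, you can list the installed ClusterServiceVersions and the control plane pods. The namespace names used below ( openstack-operators and openstack ) are assumptions and may differ in your environment:

# list the service Operators installed through OperatorHub
oc get csv -n openstack-operators
# list the RHOSO controller service pods hosted on RHOCP
oc get pods -n openstack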
Chapter 15. Destroying a hosted cluster | Chapter 15. Destroying a hosted cluster 15.1. Destroying a hosted cluster on AWS You can destroy a hosted cluster and its managed cluster resource on Amazon Web Services (AWS) by using the command-line interface (CLI). 15.1.1. Destroying a hosted cluster on AWS by using the CLI You can use the command-line interface (CLI) to destroy a hosted cluster on Amazon Web Services (AWS). Procedure Delete the managed cluster resource on multicluster engine Operator by running the following command: USD oc delete managedcluster <hosted_cluster_name> 1 1 Replace <hosted_cluster_name> with the name of your cluster. Delete the hosted cluster and its backend resources by running the following command: USD hcp destroy cluster aws \ --name <hosted_cluster_name> \ 1 --infra-id <infra_id> \ 2 --role-arn <arn_role> \ 3 --sts-creds <path_to_sts_credential_file> \ 4 --base-domain <basedomain> 5 1 Specify the name of your hosted cluster, for instance, example . 2 Specify the infrastructure name for your hosted cluster. 3 Specify the Amazon Resource Name (ARN), for example, arn:aws:iam::820196288204:role/myrole . 4 Specify the path to your AWS Security Token Service (STS) credentials file, for example, /home/user/sts-creds/sts-creds.json . 5 Specify your base domain, for example, example.com . Important If your session token for AWS Security Token Service (STS) is expired, retrieve the STS credentials in a JSON file named sts-creds.json by running the following command: USD aws sts get-session-token --output json > sts-creds.json 15.2. Destroying a hosted cluster on bare metal You can destroy hosted clusters on bare metal by using the command-line interface (CLI) or the multicluster engine Operator web console. 15.2.1. Destroying a hosted cluster on bare metal by using the CLI You can use the hcp command-line interface (CLI) to destroy a hosted cluster on bare metal. Procedure Delete the hosted cluster and its backend resources by running the following command: USD hcp destroy cluster agent --name <hosted_cluster_name> 1 1 Specify the name of your hosted cluster. 15.2.2. Destroying a hosted cluster on bare metal by using the web console You can use the multicluster engine Operator web console to destroy a hosted cluster on bare metal. Procedure In the console, click Infrastructure Clusters . On the Clusters page, select the cluster that you want to destroy. In the Actions menu, select Destroy clusters to remove the cluster. 15.3. Destroying a hosted cluster on OpenShift Virtualization You can destroy a hosted cluster and its managed cluster resource on OpenShift Virtualization by using the command-line interface (CLI). 15.3.1. Destroying a hosted cluster on OpenShift Virtualization by using the CLI You can use the command-line interface (CLI) to destroy a hosted cluster and its managed cluster resource on OpenShift Virtualization. Procedure Delete the managed cluster resource on multicluster engine Operator by running the following command: USD oc delete managedcluster <hosted_cluster_name> Delete the hosted cluster and its backend resources by running the following command: USD hcp destroy cluster kubevirt --name <hosted_cluster_name> 15.4. Destroying a hosted cluster on IBM Z You can destroy a hosted cluster on x86 bare metal with IBM Z compute nodes and its managed cluster resource by using the command-line interface (CLI). 15.4.1. 
Destroying a hosted cluster on x86 bare metal with IBM Z compute nodes To destroy a hosted cluster and its managed cluster on x86 bare metal with IBM Z compute nodes, you can use the command-line interface (CLI). Procedure Scale the NodePool object to 0 nodes by running the following command: USD oc -n <hosted_cluster_namespace> scale nodepool <nodepool_name> \ --replicas 0 After the NodePool object is scaled to 0 , the compute nodes are detached from the hosted cluster. In OpenShift Container Platform version 4.17, this function is applicable only for IBM Z with KVM. For z/VM and LPAR, you must delete the compute nodes manually. If you want to re-attach compute nodes to the cluster, you can scale up the NodePool object with the number of compute nodes that you want. For z/VM and LPAR to reuse the agents, you must re-create them by using the Discovery image. Important If the compute nodes are not detached from the hosted cluster or are stuck in the Notready state, delete the compute nodes manually by running the following command: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig delete \ node <compute_node_name> Verify the status of the compute nodes by entering the following command: USD oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes After the compute nodes are detached from the hosted cluster, the status of the agents is changed to auto-assign . Delete the agents from the cluster by running the following command: USD oc -n <hosted_control_plane_namespace> delete agent <agent_name> Note You can delete the virtual machines that you created as agents after you delete the agents from the cluster. Destroy the hosted cluster by running the following command: USD hcp destroy cluster agent --name <hosted_cluster_name> \ --namespace <hosted_cluster_namespace> 15.5. Destroying a hosted cluster on IBM Power You can destroy a hosted cluster on IBM Power by using the command-line interface (CLI). 15.5.1. Destroying a hosted cluster on IBM Power by using the CLI To destroy a hosted cluster on IBM Power, you can use the hcp command-line interface (CLI). Procedure Delete the hosted cluster by running the following command: USD hcp destroy cluster agent --name=<hosted_cluster_name> \ 1 --namespace=<hosted_cluster_namespace> \ 2 --cluster-grace-period <duration> 3 1 Replace <hosted_cluster_name> with the name of your hosted cluster. 2 Replace <hosted_cluster_namespace> with the name of your hosted cluster namespace. 3 Specifies the duration to destroy the hosted cluster completely, for example, 20m0s . 15.6. Destroying a hosted cluster on non-bare-metal agent machines You can destroy hosted clusters on non-bare-metal agent machines by using the command-line interface (CLI) or the multicluster engine Operator web console. 15.6.1. Destroying a hosted cluster on non-bare-metal agent machines You can use the hcp command-line interface (CLI) to destroy a hosted cluster on non-bare-metal agent machines. Procedure Delete the hosted cluster and its backend resources by running the following command: USD hcp destroy cluster agent --name <hosted_cluster_name> 1 1 Replace <hosted_cluster_name> with the name of your hosted cluster. 15.6.2. Destroying a hosted cluster on non-bare-metal agent machines by using the web console You can use the multicluster engine Operator web console to destroy a hosted cluster on non-bare-metal agent machines. Procedure In the console, click Infrastructure Clusters . On the Clusters page, select the cluster that you want to destroy. 
In the Actions menu, select Destroy clusters to remove the cluster. | [
"oc delete managedcluster <hosted_cluster_name> 1",
"hcp destroy cluster aws --name <hosted_cluster_name> \\ 1 --infra-id <infra_id> \\ 2 --role-arn <arn_role> \\ 3 --sts-creds <path_to_sts_credential_file> \\ 4 --base-domain <basedomain> 5",
"aws sts get-session-token --output json > sts-creds.json",
"hcp destroy cluster agent --name <hosted_cluster_name> 1",
"oc delete managedcluster <hosted_cluster_name>",
"hcp destroy cluster kubevirt --name <hosted_cluster_name>",
"oc -n <hosted_cluster_namespace> scale nodepool <nodepool_name> --replicas 0",
"oc --kubeconfig <hosted_cluster_name>.kubeconfig delete node <compute_node_name>",
"oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes",
"oc -n <hosted_control_plane_namespace> delete agent <agent_name>",
"hcp destroy cluster agent --name <hosted_cluster_name> --namespace <hosted_cluster_namespace>",
"hcp destroy cluster agent --name=<hosted_cluster_name> \\ 1 --namespace=<hosted_cluster_namespace> \\ 2 --cluster-grace-period <duration> 3",
"hcp destroy cluster agent --name <hosted_cluster_name> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/hosted_control_planes/destroying-a-hosted-cluster |
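After destroying a hosted cluster with any of the methods above, you can confirm that it is gone by listing the remaining HostedCluster and ManagedCluster resources. This is a verification sketch; it assumes the HyperShift and multicluster engine CRDs are still installed on the management cluster:

# the destroyed hosted cluster should no longer appear in either list
oc get hostedcluster -A
oc get managedcluster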
35.3. Configuring an iface for iSCSI Offload | 35.3. Configuring an iface for iSCSI Offload By default, iscsiadm will create an iface configuration for each Chelsio, Broadcom, and ServerEngines port. To view available iface configurations, use the same command for doing so in software iSCSI, i.e. iscsiadm -m iface . Before using the iface of a network card for iSCSI offload, first set the IP address ( target_IP ) that the device should use. For ServerEngines devices that use the be2iscsi driver (i.e. ServerEngines iSCSI HBAs), the IP address is configured in the ServerEngines BIOS set up screen. For Chelsio and Broadcom devices, the procedure for configuring the IP address is the same as for any other iface setting. So to configure the IP address of the iface , use: Example 35.5. Set the iface IP address of a Chelsio card For example, to set the iface IP address of a Chelsio card (with iface name cxgb3i.00:07:43:05:97:07 ) to 20.15.0.66 , use: | [
"iscsiadm -m iface -I iface_name -o update -n iface.ipaddress -v target_IP",
"iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07 -o update -n iface.ipaddress -v 20.15.0.66"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/iface-config-iscsi-offload |
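To confirm that the address was stored, display the iface record again. This verification sketch reuses the example iface name from above; the exact output line is an assumption based on the usual iscsiadm iface record format:

iscsiadm -m iface -I cxgb3i.00:07:43:05:97:07

The output should include a line similar to iface.ipaddress = 20.15.0.66 along with the other iface settings.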
Chapter 3. Providing DHCP services | Chapter 3. Providing DHCP services The dynamic host configuration protocol (DHCP) is a network protocol that automatically assigns IP information to clients. You can set up the dhcpd service to provide a DHCP server and DHCP relay in your network. 3.1. The difference between static and dynamic IP addressing Static IP addressing When you assign a static IP address to a device, the address does not change over time unless you change it manually. Use static IP addressing if you want: To ensure network address consistency for servers such as DNS and authentication servers. To use out-of-band management devices that work independently of other network infrastructure. Dynamic IP addressing When you configure a device to use a dynamic IP address, the address can change over time. For this reason, dynamic addresses are typically used for devices that connect to the network occasionally because the IP address can be different after rebooting the host. Dynamic IP addresses are more flexible and easier to set up and administer. The Dynamic Host Configuration Protocol (DHCP) is a traditional method of dynamically assigning network configurations to hosts. Note There is no strict rule defining when to use static or dynamic IP addresses. It depends on the user's needs, preferences, and the network environment. 3.2. DHCP transaction phases DHCP works in four phases: Discovery, Offer, Request, Acknowledgement, also called the DORA process. DHCP uses this process to provide IP addresses to clients. Discovery The DHCP client sends a message to discover the DHCP server in the network. This message is broadcast at the network and data link layers. Offer The DHCP server receives messages from the client and offers an IP address to the DHCP client. This message is unicast at the data link layer but broadcast at the network layer. Request The DHCP client requests the offered IP address from the DHCP server. This message is unicast at the data link layer but broadcast at the network layer. Acknowledgment The DHCP server sends an acknowledgment to the DHCP client. This message is unicast at the data link layer but broadcast at the network layer. It is the final message of the DHCP DORA process. 3.3. The differences when using dhcpd for DHCPv4 and DHCPv6 The dhcpd service supports providing both DHCPv4 and DHCPv6 on one server. However, you need a separate instance of dhcpd with separate configuration files to provide DHCP for each protocol. DHCPv4 Configuration file: /etc/dhcp/dhcpd.conf Systemd service name: dhcpd DHCPv6 Configuration file: /etc/dhcp/dhcpd6.conf Systemd service name: dhcpd6 3.4. The lease database of the dhcpd service A DHCP lease is the period for which the dhcpd service allocates a network address to a client. The dhcpd service stores the DHCP leases in the following databases: For DHCPv4: /var/lib/dhcpd/dhcpd.leases For DHCPv6: /var/lib/dhcpd/dhcpd6.leases Warning Manually updating the database files can corrupt the databases. The lease databases contain information about the allocated leases, such as the IP address assigned to a media access control (MAC) address or the time stamp when the lease expires. Note that all time stamps in the lease databases are in Coordinated Universal Time (UTC).
The dhcpd service recreates the databases periodically: The service renames the existing files: /var/lib/dhcpd/dhcpd.leases to /var/lib/dhcpd/dhcpd.leases~ /var/lib/dhcpd/dhcpd6.leases to /var/lib/dhcpd/dhcpd6.leases~ The service writes all known leases to the newly created /var/lib/dhcpd/dhcpd.leases and /var/lib/dhcpd/dhcpd6.leases files. Additional resources dhcpd.leases(5) man page on your system Restoring a corrupt lease database 3.5. Comparison of DHCPv6 to radvd In an IPv6 network, only router advertisement messages provide information about an IPv6 default gateway. As a consequence, if you want to use DHCPv6 in subnets that require a default gateway setting, you must additionally configure a router advertisement service, such as Router Advertisement Daemon ( radvd ). The radvd service uses flags in router advertisement packets to announce the availability of a DHCPv6 server. The following table compares features of DHCPv6 and radvd : DHCPv6 radvd Provides information about the default gateway no yes Guarantees random addresses to protect privacy yes no Sends further network configuration options yes no Maps media access control (MAC) addresses to IPv6 addresses yes no 3.6. Configuring the radvd service for IPv6 routers The router advertisement daemon ( radvd ) sends router advertisement messages that are required for IPv6 stateless autoconfiguration. This enables users to automatically configure their addresses, settings, routes, and to choose a default router based on these advertisements. Note You can only set /64 prefixes in the radvd service. To use other prefixes, use DHCPv6. Prerequisites You are logged in as the root user. Procedure Install the radvd package: Edit the /etc/radvd.conf file, and add the following configuration: These settings configure radvd to send router advertisement messages on the enp1s0 device for the 2001:db8:0:1::/64 subnet. The AdvManagedFlag on setting defines that the client should receive the IP address from a DHCP server, and the AdvOtherConfigFlag parameter set to on defines that clients should receive non-address information from the DHCP server as well. Optional: Configure that radvd automatically starts when the system boots: Start the radvd service: Verification Display the content of router advertisement packets and the configured values radvd sends: Additional resources radvd.conf(5) man page on your system /usr/share/doc/radvd/radvd.conf.example file Can I use a prefix length other than 64 bits in IPv6 Router Advertisements? (Red Hat Knowledgebase) 3.7. Setting network interfaces for the DHCP servers By default, the dhcpd service processes requests only on network interfaces that have an IP address in the subnet defined in the configuration file of the service. For example, in the following scenario, dhcpd listens only on the enp0s1 network interface: You have only a subnet definition for the 192.0.2.0/24 network in the /etc/dhcp/dhcpd.conf file. The enp0s1 network interface is connected to the 192.0.2.0/24 subnet. The enp7s0 interface is connected to a different subnet. Only follow this procedure if the DHCP server contains multiple network interfaces connected to the same network but the service should listen only on specific interfaces. Depending on whether you want to provide DHCP for IPv4, IPv6, or both protocols, see the procedure for: IPv4 networks IPv6 networks Prerequisites You are logged in as the root user. The dhcp-server package is installed.
Procedure For IPv4 networks: Copy the /usr/lib/systemd/system/dhcpd.service file to the /etc/systemd/system/ directory: Do not edit the /usr/lib/systemd/system/dhcpd.service file. Future updates of the dhcp-server package can override the changes. Edit the /etc/systemd/system/dhcpd.service file, and append the names of the interfaces that dhcpd should listen on to the command in the ExecStart parameter: This example configures that dhcpd listens only on the enp0s1 and enp7s0 interfaces. Reload the systemd manager configuration: Restart the dhcpd service: For IPv6 networks: Copy the /usr/lib/systemd/system/dhcpd6.service file to the /etc/systemd/system/ directory: Do not edit the /usr/lib/systemd/system/dhcpd6.service file. Future updates of the dhcp-server package can override the changes. Edit the /etc/systemd/system/dhcpd6.service file, and append the names of the interfaces that dhcpd should listen on to the command in the ExecStart parameter: This example configures that dhcpd listens only on the enp0s1 and enp7s0 interfaces. Reload the systemd manager configuration: Restart the dhcpd6 service: 3.8. Setting up the DHCP service for subnets directly connected to the DHCP server Use the following procedure if the DHCP server is directly connected to the subnet for which the server should answer DHCP requests. This is the case if a network interface of the server has an IP address of this subnet assigned. Depending on whether you want to provide DHCP for IPv4, IPv6, or both protocols, see the procedure for: IPv4 networks IPv6 networks Prerequisites You are logged in as the root user. The dhcp-server package is installed. Procedure For IPv4 networks: Edit the /etc/dhcp/dhcpd.conf file: Optional: Add global parameters that dhcpd uses as defaults if no other directives contain these settings: This example sets the default domain name for the connection to example.com , and the default lease time to 86400 seconds (1 day). Add the authoritative statement on a new line: Important Without the authoritative statement, the dhcpd service does not answer DHCPREQUEST messages with DHCPNAK if a client asks for an address that is outside of the pool. For each IPv4 subnet directly connected to an interface of the server, add a subnet declaration: This example adds a subnet declaration for the 192.0.2.0/24 network. With this configuration, the DHCP server assigns the following settings to a client that sends a DHCP request from this subnet: A free IPv4 address from the range defined in the range parameter IP of the DNS server for this subnet: 192.0.2.1 Default gateway for this subnet: 192.0.2.1 Broadcast address for this subnet: 192.0.2.255 The maximum lease time, after which clients in this subnet release the IP and send a new request to the server: 172800 seconds (2 days) Optional: Configure that dhcpd starts automatically when the system boots: Start the dhcpd service: For IPv6 networks: Edit the /etc/dhcp/dhcpd6.conf file: Optional: Add global parameters that dhcpd uses as defaults if no other directives contain these settings: This example sets the default domain name for the connection to example.com , and the default lease time to 86400 seconds (1 day). Add the authoritative statement on a new line: Important Without the authoritative statement, the dhcpd service does not answer DHCPREQUEST messages with DHCPNAK if a client asks for an address that is outside of the pool.
For each IPv6 subnet directly connected to an interface of the server, add a subnet declaration: This example adds a subnet declaration for the 2001:db8:0:1::/64 network. With this configuration, the DHCP server assigns the following settings to a client that sends a DHCP request from this subnet: A free IPv6 address from the range defined in the range6 parameter. The IP of the DNS server for this subnet is 2001:db8:0:1::1 . The maximum lease time, after which clients in this subnet release the IP and send a new request to the server, is 172800 seconds (2 days). Note that IPv6 requires the use of router advertisement messages to identify the default gateway. Optional: Configure that dhcpd6 starts automatically when the system boots: Start the dhcpd6 service: Additional resources dhcp-options(5) and dhcpd.conf(5) man pages on your system /usr/share/doc/dhcp-server/dhcpd.conf.example file /usr/share/doc/dhcp-server/dhcpd6.conf.example file 3.9. Setting up the DHCP service for subnets that are not directly connected to the DHCP server Use the following procedure if the DHCP server is not directly connected to the subnet for which the server should answer DHCP requests. This is the case if a DHCP relay agent forwards requests to the DHCP server, because none of the DHCP server's interfaces is directly connected to the subnet the server should serve. Depending on whether you want to provide DHCP for IPv4, IPv6, or both protocols, see the procedure for: IPv4 networks IPv6 networks Prerequisites You are logged in as the root user. The dhcp-server package is installed. Procedure For IPv4 networks: Edit the /etc/dhcp/dhcpd.conf file: Optional: Add global parameters that dhcpd uses as defaults if no other directives contain these settings: This example sets the default domain name for the connection to example.com , and the default lease time to 86400 seconds (1 day). Add the authoritative statement on a new line: Important Without the authoritative statement, the dhcpd service does not answer DHCPREQUEST messages with DHCPNAK if a client asks for an address that is outside of the pool. Add a shared-network declaration, such as the following, for IPv4 subnets that are not directly connected to an interface of the server: This example adds a shared network declaration that contains a subnet declaration for both the 192.0.2.0/24 and 198.51.100.0/24 networks. With this configuration, the DHCP server assigns the following settings to a client that sends a DHCP request from one of these subnets: The IP of the DNS server for clients from both subnets is 192.0.2.1 . A free IPv4 address from the range defined in the range parameter, depending on from which subnet the client sent the request. The default gateway is either 192.0.2.1 or 198.51.100.1 depending on from which subnet the client sent the request. Add a subnet declaration for the subnet the server is directly connected to and that is used to reach the remote subnets specified in shared-network above: Note If the server does not provide DHCP service to this subnet, the subnet declaration must be empty as shown in the example. Without a declaration for the directly connected subnet, dhcpd does not start.
Optional: Configure that dhcpd starts automatically when the system boots: Start the dhcpd service: For IPv6 networks: Edit the /etc/dhcp/dhcpd6.conf file: Optional: Add global parameters that dhcpd uses as defaults if no other directives contain these settings: This example sets the default domain name for the connection to example.com , and the default lease time to 86400 seconds (1 day). Add the authoritative statement on a new line: Important Without the authoritative statement, the dhcpd service does not answer DHCPREQUEST messages with DHCPNAK if a client asks for an address that is outside of the pool. Add a shared-network declaration, such as the following, for IPv6 subnets that are not directly connected to an interface of the server: This example adds a shared network declaration that contains a subnet6 declaration for both the 2001:db8:0:1::1:0/120 and 2001:db8:0:1::2:0/120 networks. With this configuration, the DHCP server assigns the following settings to a client that sends a DHCP request from one of these subnets: The IP of the DNS server for clients from both subnets is 2001:db8:0:1::1:1 . A free IPv6 address from the range defined in the range6 parameter, depending on from which subnet the client sent the request. Note that IPv6 requires the use of router advertisement messages to identify the default gateway. Add a subnet6 declaration for the subnet the server is directly connected to and that is used to reach the remote subnets specified in shared-network above: Note If the server does not provide DHCP service to this subnet, the subnet6 declaration must be empty as shown in the example. Without a declaration for the directly connected subnet, dhcpd does not start. Optional: Configure that dhcpd6 starts automatically when the system boots: Start the dhcpd6 service: Additional resources dhcp-options(5) and dhcpd.conf(5) man pages on your system /usr/share/doc/dhcp-server/dhcpd.conf.example file /usr/share/doc/dhcp-server/dhcpd6.conf.example file Setting up a DHCP relay agent 3.10. Assigning a static address to a host using DHCP Using a host declaration, you can configure the DHCP server to assign a fixed IP address to a media access control (MAC) address of a host. For example, use this method to always assign the same IP address to a server or network device. Depending on whether you want to configure fixed addresses for IPv4, IPv6, or both protocols, see the procedure for: IPv4 networks IPv6 networks Prerequisites The dhcpd service is configured and running. You are logged in as the root user. Procedure For IPv4 networks: Edit the /etc/dhcp/dhcpd.conf file: Add a host declaration: This example configures the DHCP server to always assign the 192.0.2.130 IP address to the host with the 52:54:00:72:2f:6e MAC address. The dhcpd service identifies systems by the MAC address specified in the fixed-address parameter, and not by the name in the host declaration. As a consequence, you can set this name to any string that does not match other host declarations. To configure the same system for multiple networks, use a different name; otherwise, dhcpd fails to start. Optional: Add further settings to the host declaration that are specific for this host. Restart the dhcpd service: For IPv6 networks: Edit the /etc/dhcp/dhcpd6.conf file: Add a host declaration: This example configures the DHCP server to always assign the 2001:db8:0:1::200 IP address to the host with the 52:54:00:72:2f:6e MAC address.
The dhcpd service identifies systems by the MAC address specified in the fixed-address6 parameter, and not by the name in the host declaration. As a consequence, you can set this name to any string that does not match other host declarations. To configure the same system for multiple networks, use a different name; otherwise, dhcpd fails to start. Optional: Add further settings to the host declaration that are specific for this host. Restart the dhcpd6 service: Additional resources dhcp-options(5) man page on your system /usr/share/doc/dhcp-server/dhcpd.conf.example file /usr/share/doc/dhcp-server/dhcpd6.conf.example file 3.11. Using a group declaration to apply parameters to multiple hosts, subnets, and shared networks at the same time Using a group declaration, you can apply the same parameters to multiple hosts, subnets, and shared networks. Note that the procedure describes using a group declaration for hosts, but the steps are the same for subnets and shared networks. Depending on whether you want to configure a group for IPv4, IPv6, or both protocols, see the procedure for: IPv4 networks IPv6 networks Prerequisites The dhcpd service is configured and running. You are logged in as the root user. Procedure For IPv4 networks: Edit the /etc/dhcp/dhcpd.conf file: Add a group declaration: This group definition groups two host entries. The dhcpd service applies the value set in the option domain-name-servers parameter to both hosts in the group. Optional: Add further settings to the group declaration that are specific for these hosts. Restart the dhcpd service: For IPv6 networks: Edit the /etc/dhcp/dhcpd6.conf file: Add a group declaration: This group definition groups two host entries. The dhcpd service applies the value set in the option dhcp6.domain-search parameter to both hosts in the group. Optional: Add further settings to the group declaration that are specific for these hosts. Restart the dhcpd6 service: Additional resources dhcp-options(5) man page on your system /usr/share/doc/dhcp-server/dhcpd.conf.example file /usr/share/doc/dhcp-server/dhcpd6.conf.example file 3.12. Restoring a corrupt lease database If the DHCP server logs an error that is related to the lease database, such as Corrupt lease file - possible data loss!, you can restore the lease database from the copy the dhcpd service created. Note that this copy might not reflect the latest status of the database. Warning If you remove the lease database instead of replacing it with a backup, you lose all information about the currently assigned leases. As a consequence, the DHCP server could assign leases to clients that have been previously assigned to other hosts and are not expired yet. This leads to IP conflicts. Depending on whether you want to restore the DHCPv4, DHCPv6, or both databases, see the procedure for: Restoring the DHCPv4 lease database Restoring the DHCPv6 lease database Prerequisites You are logged in as the root user. The lease database is corrupt. Procedure Restoring the DHCPv4 lease database: Stop the dhcpd service: Rename the corrupt lease database: Restore the copy of the lease database that the dhcp service created when it refreshed the lease database: Important If you have a more recent backup of the lease database, restore this backup instead.
Start the dhcpd service: Restoring the DHCPv6 lease database: Stop the dhcpd6 service: Rename the corrupt lease database: Restore the copy of the lease database that the dhcp service created when it refreshed the lease database: Important If you have a more recent backup of the lease database, restore this backup instead. Start the dhcpd6 service: Additional resources The lease database of the dhcpd service 3.13. Setting up a DHCP relay agent The DHCP Relay Agent ( dhcrelay ) enables the relay of DHCP and BOOTP requests from a subnet with no DHCP server on it to one or more DHCP servers on other subnets. When a DHCP client requests information, the DHCP Relay Agent forwards the request to the list of DHCP servers specified. When a DHCP server returns a reply, the DHCP Relay Agent forwards this request to the client. Depending on whether you want to set up a DHCP relay for IPv4, IPv6, or both protocols, see the procedure for: IPv4 networks IPv6 networks Prerequisites You are logged in as the root user. Procedure For IPv4 networks: Install the dhcp-relay package: Copy the /lib/systemd/system/dhcrelay.service file to the /etc/systemd/system/ directory: Do not edit the /usr/lib/systemd/system/dhcrelay.service file. Future updates of the dhcp-relay package can override the changes. Edit the /etc/systemd/system/dhcrelay.service file, and append the -i interface parameter, together with a list of IP addresses of DHCPv4 servers that are responsible for the subnet: With these additional parameters, dhcrelay listens for DHCPv4 requests on the enp1s0 interface and forwards them to the DHCP server with the IP 192.0.2.1 . Reload the systemd manager configuration: Optional: Configure that the dhcrelay service starts when the system boots: Start the dhcrelay service: For IPv6 networks: Install the dhcp-relay package: Copy the /lib/systemd/system/dhcrelay.service file to the /etc/systemd/system/ directory and name the file dhcrelay6.service : Do not edit the /usr/lib/systemd/system/dhcrelay.service file. Future updates of the dhcp-relay package can override the changes. Edit the /etc/systemd/system/dhcrelay6.service file, and append the -l receiving_interface and -u outgoing_interface parameters: With these additional parameters, dhcrelay listens for DHCPv6 requests on the enp1s0 interface and forwards them to the network connected to the enp7s0 interface. Reload the systemd manager configuration: Optional: Configure that the dhcrelay6 service starts when the system boots: Start the dhcrelay6 service: Additional resources dhcrelay(8) man page on your system | [
"dnf install radvd",
"interface enp1s0 { AdvSendAdvert on; AdvManagedFlag on; AdvOtherConfigFlag on; prefix 2001:db8:0:1::/64 { }; };",
"systemctl enable radvd",
"systemctl start radvd",
"radvdump",
"cp /usr/lib/systemd/system/dhcpd.service /etc/systemd/system/",
"ExecStart=/usr/sbin/dhcpd -f -cf /etc/dhcp/dhcpd.conf -user dhcpd -group dhcpd --no-pid USDDHCPDARGS enp0s1 enp7s0",
"systemctl daemon-reload",
"systemctl restart dhcpd.service",
"cp /usr/lib/systemd/system/dhcpd6.service /etc/systemd/system/",
"ExecStart=/usr/sbin/dhcpd -f -6 -cf /etc/dhcp/dhcpd6.conf -user dhcpd -group dhcpd --no-pid USDDHCPDARGS enp0s1 enp7s0",
"systemctl daemon-reload",
"systemctl restart dhcpd6.service",
"option domain-name \"example.com\"; default-lease-time 86400;",
"authoritative;",
"subnet 192.0.2.0 netmask 255.255.255.0 { range 192.0.2.20 192.0.2.100; option domain-name-servers 192.0.2.1; option routers 192.0.2.1; option broadcast-address 192.0.2.255; max-lease-time 172800; }",
"systemctl enable dhcpd",
"systemctl start dhcpd",
"option dhcp6.domain-search \"example.com\"; default-lease-time 86400;",
"authoritative;",
"subnet6 2001:db8:0:1::/64 { range6 2001:db8:0:1::20 2001:db8:0:1::100; option dhcp6.name-servers 2001:db8:0:1::1; max-lease-time 172800; }",
"systemctl enable dhcpd6",
"systemctl start dhcpd6",
"option domain-name \"example.com\"; default-lease-time 86400;",
"authoritative;",
"shared-network example { option domain-name-servers 192.0.2.1; subnet 192.0.2.0 netmask 255.255.255.0 { range 192.0.2.20 192.0.2.100; option routers 192.0.2.1; } subnet 198.51.100.0 netmask 255.255.255.0 { range 198.51.100.20 198.51.100.100; option routers 198.51.100.1; } }",
"subnet 203.0.113.0 netmask 255.255.255.0 { }",
"systemctl enable dhcpd",
"systemctl start dhcpd",
"option dhcp6.domain-search \"example.com\"; default-lease-time 86400;",
"authoritative;",
"shared-network example { option domain-name-servers 2001:db8:0:1::1:1 subnet6 2001:db8:0:1::1:0/120 { range6 2001:db8:0:1::1:20 2001:db8:0:1::1:100 } subnet6 2001:db8:0:1::2:0/120 { range6 2001:db8:0:1::2:20 2001:db8:0:1::2:100 } }",
"subnet6 2001:db8:0:1::50:0/120 { }",
"systemctl enable dhcpd6",
"systemctl start dhcpd6",
"host server.example.com { hardware ethernet 52:54:00:72:2f:6e; fixed-address 192.0.2.130; }",
"systemctl start dhcpd",
"host server.example.com { hardware ethernet 52:54:00:72:2f:6e; fixed-address6 2001:db8:0:1::200; }",
"systemctl start dhcpd6",
"group { option domain-name-servers 192.0.2.1; host server1.example.com { hardware ethernet 52:54:00:72:2f:6e; fixed-address 192.0.2.130; } host server2.example.com { hardware ethernet 52:54:00:1b:f3:cf; fixed-address 192.0.2.140; } }",
"systemctl start dhcpd",
"group { option dhcp6.domain-search \"example.com\"; host server1.example.com { hardware ethernet 52:54:00:72:2f:6e; fixed-address 2001:db8:0:1::200; } host server2.example.com { hardware ethernet 52:54:00:1b:f3:cf; fixed-address 2001:db8:0:1::ba3; } }",
"systemctl start dhcpd6",
"systemctl stop dhcpd",
"mv /var/lib/dhcpd/dhcpd.leases /var/lib/dhcpd/dhcpd.leases.corrupt",
"cp -p /var/lib/dhcpd/dhcpd.leases~ /var/lib/dhcpd/dhcpd.leases",
"systemctl start dhcpd",
"systemctl stop dhcpd6",
"mv /var/lib/dhcpd/dhcpd6.leases /var/lib/dhcpd/dhcpd6.leases.corrupt",
"cp -p /var/lib/dhcpd/dhcpd6.leases~ /var/lib/dhcpd/dhcpd6.leases",
"systemctl start dhcpd6",
"dnf install dhcp-relay",
"cp /lib/systemd/system/dhcrelay.service /etc/systemd/system/",
"ExecStart=/usr/sbin/dhcrelay -d --no-pid -i enp1s0 192.0.2.1",
"systemctl daemon-reload",
"systemctl enable dhcrelay.service",
"systemctl start dhcrelay.service",
"dnf install dhcp-relay",
"cp /lib/systemd/system/dhcrelay.service /etc/systemd/system/dhcrelay6.service",
"ExecStart=/usr/sbin/dhcrelay -d --no-pid -l enp1s0 -u enp7s0",
"systemctl daemon-reload",
"systemctl enable dhcrelay6.service",
"systemctl start dhcrelay6.service"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_networking_infrastructure_services/providing-dhcp-services_networking-infrastructure-services |
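A brief verification sketch for the dhcrelay setup above. The service names match the ones created in this procedure, and the ports are the standard DHCP relay ports (UDP 67 for DHCPv4, UDP 547 for DHCPv6); adjust the names if your setup differs.
# Confirm that the relay services are active:
systemctl status dhcrelay.service dhcrelay6.service
# Confirm that dhcrelay is bound to the expected UDP ports:
ss -ulpn | grep dhcrelay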
17.2.2. Option Fields | 17.2.2. Option Fields In addition to basic rules allowing and denying access, the Red Hat Enterprise Linux implementation of TCP wrappers supports extensions to the access control language through option fields. By using option fields within hosts access rules, administrators can accomplish a variety of tasks such as altering log behavior, consolidating access control, and launching shell commands. 17.2.2.1. Logging Option fields let administrators easily change the log facility and priority level for a rule by using the severity directive. In the following example, connections to the SSH daemon from any host in the example.com domain are logged to the default authpriv syslog facility (because no facility value is specified) with a priority of emerg : It is also possible to specify a facility using the severity option. The following example logs any SSH connection attempts by hosts from the example.com domain to the local0 facility with a priority of alert : Note In practice, this example does not work until the syslog daemon ( syslogd ) is configured to log to the local0 facility. Refer to the syslog.conf man page for information about configuring custom log facilities. | [
"sshd : .example.com : severity emerg",
"sshd : .example.com : severity local0.alert"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-tcpwrappers-access-rules-options |
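A minimal sketch of the syslogd configuration that the Note above refers to; the log file path is an arbitrary example rather than anything mandated by TCP wrappers.
# /etc/syslog.conf: write messages logged to the local0 facility to a dedicated file
local0.*    /var/log/tcpwrappers.log
After adding this line, restart the syslog service (for example, with service syslog restart ) so that messages sent to the local0 facility are actually collected.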
2.7.2. VPN Configurations Using Libreswan Libreswan does not use the terms " source " or " destination " . Instead, it uses the terms " left " and " right " to refer to end points (the hosts). This allows the same configuration to be used on both end points in most cases, although most administrators use " left " for the local host and " right " for the remote host. There are three commonly used methods for authentication of endpoints: Raw RSA keys are commonly used for static host-to-host or subnet-to-subnet IPsec configurations. The hosts are manually configured with each other's public RSA key. This method does not scale well when dozens or more hosts all need to set up IPsec tunnels to each other. X.509 certificates are commonly used for large-scale deployments where there are many hosts that need to connect to a common IPsec gateway. A central certificate authority ( CA ) is used to sign RSA certificates for hosts or users. This central CA is responsible for relaying trust, including the revocation of individual hosts or users. Pre-Shared Keys ( PSK ) are the simplest authentication method. PSKs should consist of random characters and have a length of at least 20 characters. Due to the dangers of non-random and short PSKs, this is the least secure form of authentication and it is recommended to use either raw RSA keys or certificate-based authentication instead. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/vpn_configurations_using_libreswan |
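A minimal host-to-host sketch using raw RSA keys and the left/right convention described above; the connection name, IP addresses, and truncated key values are placeholders, not values taken from this guide.
# /etc/ipsec.d/host-to-host.conf: the same file can be installed on both endpoints
conn mytunnel
    left=192.0.2.1
    leftrsasigkey=0sAQO...left-host-public-key...
    right=198.51.100.1
    rightrsasigkey=0sAQO...right-host-public-key...
    # raw RSA key authentication, as described for static host-to-host setups
    authby=rsasig
    auto=start
The RSA keys themselves are typically generated with ipsec newhostkey and printed in the required left or right form with ipsec showhostkey --left or ipsec showhostkey --right.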
Chapter 7. Supported integration products | Chapter 7. Supported integration products AMQ Streams 1.6 supports integration with the following Red Hat products. Red Hat Single Sign-On 7.4 and later for OAuth 2.0 authentication and OAuth 2.0 authorization Red Hat Debezium 1.0 and later for monitoring databases and creating event streams For information on the functionality these products can introduce to your AMQ Streams deployment, refer to the AMQ Streams 1.6 documentation. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_amq_streams_1.6_on_rhel/supported-config-str |
4.18. Logging The dateext option is now enabled by default in /etc/logrotate.conf . This option archives old versions of log files by adding an extension representing the rotation date (in YYYYMMDD format). Previously, an incrementing number was appended to rotated files. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-migration_guide-networking-logging |
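A small illustration of the behavior described above; the rotation frequency and count are arbitrary example values.
# /etc/logrotate.conf (excerpt)
weekly
rotate 4
# archive rotated logs with a date suffix instead of a number
dateext
With dateext enabled, a rotated copy of /var/log/messages is named, for example, messages-20110714 ; without it, the same file would have been named messages.1 .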
Chapter 60. project | Chapter 60. project This chapter describes the commands under the project command. 60.1. project cleanup Clean resources associated with a project Usage: Table 60.1. Command arguments Value Summary -h, --help Show this help message and exit --dry-run List a project's resources --auth-project Delete resources of the project used to authenticate --project <project> Project to clean (name or id) --created-before <YYYY-MM-DDTHH24:MI:SS> Drop resources created before the given time --updated-before <YYYY-MM-DDTHH24:MI:SS> Drop resources updated before the given time --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 60.2. project create Create new project Usage: Table 60.2. Positional arguments Value Summary <project-name> New project name Table 60.3. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning the project (name or id) --parent <project> Parent of the project (name or id) --description <description> Project description --enable Enable project --disable Disable project --property <key=value> Add a property to <name> (repeat option to set multiple properties) --or-show Return existing project --immutable Make resource immutable. an immutable project may not be deleted or modified except to remove the immutable flag --no-immutable Make resource mutable (default) --tag <tag> Tag to be added to the project (repeat option to set multiple tags) Table 60.4. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 60.5. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 60.6. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 60.7. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 60.3. project delete Delete project(s) Usage: Table 60.8. Positional arguments Value Summary <project> Project(s) to delete (name or id) Table 60.9. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning <project> (name or id) 60.4. project list List projects Usage: Table 60.10. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Filter projects by <domain> (name or id) --parent <parent> Filter projects whose parent is <parent> (name or id) --user <user> Filter projects by <user> (name or id) --my-projects List projects for the authenticated user. supersedes other filters. --long List additional fields in output --sort <key>[:<direction>] Sort output by selected keys and directions (asc or desc) (default: asc), repeat this option to specify multiple keys and directions. --tags <tag>[,<tag>,... ] List projects which have all given tag(s) (comma- separated list of tags) --tags-any <tag>[,<tag>,... ] List projects which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... 
] Exclude projects which have all given tag(s) (comma- separated list of tags) --not-tags-any <tag>[,<tag>,... ] Exclude projects which have any given tag(s) (comma- separated list of tags) Table 60.11. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 60.12. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 60.13. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 60.14. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 60.5. project purge Clean resources associated with a project Usage: Table 60.15. Command arguments Value Summary -h, --help Show this help message and exit --dry-run List a project's resources --keep-project Clean project resources, but don't delete the project --auth-project Delete resources of the project used to authenticate --project <project> Project to clean (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. 60.6. project set Set project properties Usage: Table 60.16. Positional arguments Value Summary <project> Project to modify (name or id) Table 60.17. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set project name --domain <domain> Domain owning <project> (name or id) --description <description> Set project description --enable Enable project --disable Disable project --property <key=value> Set a property on <project> (repeat option to set multiple properties) --immutable Make resource immutable. an immutable project may not be deleted or modified except to remove the immutable flag --no-immutable Make resource mutable (default) --tag <tag> Tag to be added to the project (repeat option to set multiple tags) --clear-tags Clear tags associated with the project. specify both --tag and --clear-tags to overwrite current tags --remove-tag <tag> Tag to be deleted from the project (repeat option to delete multiple tags) 60.7. project show Display project details Usage: Table 60.18. Positional arguments Value Summary <project> Project to display (name or id) Table 60.19. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning <project> (name or id) --parents Show the project's parents as a list --children Show project's subtree (children) as a list Table 60.20. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 60.21. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 60.22. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 60.23. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack project cleanup [-h] [--dry-run] (--auth-project | --project <project>) [--created-before <YYYY-MM-DDTHH24:MI:SS>] [--updated-before <YYYY-MM-DDTHH24:MI:SS>] [--project-domain <project-domain>]",
"openstack project create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] [--parent <project>] [--description <description>] [--enable | --disable] [--property <key=value>] [--or-show] [--immutable | --no-immutable] [--tag <tag>] <project-name>",
"openstack project delete [-h] [--domain <domain>] <project> [<project> ...]",
"openstack project list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--domain <domain>] [--parent <parent>] [--user <user>] [--my-projects] [--long] [--sort <key>[:<direction>]] [--tags <tag>[,<tag>,...]] [--tags-any <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-tags-any <tag>[,<tag>,...]]",
"openstack project purge [-h] [--dry-run] [--keep-project] (--auth-project | --project <project>) [--project-domain <project-domain>]",
"openstack project set [-h] [--name <name>] [--domain <domain>] [--description <description>] [--enable | --disable] [--property <key=value>] [--immutable | --no-immutable] [--tag <tag>] [--clear-tags] [--remove-tag <tag>] <project>",
"openstack project show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] [--parents] [--children] <project>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/project |
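A short invocation sketch for the project commands documented above; the project name, domain, description, and tag are example values.
# Create, inspect, modify, and remove an example project:
openstack project create --domain default --description "Demo project" --tag demo demo-project
openstack project list --tags demo --long
openstack project set --description "Demo project (updated)" demo-project
openstack project show --parents demo-project
openstack project delete demo-project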
Part III. Administration: Managing Servers This part covers administration-related topics, such as managing the Identity Management server and services, and replication between servers in an Identity Management domain. It provides details on the Identity Management topology and gives instructions on how to update the Identity Management packages on the system. Furthermore, this part explains how to manually back up and restore the Identity Management system in case of a disaster affecting an Identity Management deployment. The final chapter details the different internal access control mechanisms. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/p.admin-guide-servers |
3.3. Listing Clusters | 3.3. Listing Clusters This Ruby example lists the clusters. # Get the reference to the root of the services tree: system_service = connection.system_service # Get the reference to the service that manages the # collection of clusters: cls_service = system_service.clusters_service # Retrieve the list of clusters and for each one # print its name: cls = cls_service.list cls.each do |cl| puts cl.name end In an environment with only the Default cluster, the example outputs: For more information, see http://www.rubydoc.info/gems/ovirt-engine-sdk/OvirtSDK4/ClustersService:list . | [
"Get the reference to the root of the services tree: system_service = connection.system_service Get the reference to the service that manages the collection of clusters: cls_service = system_service.clusters_service Retrieve the list of clusters and for each one print its name: cls = cls_service.list cls.each do |cl| puts cl.name end",
"Default"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/ruby_sdk_guide/listing_clusters |
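A small variation on the example above that also prints each cluster's ID; the search filter is an assumption based on how other collection services of this SDK behave, so verify it against the linked API reference.
# Reuse the existing connection and fetch the clusters service:
cls_service = connection.system_service.clusters_service
# List clusters, optionally narrowing the result with a search expression
# (assumed to be supported here as in other collection services):
cls = cls_service.list(search: 'name=Default')
cls.each do |cl|
  puts "#{cl.name} (#{cl.id})"
end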
Chapter 1. Ceph dashboard overview | Chapter 1. Ceph dashboard overview As a storage administrator, the Red Hat Ceph Storage Dashboard provides management and monitoring capabilities, allowing you to administer and configure the cluster, as well as visualize information and performance statistics related to it. The dashboard uses a web server hosted by the ceph-mgr daemon. The dashboard is accessible from a web browser and includes many useful management and monitoring features, for example, to configure manager modules and monitor the state of OSDs. 1.1. Prerequisites System administrator level experience. 1.2. Ceph Dashboard components The functionality of the dashboard is provided by multiple components. The Cephadm application for deployment. The embedded dashboard ceph-mgr module. The embedded Prometheus ceph-mgr module. The Prometheus time-series database. The Prometheus node-exporter daemon, running on each host of the storage cluster. The Grafana platform to provide monitoring user interface and alerting. Additional Resources For more information, see the Prometheus website . For more information, see the Grafana website . 1.3. Ceph Dashboard features The Ceph dashboard provides the following features: Multi-user and role management : The dashboard supports multiple user accounts with different permissions and roles. User accounts and roles can be managed using both, the command line and the web user interface. The dashboard supports various methods to enhance password security. Password complexity rules may be configured, requiring users to change their password after the first login or after a configurable time period. Single Sign-On (SSO) : The dashboard supports authentication with an external identity provider using the SAML 2.0 protocol. Auditing : The dashboard backend can be configured to log all PUT, POST and DELETE API requests in the Ceph manager log. Management features View cluster hierarchy : You can view the CRUSH map, for example, to determine which host a specific OSD ID is running on. This is helpful if there is an issue with an OSD. Configure manager modules : You can view and change parameters for Ceph manager modules. Embedded Grafana Dashboards : Ceph Dashboard Grafana dashboards might be embedded in external applications and web pages to surface information and performance metrics gathered by the Prometheus module. View and filter logs : You can view event and audit cluster logs and filter them based on priority, keyword, date, or time range. Toggle dashboard components : You can enable and disable dashboard components so only the features you need are available. Manage OSD settings : You can set cluster-wide OSD flags using the dashboard. You can also Mark OSDs up, down or out, purge and reweight OSDs, perform scrub operations, modify various scrub-related configuration options, select profiles to adjust the level of backfilling activity. You can set and change the device class of an OSD, display and sort OSDs by device class. You can deploy OSDs on new drives and hosts. iSCSI management : Create, modify, and delete iSCSI targets. Viewing Alerts : The alerts page allows you to see details of current alerts. Quality of Service for images : You can set performance limits on images, for example limiting IOPS or read BPS burst rates. Monitoring features Username and password protection : You can access the dashboard only by providing a configurable user name and password. Overall cluster health : Displays performance and capacity metrics. 
This also displays the overall cluster status, storage utilization, for example, number of objects, raw capacity, usage per pool, a list of pools and their status and usage statistics. Hosts : Provides a list of all hosts associated with the cluster along with the running services and the installed Ceph version. Performance counters : Displays detailed statistics for each running service. Monitors : Lists all Monitors, their quorum status, and open sessions. Configuration editor : Displays all the available configuration options, their descriptions, types, default values, and currently set values. These values are editable. Cluster logs : Displays and filters the latest updates to the cluster's event and audit log files by priority, date, or keyword. Device management : Lists all hosts known by the Orchestrator. Lists all drives attached to a host and their properties. Displays drive health predictions and SMART data, and blinks enclosure LEDs. View storage cluster capacity : You can view the raw storage capacity of the Red Hat Ceph Storage cluster in the Capacity panels of the Ceph dashboard. Pools : Lists and manages all Ceph pools and their details, for example: applications, placement groups, replication size, EC profile, quotas, CRUSH ruleset, and so on. OSDs : Lists and manages all OSDs, their status and usage statistics, as well as detailed information such as attributes (like the OSD map), metadata, and performance counters for read and write operations. Lists all drives associated with an OSD. iSCSI : Lists all hosts that run the tcmu-runner service, displays all images and their performance characteristics, such as read and write operations or traffic, and also displays the iSCSI gateway status and information about active initiators. Images : Lists all RBD images and their properties such as size, objects, and features. Create, copy, modify, and delete RBD images. Create, delete, and roll back snapshots of selected images, and protect or unprotect these snapshots against modification. Copy or clone snapshots, and flatten cloned images. Note The performance graph for I/O changes in the Overall Performance tab for a specific image shows values only after specifying the pool that includes that image by setting the rbd_stats_pool parameter in Cluster > Manager modules > Prometheus . RBD Mirroring : Enables and configures RBD mirroring to a remote Ceph server. Lists all active sync daemons and their status, pools, and RBD images, including their synchronization state. Ceph File Systems : Lists all active Ceph file system (CephFS) clients and associated pools, including their usage statistics. Evict active CephFS clients, manage CephFS quotas and snapshots, and browse a CephFS directory structure. Object Gateway (RGW) : Lists all active object gateways and their performance counters. Displays and manages (adds, edits, and deletes) object gateway users and their details, for example quotas, as well as the users' buckets and their details, for example, owner or quotas. NFS : Manages NFS exports of CephFS and Ceph object gateway S3 buckets using NFS Ganesha. Security features SSL and TLS support : All HTTP communication between the web browser and the dashboard is secured via SSL. A self-signed certificate can be created with a built-in command, but it is also possible to import custom certificates signed and issued by a Certificate Authority (CA). Additional Resources See Toggling Ceph dashboard features in the Red Hat Ceph Storage Dashboard Guide for more information. 1.4.
Red Hat Ceph Storage Dashboard architecture The Dashboard architecture depends on the Ceph manager dashboard plugin and other components. See the diagram below to understand how they work together. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/dashboard_guide/ceph-dashboard-overview |
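A command-line sketch of two items mentioned above: the built-in command for creating a self-signed certificate, and the Prometheus module setting behind the per-image performance graphs. The pool name is an example, and the exact option name can vary between releases, so treat this as an assumption to verify.
# Ensure the dashboard manager module is enabled (cephadm usually enables it already):
ceph mgr module enable dashboard
# Generate the built-in self-signed certificate mentioned under "SSL and TLS support":
ceph dashboard create-self-signed-cert
# Collect per-image RBD statistics for a pool so the image performance graphs show data:
ceph config set mgr mgr/prometheus/rbd_stats_pools mypool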
Chapter 1. OperatorHub APIs | Chapter 1. OperatorHub APIs 1.1. CatalogSource [operators.coreos.com/v1alpha1] Description CatalogSource is a repository of CSVs, CRDs, and operator packages. Type object 1.2. ClusterServiceVersion [operators.coreos.com/v1alpha1] Description ClusterServiceVersion is a Custom Resource of type ClusterServiceVersionSpec . Type object 1.3. InstallPlan [operators.coreos.com/v1alpha1] Description InstallPlan defines the installation of a set of operators. Type object 1.4. OLMConfig [operators.coreos.com/v1] Description OLMConfig is a resource responsible for configuring OLM. Type object 1.5. Operator [operators.coreos.com/v1] Description Operator represents a cluster operator. Type object 1.6. OperatorCondition [operators.coreos.com/v2] Description OperatorCondition is a Custom Resource of type OperatorCondition which is used to convey information to OLM about the state of an operator. Type object 1.7. OperatorGroup [operators.coreos.com/v1] Description OperatorGroup is the unit of multitenancy for OLM managed operators. It constrains the installation of operators in its namespace to a specified set of target namespaces. Type object 1.8. PackageManifest [packages.operators.coreos.com/v1] Description PackageManifest holds information about a package, which is a reference to one (or more) channels under a single package. Type object 1.9. Subscription [operators.coreos.com/v1alpha1] Description Subscription keeps operators up to date by tracking changes to Catalogs. Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operatorhub_apis/operatorhub-apis |
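A minimal Subscription manifest as a sketch of how the API types listed above fit together; the operator name, channel, and catalog references are placeholders rather than values from this reference.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: openshift-operators
spec:
  # channel and package name come from the operator's PackageManifest
  channel: stable
  name: example-operator
  # source and sourceNamespace identify the CatalogSource that provides the package
  source: redhat-operators
  sourceNamespace: openshift-marketplace
When the Subscription is created, OLM resolves it to an InstallPlan and installs the corresponding ClusterServiceVersion.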